Merge remote-tracking branch 'origin/3.0' into enh/3.0/TD-32686

@@ -10,6 +10,7 @@ on:
      - 'docs/**'
      - 'packaging/**'
      - 'tests/**'
      - '*.md'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}

@@ -17,8 +18,17 @@ concurrency:

jobs:
  build:
    runs-on: ubuntu-latest
    name: Build and test
    name: Build and test on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os:
          - ubuntu-20.04
          - ubuntu-22.04
          - ubuntu-24.04
          - macos-13
          - macos-14
          - macos-15

    steps:
      - name: Checkout the repository

@@ -29,12 +39,37 @@ jobs:
        with:
          go-version: 1.18

      - name: Install system dependencies
      - name: Install dependencies on Linux
        if: runner.os == 'Linux'
        run: |
          sudo apt update -y
          sudo apt install -y build-essential cmake \
            libgeos-dev libjansson-dev libsnappy-dev liblzma-dev libz-dev \
            zlib1g pkg-config libssl-dev gawk
          sudo apt install -y \
            build-essential \
            cmake \
            gawk \
            libgeos-dev \
            libjansson-dev \
            liblzma-dev \
            libsnappy-dev \
            libssl-dev \
            libz-dev \
            pkg-config \
            zlib1g

      - name: Install dependencies on macOS
        if: runner.os == 'macOS'
        run: |
          brew update
          brew install \
            argp-standalone \
            gawk \
            gflags \
            geos \
            jansson \
            openssl \
            pkg-config \
            snappy \
            zlib

      - name: Build and install TDengine
        run: |

@@ -63,7 +98,7 @@ jobs:
        run: |
          taosBenchmark -t 10 -n 10 -y
          taos -s "select count(*) from test.meters"

      - name: Clean up
        if: always()
        run: |

@@ -1,4 +1,4 @@

name: taosKeeper CI
name: taosKeeper Build

on:
  push:

@@ -8,7 +8,7 @@ on:

jobs:
  build:
    runs-on: ubuntu-latest
    name: Run unit tests
    name: Build and test on ubuntu-latest

    steps:
      - name: Checkout the repository

@@ -75,4 +75,4 @@ available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.ht

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
https://www.contributor-covenant.org/faq

@@ -70,7 +70,7 @@ def check_docs(){
        returnStdout: true
    )

    def file_no_doc_changed = sh (
    file_no_doc_changed = sh (
        script: '''
            cd ${WKC}
            git --no-pager diff --name-only FETCH_HEAD `git merge-base FETCH_HEAD ${CHANGE_TARGET}`|grep -v "^docs/en/"|grep -v "^docs/zh/"|grep -v ".md$" || :

README-CN.md

@@ -10,7 +10,36 @@

Simplified Chinese | [English](README.md) | [TDengine Cloud](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | We are hiring; see the [open positions](https://www.taosdata.com/careers/)

# Introduction to TDengine
# Table of Contents

1. [Introduction](#1-tdengine-简介)
1. [Documentation](#2-文档)
1. [Prerequisites](#3-必备工具)
   - [3.1 Prerequisites on Linux](#31-linux系统)
   - [3.2 Prerequisites on macOS](#32-macos系统)
   - [3.3 Prerequisites on Windows](#33-windows系统)
   - [3.4 Clone the repository](#34-克隆仓库)
1. [Building](#4-构建)
   - [4.1 Build on Linux](#41-linux系统上构建)
   - [4.2 Build on macOS](#42-macos系统上构建)
   - [4.3 Build on Windows](#43-windows系统上构建)
1. [Packaging](#5-打包)
1. [Installation](#6-安装)
   - [6.1 Install on Linux](#61-linux系统上安装)
   - [6.2 Install on macOS](#62-macos系统上安装)
   - [6.3 Install on Windows](#63-windows系统上安装)
1. [Quick Run](#7-快速运行)
   - [7.1 Run on Linux](#71-linux系统上运行)
   - [7.2 Run on macOS](#72-macos系统上运行)
   - [7.3 Run on Windows](#73-windows系统上运行)
1. [Testing](#8-测试)
1. [Releases](#9-版本发布)
1. [Workflows](#10-工作流)
1. [Coverage](#11-覆盖率)
1. [Become a community contributor](#12-成为社区贡献者)

# 1. Introduction

TDengine is an open-source, high-performance, cloud-native time-series database (TSDB). TDengine can be widely used in IoT, the industrial internet, connected vehicles, IT operations, finance, and other fields. Beyond the core time-series database features, TDengine also provides caching, data subscription, stream processing, and other capabilities, making it a simplified time-series data processing platform that minimizes system design complexity and reduces development and operating costs. Compared with other time-series databases, TDengine's main advantages are as follows:

@@ -26,323 +55,345 @@ TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series

- **Open source at the core**: TDengine's core code, including the clustering feature, is fully open source. As of August 1, 2022, there were more than 135.9k running instances worldwide, 18.7k GitHub stars, and 4.4k forks, with an active community.

# Documentation
For a full list of TDengine's advanced features, please [click here](https://tdengine.com/tdengine/). The easiest way to experience TDengine is through [TDengine Cloud](https://cloud.tdengine.com).

For the complete user manual, system architecture, and more details, please refer to the [TDengine documentation](https://docs.taosdata.com) or the [TDengine Documentation](https://docs.tdengine.com).
# 2. Documentation

# Building
For the complete user manual, system architecture, and more details, please refer to [TDengine](https://www.taosdata.com/) or the [official TDengine documentation](https://docs.taosdata.com).

Depending on your needs, you can install TDengine via a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/), or directly use the fully managed [cloud service](https://cloud.taosdata.com/) with no installation or deployment needed. This quick guide is for developers who want to build, package, and test TDengine by themselves.

To build or test TDengine connectors, please visit the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).

# 3. Prerequisites

At the moment, TDengine can be installed and run on Linux, Windows, macOS, and other platforms. Applications on any OS can also use the RESTful interface provided by taosAdapter to connect to the taosd server. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Building with a cross-compiler is not currently supported.

Depending on your needs, you can install TDengine from source, or via a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only covers installing from source.

TDengine also provides a set of companion tools called taosTools, which currently includes taosBenchmark (formerly named taosdemo) and taosdump. By default, building TDengine does not include taosTools; you can pass `cmake .. -DBUILD_TOOLS=true` to build taosTools together with TDengine.
If you want to build taosAdapter or taosKeeper, you need to install Go 1.18 or above.

To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher.
## 3.1 On Linux

## Install build tools
<details>

### Ubuntu 18.04 and above & Debian:
<summary>Install required tools on Linux</summary>

### Ubuntu 18.04, 20.04, 22.04

```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
  libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```

#### Install the packages required to build taos-tools

To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed:
### CentOS 8

```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```

### CentOS 7.9

```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```

### CentOS 8/Fedora/Rocky Linux
</details>

## 3.2 On macOS

<details>

<summary>Install required tools on macOS</summary>

Install the dependency tool [brew](https://brew.sh/) following its prompts.

```bash
sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel
```

#### Install dependencies for building taosTools on CentOS

#### CentOS 7.9

```
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```

#### CentOS 8/Fedora/Rocky Linux

```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```

Note: because snappy lacks pkg-config support (see [this link](https://github.com/google/snappy/pull/86)), cmake will report that libsnappy cannot be found, even though everything actually works.

If enabling powertools fails, you can try the following instead:

```
sudo yum config-manager --set-enabled powertools
```

#### CentOS + devtoolset

In addition to the build dependencies above, run the following commands:

```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```

### macOS

```
brew install argp-standalone gflags pkgconfig
```

### Set up the Go development environment
</details>

TDengine includes several components developed in Go, such as taosAdapter. Please refer to the official golang.org documentation to set up your Go development environment.
## 3.3 On Windows

Please use Go 1.20 or above. For users in China, we recommend using a proxy to speed up package downloads.
<details>

```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
<summary>Install required tools on Windows</summary>

By default taosAdapter is not built, but you can use the following command to choose to build taosAdapter as the RESTful interface service.
Work in progress.

```
cmake .. -DBUILD_HTTP=false
```
</details>

### Set up the Rust development environment
## 3.4 Clone the repository

TDengine includes several components developed in Rust. Please refer to the official rust-lang.org documentation to set up your Rust development environment.

## Get the source code

First, clone the source code from GitHub:
Clone the TDengine repository to the target machine with the following commands:

```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
If downloading over the HTTPS protocol is slow, you can switch to the SSH protocol by adding the following two lines to ~/.gitconfig. You need to upload your SSH key to GitHub first; see the official GitHub documentation for details.

```
[url "git@github.com:"]
    insteadOf = https://github.com/
```
## Special notes

# 4. Building

The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust), and the [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to separate repositories.
TDengine also provides a set of companion tools called taosTools, which currently includes taosBenchmark (formerly named taosdemo) and taosdump. By default, building TDengine does not include taosTools; you can pass `cmake .. -DBUILD_TOOLS=true` to build taosTools together with TDengine.

To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher.

## Build TDengine
## 4.1 Build on Linux

### On Linux
<details>

You can run the `build.sh` script in the repository to build TDengine and taosTools (including taosBenchmark and taosdump).
<summary>Build steps on Linux</summary>

You can build TDengine and taosTools, including taosBenchmark and taosdump, with the `build.sh` script:

```bash
./build.sh
```

This script is equivalent to running the following commands:
You can also build with the following commands:

```bash
mkdir debug
cd debug
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```

You can also choose to use jemalloc as the memory allocator, replacing the default glibc:
If you want to build taosAdapter, you need to add the `-DBUILD_HTTP=false` option.

If you want to build taosKeeper, you need to add the `-DBUILD_KEEPER=true` option.

You can use jemalloc as the memory allocator instead of glibc:

```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
```

On X86-64, X86, and arm64 platforms, the TDengine build script can automatically detect the machine architecture. You can also set the CPUTYPE parameter manually to specify the CPU type, such as aarch64.

aarch64:
The TDengine build script can automatically detect the host architecture on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually via the CPUTYPE option:

```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```

### On Windows
</details>

If you are using Visual Studio 2013:
## 4.2 Build on macOS

Open cmd.exe and, when running vcvarsall.bat, specify "x86_amd64" for 64-bit systems or "x86" for 32-bit systems.
<details>

```bash
<summary>Build steps on macOS</summary>

Please install the Xcode command line tools and cmake. Verified with Xcode 11.4+ on Catalina and Big Sur.

```shell
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < x86_amd64 | x86 >
cmake .. && cmake --build .
```

If you want to build taosAdapter, you need to add the `-DBUILD_HTTP=false` option.

If you want to build taosKeeper, you need to add the `-DBUILD_KEEPER=true` option.

</details>

## 4.3 Build on Windows

<details>

<summary>Build steps on Windows</summary>

If you are using Visual Studio 2013, run "cmd.exe" to open a command window and execute the commands below.
When running vcvarsall.bat, specify "amd64" for 64-bit Windows and "x86" for 32-bit Windows.

```cmd
mkdir debug && cd debug
"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\vcvarsall.bat" < amd64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```

If you are using Visual Studio 2019 or 2017:
If you are using Visual Studio 2019 or 2017:

Open cmd.exe and, when running vcvarsall.bat, specify "x64" for 64-bit systems or "x86" for 32-bit systems.
Run "cmd.exe" to open a command window and execute the commands below.
When running vcvarsall.bat, specify "x64" for 64-bit Windows and "x86" for 32-bit Windows.

```bash
```cmd
mkdir debug && cd debug
"c:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvarsall.bat" < x64 | x86 >
cmake .. -G "NMake Makefiles"
nmake
```

Alternatively, you can open a command window from the Start menu: find the "Visual Studio < 2019 | 2017 >" folder and, depending on your system, choose "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >", then run:
Alternatively, open a command window from the Windows Start menu -> "Visual Studio < 2019 | 2017 >" folder -> "x64 Native Tools Command Prompt for VS < 2019 | 2017 >" or "x86 Native Tools Command Prompt for VS < 2019 | 2017 >" depending on your Windows architecture, then run the following commands:

```bash
```cmd
mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
</details>

### On macOS
# 5. Packaging

Install the Xcode command line tools and cmake. On Catalina and Big Sur, Xcode 11.4+ is required.
Because of some component dependencies, the TDengine community installer cannot currently be created from this repository alone. We are still working on improving this.

# 6. Installation

## 6.1 Install on Linux

<details>

<summary>Detailed installation steps on Linux</summary>

After a successful build, TDengine can be installed with:

```bash
mkdir debug && cd debug
cmake .. && cmake --build .
sudo make install
```
Installing from source also configures service management for TDengine. Users can also install from the [TDengine installation package](https://docs.taosdata.com/get-started/package/).

# Installation
</details>

## On Linux
## 6.2 Install on macOS

After the build is done, install TDengine:
<details>

<summary>Detailed installation steps on macOS</summary>

After a successful build, TDengine can be installed with:

```bash
sudo make install
```

To learn more about the directories and files installed on your system, see [Directory Structure](https://docs.taosdata.com/reference/directory/).
</details>

Installing from source also configures service management for TDengine. Users can also choose to [install from a package](https://docs.taosdata.com/get-started/package/).
## 6.3 Install on Windows

After installation succeeds, start the TDengine service from the terminal:
<details>

```bash
sudo systemctl start taosd
```
<summary>Detailed installation steps on Windows</summary>

You can use the TDengine CLI to connect to the TDengine service. In a terminal, type:

```bash
taos
```

If the TDengine CLI connects to the service successfully, it prints a welcome message and version information; otherwise, it prints an error message.

## On Windows

After the build is done, install TDengine:
After a successful build, TDengine can be installed with:

```cmd
nmake install
```

## On macOS
</details>

After the build is done, install TDengine:

# 7. Quick Run

## 7.1 Run on Linux

<details>

<summary>Detailed steps to run on Linux</summary>

After TDengine is installed on Linux, run the following command in a terminal to start the service:

```bash
sudo make install
sudo systemctl start taosd
```

To learn more about the directories and files installed on your system, see [Directory Structure](https://docs.taosdata.com/reference/directory/).

Installing from source also configures service management for TDengine. Users can also choose to [install from a package](https://docs.taosdata.com/get-started/package/).

After installation succeeds, you can start the service by double-clicking the TDengine icon in Applications, or start it from the terminal:

```bash
sudo launchctl start com.tdengine.taosd
```

You can use the TDengine CLI to connect to the TDengine service. In a terminal, type:
Then connect to the TDengine service with the TDengine CLI using the following command:

```bash
taos
```

If the TDengine CLI connects to the service successfully, it prints a welcome message and version information; otherwise, it prints an error message.
If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, a connection error message is shown.

## Quick Run

If you don't want to run TDengine as a service, you can run it directly in a terminal. That is, after the build completes, run the following command (on Windows the generated executable has a .exe suffix, e.g. it is named taosd.exe):
If you don't want to run TDengine as a service, you can run it in the current terminal. For example, to quickly start a TDengine server after building, run the following command in a terminal (Linux is used as the example; on Windows the command is `taosd.exe`):

```bash
./build/bin/taosd -c test/cfg
```

In another terminal, connect to the server with the TDengine CLI:
In another terminal, connect to the server with the TDengine CLI:

```bash
./build/bin/taos -c test/cfg
```

The "-c test/cfg" option specifies the directory of the system configuration file.
The `-c test/cfg` option specifies the directory of the system configuration file.

# Experience TDengine
</details>

In the TDengine CLI, you can use SQL commands to create/drop databases and tables, and run insert and query operations.
## 7.2 Run on macOS

```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
          ts          |   speed   |
===================================
 19-07-15 00:00:00.000|        10 |
 19-07-15 01:00:00.000|        20 |
Query OK, 2 row(s) in set (0.001700s)
```

<details>

<summary>Detailed steps to run on macOS</summary>

After installation completes on macOS, start the service by double-clicking /Applications/TDengine, or by running the following command in a terminal:

```bash
sudo launchctl start com.tdengine.taosd
```

# Application Development
Then connect to the TDengine server with the TDengine CLI using the following command in a terminal:

## Official Connectors
```bash
taos
```

TDengine provides a rich set of application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to help users develop applications quickly:
If the TDengine CLI connects to the server successfully, it prints a welcome message and version information; otherwise, an error message is shown.

- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
- [Python](https://docs.taosdata.com/reference/connector/python/)
- [Go](https://docs.taosdata.com/reference/connector/go/)
- [Node.js](https://docs.taosdata.com/reference/connector/node/)
- [Rust](https://docs.taosdata.com/reference/connector/rust/)
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/reference/connector/rest-api/)
</details>

# Become a Community Contributor

## 7.3 Run on Windows

<details>

<summary>Detailed steps to run on Windows</summary>

You can start the TDengine server on Windows with the following command:

```cmd
.\build\bin\taosd.exe -c test\cfg
```

In another terminal, connect to the server with the TDengine CLI:

```cmd
.\build\bin\taos.exe -c test\cfg
```

The `-c test/cfg` option specifies the directory of the system configuration file.

</details>

# 8. Testing

For how to run the different kinds of tests on TDengine, please refer to [TDengine Testing](./tests/README-CN.md).

# 9. Releases

For the complete list of TDengine releases, please see the [release list](https://github.com/taosdata/TDengine/releases).

# 10. Workflows

The TDengine build-check workflow can be found at [GitHub Actions](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows are being created and will be available soon.

# 11. Coverage

The latest TDengine test coverage report is available at [coveralls.io](https://coveralls.io/github/taosdata/TDengine).

<details>

<summary>How to run the test coverage report locally?</summary>

To generate a test coverage report locally (in HTML format), run the following commands:

```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# on main branch and run cases in longtimeruning_cases.task
# for more information about options please refer to ./run_local_coverage.sh -h
```
> **Note:**
> The -b and -i options will rebuild TDengine with the -DCOVER=true option, which may take some time.

</details>

# 12. Become a Community Contributor

Click [here](https://www.taosdata.com/contributor) to learn how to become a TDengine contributor.

# Join the Technical Discussion Group

TDengine's official community, the "IoT Big Data Group", is open to all; you are welcome to join the discussion. Search for the WeChat ID "tdengine" and add Little T as a friend to join the group.

README.md

@@ -10,10 +10,10 @@

[](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml)
[](https://coveralls.io/github/taosdata/TDengine?branch=3.0)
[](https://github.com/feici02/TDengine/commits/main/)
<br />
[](https://github.com/taosdata/TDengine/releases)
[](https://github.com/taosdata/TDengine/blob/main/LICENSE)
[](https://bestpractices.coreinfrastructure.org/projects/4201)
<br />
[](https://twitter.com/tdenginedb)

@@ -22,7 +22,7 @@

[](https://www.linkedin.com/company/tdengine)
[](https://stackoverflow.com/questions/tagged/tdengine)

English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine.com) | [Learn more about TSDB](https://tdengine.com/tsdb/)
English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine.com) | [Learn more about TSDB](https://tdengine.com/time-series-database/)

# Table of Contents

@@ -74,8 +74,16 @@ For a full list of TDengine competitive advantages, please [check here](https://

For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))

You can choose to install TDengine via [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/), [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment) or try [fully managed service](https://cloud.tdengine.com/) without installation. This quick guide is for developers who want to contribute, build, release and test TDengine by themselves.

For contributing/building/testing TDengine Connectors, please check the following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).

# 3. Prerequisites

At the moment, TDengine server supports running on Linux/Windows/MacOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support build with cross-compiling environment.

If you want to compile taosAdapter or taosKeeper, you need to install Go 1.18 or above.

## 3.1 On Linux

<details>

@@ -85,7 +93,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum

### For Ubuntu 18.04、20.04、22.04

```bash
sudo apt-get udpate
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
  libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```

@@ -127,10 +135,6 @@ Work in Progress.

## 3.4 Clone the repo

<details>

<summary>Clone the repo</summary>

Clone the repository to the target machine:

```bash

@@ -138,21 +142,13 @@ git clone https://github.com/taosdata/TDengine.git

cd TDengine
```

> **NOTE:**
> TDengine Connectors can be found in following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).

</details>

# 4. Building

At the moment, TDengine server supports running on Linux/Windows/MacOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support build with cross-compiling environment.

You can choose to install through source code, [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/) or [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment). This quick guide only applies to install from source.

TDengine provides a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to make them be compiled with TDengine.

To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory.
TDengine requires [GCC](https://gcc.gnu.org/) 9.3.1 or higher and [CMake](https://cmake.org/) 3.13.0 or higher for building.

## 4.1 Build on Linux

@@ -174,6 +170,10 @@ cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true

make
```

If you want to compile taosAdapter, you need to add the `-DBUILD_HTTP=false` option.

If you want to compile taosKeeper, you need to add the `-DBUILD_KEEPER=true` option.

You can use Jemalloc as memory allocator instead of glibc:

```bash

@@ -202,6 +202,10 @@ mkdir debug && cd debug

cmake .. && cmake --build .
```

If you want to compile taosAdapter, you need to add the `-DBUILD_HTTP=false` option.

If you want to compile taosKeeper, you need to add the `-DBUILD_KEEPER=true` option.

</details>

## 4.3 Build on Windows

@@ -117,7 +117,7 @@ ELSE()

ENDIF()

# force set all platform to JEMALLOC_ENABLED = false
SET(JEMALLOC_ENABLED OFF)
# SET(JEMALLOC_ENABLED OFF)

IF(TD_WINDOWS)
    MESSAGE("${Yellow} set compiler flag for Windows! ${ColourReset}")

@@ -258,3 +258,14 @@ ELSE()

        SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-reserved-user-defined-literal -g3 -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
    ENDIF()
ENDIF()

IF(TD_LINUX)
    IF(${JEMALLOC_ENABLED})
        MESSAGE(STATUS "JEMALLOC_ENABLED Enabled")
        SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Wno-error=attributes")
        SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=attributes")
    ELSE()
        MESSAGE(STATUS "JEMALLOC_ENABLED Disabled")
    ENDIF()
ENDIF()

@@ -166,6 +166,10 @@ IF(${BUILD_WITH_ANALYSIS})

    set(BUILD_WITH_S3 ON)
ENDIF()

IF(${TD_LINUX})
    set(BUILD_WITH_ANALYSIS ON)
ENDIF()

IF(${BUILD_S3})

IF(${BUILD_WITH_S3})

@@ -205,13 +209,6 @@ option(

    off
)

option(
    BUILD_WITH_NURAFT
    "If build with NuRaft"
    OFF
)

option(
    BUILD_WITH_UV
    "If build with libuv"

@@ -15,6 +15,18 @@ IF (TD_PRODUCT_NAME)

    ADD_DEFINITIONS(-DTD_PRODUCT_NAME="${TD_PRODUCT_NAME}")
ENDIF ()

IF (CUS_NAME)
    ADD_DEFINITIONS(-DCUS_NAME="${CUS_NAME}")
ENDIF ()

IF (CUS_PROMPT)
    ADD_DEFINITIONS(-DCUS_PROMPT="${CUS_PROMPT}")
ENDIF ()

IF (CUS_EMAIL)
    ADD_DEFINITIONS(-DCUS_EMAIL="${CUS_EMAIL}")
ENDIF ()

find_program(HAVE_GIT NAMES git)

IF (DEFINED GITINFO)

@@ -12,7 +12,7 @@ ExternalProject_Add(curl2

    BUILD_IN_SOURCE TRUE
    BUILD_ALWAYS 1
    UPDATE_COMMAND ""
    CONFIGURE_COMMAND ${CONTRIB_CONFIG_ENV} ./configure --prefix=$ENV{HOME}/.cos-local.2 --with-ssl=$ENV{HOME}/.cos-local.2 --enable-websockets --enable-shared=no --disable-ldap --disable-ldaps --without-brotli --without-zstd --without-libidn2 --without-nghttp2 --without-libpsl #--enable-debug
    CONFIGURE_COMMAND ${CONTRIB_CONFIG_ENV} ./configure --prefix=$ENV{HOME}/.cos-local.2 --with-ssl=$ENV{HOME}/.cos-local.2 --enable-websockets --enable-shared=no --disable-ldap --disable-ldaps --without-brotli --without-zstd --without-libidn2 --without-nghttp2 --without-libpsl --without-librtmp #--enable-debug
    BUILD_COMMAND make -j
    INSTALL_COMMAND make install
    TEST_COMMAND ""

@@ -1,13 +0,0 @@

# taos-tools
ExternalProject_Add(taos-tools
    GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
    GIT_TAG 3.0
    SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
    BINARY_DIR ""
    #BUILD_IN_SOURCE TRUE
    CONFIGURE_COMMAND ""
    BUILD_COMMAND ""
    INSTALL_COMMAND ""
    TEST_COMMAND ""
)

@@ -42,11 +42,6 @@ endif()

set(CONTRIB_TMP_FILE "${CMAKE_BINARY_DIR}/deps_tmp_CMakeLists.txt.in")
configure_file("${TD_SUPPORT_DIR}/deps_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})

# taos-tools
if(${BUILD_TOOLS})
    cat("${TD_SUPPORT_DIR}/taostools_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif()

# taosws-rs
if(${WEBSOCKET})
    cat("${TD_SUPPORT_DIR}/taosws_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})

@@ -210,9 +205,18 @@ ENDIF()

# download dependencies
configure_file(${CONTRIB_TMP_FILE} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt")
execute_process(COMMAND "${CMAKE_COMMAND}" -G "${CMAKE_GENERATOR}" .
    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download")
    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download"
    RESULT_VARIABLE result)
IF(NOT result EQUAL "0")
    message(FATAL_ERROR "CMake step for downloading dependencies failed: ${result}")
ENDIF()

execute_process(COMMAND "${CMAKE_COMMAND}" --build .
    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download")
    WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download"
    RESULT_VARIABLE result)
IF(NOT result EQUAL "0")
    message(FATAL_ERROR "CMake step for building dependencies failed: ${result}")
ENDIF()

# ================================================================================================
# Build

@@ -20,14 +20,6 @@ if(${BUILD_WITH_SQLITE})

    add_subdirectory(sqlite)
endif(${BUILD_WITH_SQLITE})

if(${BUILD_WITH_CRAFT})
    add_subdirectory(craft)
endif(${BUILD_WITH_CRAFT})

if(${BUILD_WITH_TRAFT})
    # add_subdirectory(traft)
endif(${BUILD_WITH_TRAFT})

if(${BUILD_S3})
    add_subdirectory(azure)
endif()

@@ -26,7 +26,7 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name

  SUBTABLE(expression) AS subquery

stream_options: {
    TRIGGER    [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
    TRIGGER    [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
    WATERMARK  time
    IGNORE EXPIRED [0|1]
    DELETE_MARK time

@@ -56,13 +56,17 @@ window_clause: {

}
```

The subquery supports session windows, state windows, and sliding windows. When used with supertables, session windows and state windows must be used together with `partition by tbname`.
The subquery supports session windows, state windows, time windows, event windows, and count windows. When used with supertables, state windows, event windows, and count windows must be used together with `partition by tbname`.

1. SESSION is a session window, where tol_val is the maximum range of the time interval. All data within the tol_val time interval belong to the same window. If the time interval between two consecutive data points exceeds tol_val, the next window automatically starts.

2. EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and can include different columns.
2. STATE_WINDOW is a state window. The col is used to identify the state value. Values with the same state value belong to the same state window. When the value of col changes, the current window ends and the next window is automatically opened.

3. COUNT_WINDOW is a counting window, divided by a fixed number of data rows. count_val is a constant, a positive integer, and must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows in each COUNT_WINDOW. If the total number of data rows cannot be evenly divided by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.
3. INTERVAL is a time window, which can be further divided into sliding time windows and tumbling time windows. The INTERVAL clause is used to specify the equal time period of the window, and the SLIDING clause is used to specify the time by which the window slides forward. When the value of interval_val is equal to the value of sliding_val, the time window is a tumbling time window; otherwise, it is a sliding time window. Note: the value of sliding_val must be less than or equal to the value of interval_val.

4. EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expressions supported by TDengine and can include different columns.

5. COUNT_WINDOW is a counting window, divided by a fixed number of data rows. count_val is a constant, a positive integer, and must be at least 2 and less than 2147483648. count_val represents the maximum number of data rows in each COUNT_WINDOW. If the total number of data rows cannot be evenly divided by count_val, the last window will have fewer rows than count_val. sliding_val is a constant, representing the number of rows the window slides, similar to the SLIDING in INTERVAL.

The definition of a window is exactly the same as in the time-series data window query; for details, refer to the TDengine window functions section. A sketch of the count window syntax follows.
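
As an illustrative sketch of the count window syntax described above (the stream and target table names here are hypothetical, not taken from this page; `meters` and `voltage` follow the examples used elsewhere in these docs), a stream that aggregates every 10 rows per child table of a supertable could look like this:

```sql
CREATE STREAM IF NOT EXISTS avg_vol_stream
  TRIGGER AT_ONCE
  INTO avg_vol AS
    SELECT _wstart, count(*) AS cnt, avg(voltage) AS avg_voltage
    FROM meters
    PARTITION BY tbname
    COUNT_WINDOW(10);
```

Per the rules above, `COUNT_WINDOW(10)` closes a window after at most 10 rows, and `PARTITION BY tbname` is required here because the source is a supertable.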

@@ -304,7 +304,7 @@ Not supported

## Querying the Written Data

By running the example code from the previous section, tables will be automatically created in the power database. We can query the data using taos shell or an application. Below is an example of querying the data from the supertable and meters table using taos shell.
By running the example code from the previous section, tables will be automatically created in the power database. We can query the data using TDengine CLI or an application. Below is an example of querying the data from the supertable and meters table using TDengine CLI.

```shell
taos> show power.stables;

@@ -10,7 +10,7 @@ TDengine provides data subscription and consumption interfaces similar to those

## Creating Topics

Please use taos shell or refer to the [Execute SQL](../running-sql-statements/) section to execute the SQL for creating topics: `CREATE TOPIC IF NOT EXISTS topic_meters AS SELECT ts, current, voltage, phase, groupid, location FROM meters`
Please use TDengine CLI or refer to the [Execute SQL](../running-sql-statements/) section to execute the SQL for creating topics: `CREATE TOPIC IF NOT EXISTS topic_meters AS SELECT ts, current, voltage, phase, groupid, location FROM meters`

The above SQL will create a subscription named topic_meters. Each record in the messages obtained using this subscription is composed of the columns selected by this query statement `SELECT ts, current, voltage, phase, groupid, location FROM meters`.

@@ -31,12 +31,12 @@ There are many parameters for creating consumers, which flexibly support various

| Parameter Name            | Type    | Description | Remarks |
| :-----------------------: | :-----: | ----------- | ------- |
| `td.connect.ip`           | string  | Server IP address | |
| `td.connect.ip`           | string  | FQDN of Server | IP or host name |
| `td.connect.user`         | string  | Username | |
| `td.connect.pass`         | string  | Password | |
| `td.connect.port`         | integer | Server port number | |
| `group.id`                | string  | Consumer group ID; the same consumer group shares consumption progress | <br />**Required**. Maximum length: 192.<br />Each topic can have up to 100 consumer groups |
| `client.id`               | string  | Client ID | Maximum length: 192 |
| `group.id`                | string  | Consumer group ID; the same consumer group shares consumption progress | <br />**Required**. Maximum length: 192, excess length will be cut off.<br />Each topic can have up to 100 consumer groups |
| `client.id`               | string  | Client ID | Maximum length: 255, excess length will be cut off. |
| `auto.offset.reset`       | enum    | Initial position of the consumer group subscription | <br />`earliest`: default (version < 3.2.0.0); subscribe from the beginning; <br/>`latest`: default (version >= 3.2.0.0); only subscribe from the latest data; <br/>`none`: cannot subscribe without a committed offset |
| `enable.auto.commit`      | boolean | Whether to enable automatic consumption point submission. true: automatic submission, the client application does not need to commit; false: the client application needs to commit manually | Default is true |
| `auto.commit.interval.ms` | integer | Time interval for automatically submitting consumption records, in milliseconds | Default is 5000 |

@@ -44,6 +44,9 @@ There are many parameters for creating consumers, which flexibly support various

| `enable.replay`           | boolean | Whether to enable the data replay function | Default is off |
| `session.timeout.ms`      | integer | Timeout after the consumer heartbeat is lost, after which rebalance logic is triggered, and upon success, that consumer will be removed (supported from version 3.3.3.0) | Default is 12000, range [6000, 1800000] |
| `max.poll.interval.ms`    | integer | The longest time interval for consumer poll data fetching; exceeding this time will be considered as the consumer being offline, triggering rebalance logic, and upon success, that consumer will be removed (supported from version 3.3.3.0) | Default is 300000, range [1000, INT32_MAX] |
| `fetch.max.wait.ms`       | integer | The maximum time it takes for the server to return data once (supported from version 3.3.6.0) | Default is 1000, range [1, INT32_MAX] |
| `min.poll.rows`           | integer | The minimum number of rows returned by the server once (supported from version 3.3.6.0) | Default is 4096, range [1, INT32_MAX] |
| `msg.consume.rawdata`     | integer | When consuming data, the data type pulled is binary and cannot be parsed; it is an internal parameter and is only used for taosX data migration (supported from version 3.3.6.0) | The default value 0 indicates that it is not effective; non-zero indicates that it is effective |

Below are the connection parameters for connectors in various languages:
<Tabs defaultValue="java" groupId="lang">

@@ -87,7 +87,7 @@ Additionally, to accelerate the data processing process, TDengine specifically s

```text
Users can use the performance testing tool taosBenchmark to assess the data compression effect of TDengine. By using the -f option to specify the write configuration file, taosBenchmark can write a specified number of CSV sample data into the specified database parameters and table structure.
After completing the data writing, users can execute the flush database command in the taos shell to force all data to be written to the disk. Then, use the du command of the Linux operating system to get the size of the data folder of the specified vnode. Finally, divide the original data size by the actual storage data size to calculate the real compression ratio.
After completing the data writing, users can execute the flush database command in the TDengine CLI to force all data to be written to the disk. Then, use the du command of the Linux operating system to get the size of the data folder of the specified vnode. Finally, divide the original data size by the actual storage data size to calculate the real compression ratio.
```

The following command can be used to obtain the storage space occupied by TDengine.

@@ -1,882 +0,0 @@

---
title: Deploying Your Cluster
slug: /operations-and-maintenance/deploy-your-cluster
---

Since TDengine was designed with a distributed architecture from the beginning, it has powerful horizontal scaling capabilities to meet the growing data processing needs. Therefore, TDengine supports clustering and has open-sourced this core functionality. Users can choose from four deployment methods according to their actual environment and needs: manual deployment, Docker deployment, Kubernetes deployment, and Helm deployment.

## Manual Deployment

### Deploying taosd

taosd is the most important service component in the TDengine cluster. This section describes the steps to manually deploy a taosd cluster.

#### 1. Clear Data

If the physical nodes for setting up the cluster contain previous test data or have had other versions of TDengine installed (such as 1.x/2.x), please delete them and clear all data first.

#### 2. Check Environment

Before deploying the TDengine cluster, it is crucial to thoroughly check the network settings of all dnodes and the physical nodes where the applications are located. Here are the steps to check:

- Step 1: Execute the `hostname -f` command on each physical node to view and confirm that all node hostnames are unique. This step can be omitted for nodes where application drivers are located.
- Step 2: Execute the `ping host` command on each physical node, where host is the hostname of other physical nodes. This step aims to detect the network connectivity between the current node and other physical nodes. If you cannot ping through, immediately check the network and DNS settings. For Linux operating systems, check the `/etc/hosts` file; for Windows operating systems, check the `C:\Windows\system32\drivers\etc\hosts` file. Network issues will prevent the formation of a cluster, so be sure to resolve this issue.
- Step 3: Repeat the above network detection steps on the physical nodes where the application is running. If the network is found to be problematic, the application will not be able to connect to the taosd service. At this point, carefully check the DNS settings or hosts file of the physical node where the application is located to ensure it is configured correctly.
- Step 4: Check ports to ensure that all hosts in the cluster can communicate over TCP on port 6030.

By following these steps, you can ensure that all nodes communicate smoothly at the network level, laying a solid foundation for the successful deployment of the TDengine cluster.
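
A minimal sketch of these checks on one node follows; `h2.tdengine.com` stands in for any other node's hostname, and the TCP probe tool is an assumption (any port checker works):

```shell
hostname -f                  # Step 1: this FQDN must be unique within the cluster
ping -c 3 h2.tdengine.com    # Step 2: every other physical node must be reachable
nc -z h2.tdengine.com 6030   # Step 4: TCP port 6030 must be open between hosts
```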

#### 3. Installation

To ensure consistency and stability within the cluster, install the same version of TDengine on all physical nodes.

#### 4. Modify Configuration

Modify the configuration file of TDengine (the configuration files of all nodes need to be modified). Assuming the endpoint of the first dnode to be started is `h1.tdengine.com:6030`, the cluster-related parameters are as follows.

```shell
# firstEp is the first dnode that each dnode connects to after the initial startup
firstEp h1.tdengine.com:6030
# Must be configured to the FQDN of this dnode; if there is only one hostname on this machine, you can comment out or delete the following line
fqdn h1.tdengine.com
# Configure the port of this dnode, default is 6030
serverPort 6030
```

The parameters that must be modified are firstEp and fqdn. For each dnode, the firstEp configuration should remain consistent, but fqdn must be set to the value of the dnode it is located on. Other parameters do not need to be modified unless you are clear on why they should be changed.

For dnodes wishing to join the cluster, it is essential to ensure that the parameters related to the TDengine cluster listed in the table below are set identically. Any mismatch in parameters may prevent the dnode from successfully joining the cluster.

| Parameter Name   | Meaning                                                |
|:----------------:|:------------------------------------------------------:|
| statusInterval   | Interval at which dnode reports status to mnode        |
| timezone         | Time zone                                              |
| locale           | System locale information and encoding format          |
| charset          | Character set encoding                                 |
| ttlChangeOnWrite | Whether ttl expiration changes with table modification |

#### 5. Start

Start the first dnode, such as `h1.tdengine.com`, following the steps mentioned above. Then run taos in the terminal to start TDengine's CLI program, and execute the `show dnodes` command within it to view all dnode information in the current cluster.

```shell
taos> show dnodes;
 id | endpoint             | vnodes | support_vnodes | status | create_time             | note |
===================================================================================
  1 | h1.tdengine.com:6030 |      0 |           1024 |  ready | 2022-07-16 10:50:42.673 |      |
```

You can see that the endpoint of the dnode node that has just started is `h1.tdengine.com:6030`. This address is the first Ep of the new cluster.

#### 6. Adding dnode

Following the steps mentioned earlier, start taosd on each physical node. Each dnode needs to configure the firstEp parameter in the taos.cfg file to the endpoint of the first node of the new cluster, which in this case is `h1.tdengine.com:6030`. On the machine where the first dnode is located, run taos in the terminal, open TDengine's CLI program, log into the TDengine cluster, and execute the following SQL.

```shell
create dnode "h2.tdengine.com:6030"
```

This adds the new dnode's endpoint to the cluster's endpoint list. You need to put `fqdn:port` in double quotes, otherwise it will cause an error when running. Please note to replace the example h2.tdengine.com:6030 with the endpoint of this new dnode. Then execute the following SQL to see if the new node has successfully joined. If the dnode you want to join is currently offline, please refer to the "Common Issues" section later in this chapter for a solution.

```shell
show dnodes;
```

In the output, please confirm that the fqdn and port of the listed dnode are consistent with the endpoint you just tried to add. If they are not consistent, correct them to the right endpoint. By following the steps above, you can continuously add new dnodes to the cluster one by one, thereby expanding the scale of the cluster and improving overall performance. Make sure to follow the correct process when adding new nodes, which helps maintain the stability and reliability of the cluster.

**Tips**

- Any dnode that has joined the cluster can serve as the firstEp for subsequent nodes to be added. The firstEp parameter only functions when that dnode first joins the cluster. After joining, the dnode will save the latest mnode's endpoint list, and subsequently, it no longer depends on this parameter. The firstEp parameter in the configuration file is mainly used for client connections; if no parameters are set for TDengine's CLI, it will default to connecting to the node specified by firstEp.
- Two dnodes that have not configured the firstEp parameter will run independently after starting. At this time, it is not possible to join one dnode to another to form a cluster.
- TDengine does not allow merging two independent clusters into a new cluster.

#### 7. Adding mnode

When creating a TDengine cluster, the first dnode automatically becomes the mnode of the cluster, responsible for managing and coordinating the cluster. To achieve high availability of mnode, subsequent dnodes need to create mnodes manually. Please note that a cluster can create up to 3 mnodes, and only one mnode can be created on each dnode. When the number of dnodes in the cluster reaches or exceeds 3, you can create mnodes for the existing cluster. In the first dnode, first log into TDengine through the CLI program taos, then execute the following SQL.

```shell
create mnode on dnode <dnodeId>
```

Please note to replace the dnodeId in the example above with the serial number of the newly created dnode (which can be obtained by executing the `show dnodes` command). Finally, execute the following `show mnodes` command to see if the newly created mnode has successfully joined the cluster.
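
```shell
show mnodes;
```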

**Tips**

During the process of setting up a TDengine cluster, if a new node always shows as offline after executing the create dnode command to add a new node, please follow these steps for troubleshooting.

- Step 1: Check whether the taosd service on the new node has started normally. You can confirm this by checking the log files or using the ps command.
- Step 2: If the taosd service has started, next check whether the new node's network connection is smooth and confirm whether the firewall has been turned off. Network issues or firewall settings may prevent the node from communicating with other nodes in the cluster.
- Step 3: Use the taos -h fqdn command to try to connect to the new node, then execute the show dnodes command (a sketch follows this list). This will display the running status of the new node as an independent cluster. If the displayed list is inconsistent with that shown on the main node, it indicates that the new node may have formed a single-node cluster on its own. To resolve this issue, follow these steps. First, stop the taosd service on the new node. Second, clear all files in the dataDir directory specified in the taos.cfg configuration file on the new node. This will delete all data and configuration information related to that node. Finally, restart the taosd service on the new node. This will reset the new node to its initial state, ready to rejoin the main cluster.
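
A minimal sketch of the connectivity check in Step 3 (`h2.tdengine.com` is a placeholder for the new node's FQDN):

```shell
taos -h h2.tdengine.com

taos> show dnodes;
```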

### Deploying taosAdapter

This section discusses how to deploy taosAdapter, which provides RESTful and WebSocket access capabilities for the TDengine cluster, thus playing a very important role in the cluster.

1. Installation

   After the installation of TDengine Enterprise is complete, taosAdapter can be used. If you want to deploy taosAdapter on different servers, TDengine Enterprise needs to be installed on these servers.

2. Single Instance Deployment

   Deploying a single instance of taosAdapter is very simple. For specific commands and configuration parameters, please refer to the taosAdapter section in the manual.

3. Multiple Instances Deployment

   The main purposes of deploying multiple instances of taosAdapter are as follows:

   - To increase the throughput of the cluster and prevent taosAdapter from becoming a system bottleneck.
   - To enhance the robustness and high availability of the cluster, allowing requests entering the business system to be automatically routed to other instances when one instance fails.

When deploying multiple instances of taosAdapter, it is necessary to address load balancing issues to avoid overloading some nodes while others remain idle. During the deployment process, multiple single instances need to be deployed separately, and the deployment steps for each instance are exactly the same as those for deploying a single instance. The next critical part is configuring Nginx. Below is a verified best practice configuration; you only need to replace the endpoints with the correct addresses in your actual environment. For the meanings of each parameter, please refer to the official Nginx documentation.

```json
user root;
worker_processes auto;
error_log /var/log/nginx_error.log;

events {
    use epoll;
    worker_connections 1024;
}

http {

    access_log off;

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 6041;
        location ~* {
            proxy_pass http://dbserver;
            proxy_read_timeout 600s;
            proxy_send_timeout 600s;
            proxy_connect_timeout 600s;
            proxy_next_upstream error http_502 non_idempotent;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;
        }
    }
    server {
        listen 6043;
        location ~* {
            proxy_pass http://keeper;
            proxy_read_timeout 60s;
            proxy_next_upstream error http_502 http_500 non_idempotent;
        }
    }

    server {
        listen 6060;
        location ~* {
            proxy_pass http://explorer;
            proxy_read_timeout 60s;
            proxy_next_upstream error http_502 http_500 non_idempotent;
        }
    }
    upstream dbserver {
        least_conn;
        server 172.16.214.201:6041 max_fails=0;
        server 172.16.214.202:6041 max_fails=0;
        server 172.16.214.203:6041 max_fails=0;
    }
    upstream keeper {
        ip_hash;
        server 172.16.214.201:6043;
        server 172.16.214.202:6043;
        server 172.16.214.203:6043;
    }
    upstream explorer {
        ip_hash;
        server 172.16.214.201:6060;
        server 172.16.214.202:6060;
        server 172.16.214.203:6060;
    }
}
```
### Deploying taosKeeper
|
||||
|
||||
To use the monitoring capabilities of TDengine, taosKeeper is an essential component. For monitoring, please refer to [TDinsight](../../tdengine-reference/components/tdinsight), and for details on deploying taosKeeper, please refer to the [taosKeeper Reference Manual](../../tdengine-reference/components/taoskeeper).
|
||||
|
||||
### Deploying taosX
|
||||
|
||||
To utilize the data ingestion capabilities of TDengine, it is necessary to deploy the taosX service. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
|
||||
|
||||
### Deploying taosX-Agent
|
||||
|
||||
For some data sources such as Pi, OPC, etc., due to network conditions and data source access restrictions, taosX cannot directly access the data sources. In such cases, a proxy service, taosX-Agent, needs to be deployed. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
|
||||
|
||||
### Deploying taos-Explorer
|
||||
|
||||
TDengine provides the capability to visually manage TDengine clusters. To use the graphical interface, the taos-Explorer service needs to be deployed. For detailed explanations and deployment, please refer to the [taos-Explorer Reference Manual](../../tdengine-reference/components/taosexplorer/)
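
taos-Explorer serves its web UI on port 6060 by default (as reflected in the Nginx upstream above). A minimal sketch, assuming a systemd-managed installation with a unit named taos-explorer (verify against your installation):

```shell
# Start taos-Explorer, then open http://<server-ip>:6060 in a browser
sudo systemctl start taos-explorer
```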
|
||||
|
||||
## Docker Deployment
|
||||
|
||||
This section will explain how to start TDengine services in Docker containers and access them. You can use environment variables in the docker run command line or docker-compose file to control the behavior of services in the container.
|
||||
|
||||
### Starting TDengine
|
||||
|
||||
The TDengine image is launched with HTTP service activated by default. Use the following command to create a containerized TDengine environment with HTTP service.
|
||||
|
||||
```shell
|
||||
docker run -d --name tdengine \
|
||||
-v ~/data/taos/dnode/data:/var/lib/taos \
|
||||
-v ~/data/taos/dnode/log:/var/log/taos \
|
||||
-p 6041:6041 tdengine/tdengine
|
||||
```
|
||||
|
||||
Detailed parameter explanations are as follows:
|
||||
|
||||
- /var/lib/taos: Default data file directory for TDengine, can be modified through the configuration file.
|
||||
- /var/log/taos: Default log file directory for TDengine, can be modified through the configuration file.
|
||||
|
||||
The above command starts a container named tdengine and maps the HTTP service's port 6041 to the host port 6041. The following command can verify if the HTTP service in the container is available.
|
||||
|
||||
```shell
|
||||
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
|
||||
```
|
||||
|
||||
Run the following command to access TDengine within the container.
|
||||
|
||||
```shell
|
||||
$ docker exec -it tdengine taos
|
||||
|
||||
taos> show databases;
|
||||
name |
|
||||
=================================
|
||||
information_schema |
|
||||
performance_schema |
|
||||
Query OK, 2 rows in database (0.033802s)
|
||||
```
|
||||
|
||||
Within the container, TDengine CLI or various connectors (such as JDBC-JNI) connect to the server via the container's hostname. Accessing TDengine inside the container from outside is more complex, and using RESTful/WebSocket connection methods is the simplest approach.
|
||||
|
||||
### Starting TDengine in host network mode
|
||||
|
||||
Run the following command to start TDengine in host network mode, which allows using the host's FQDN to establish connections, rather than using the container's hostname.
|
||||
|
||||
```shell
|
||||
docker run -d --name tdengine --network host tdengine/tdengine
|
||||
```
|
||||
|
||||
This method is similar to starting TDengine on the host using the systemctl command. If the TDengine client is already installed on the host, you can directly use the following command to access the TDengine service.
|
||||
|
||||
```shell
|
||||
$ taos
|
||||
|
||||
taos> show dnodes;
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
=================================================================================================================================================
|
||||
1 | vm98:6030 | 0 | 32 | ready | 2022-08-19 14:50:05.337 | |
|
||||
Query OK, 1 rows in database (0.010654s)
|
||||
```
|
||||
|
||||
### Start TDengine with a specified hostname and port
|
||||
|
||||
Use the following command to establish a connection on a specified hostname using the TAOS_FQDN environment variable or the fqdn configuration item in taos.cfg. This method provides greater flexibility for deploying TDengine.
|
||||
|
||||
```shell
|
||||
docker run -d \
|
||||
--name tdengine \
|
||||
-e TAOS_FQDN=tdengine \
|
||||
-p 6030:6030 \
|
||||
-p 6041-6049:6041-6049 \
|
||||
-p 6041-6049:6041-6049/udp \
|
||||
tdengine/tdengine
|
||||
```
|
||||
|
||||
First, the above command starts a TDengine service in the container, listening on the hostname tdengine, and maps the container's port 6030 to the host's port 6030, and the container's port range [6041, 6049] to the host's port range [6041, 6049]. If the port range on the host is already in use, you can modify the command to specify a free port range on the host.
|
||||
|
||||
Secondly, ensure that the hostname tdengine is resolvable in /etc/hosts. Use the following command to save the correct configuration information to the hosts file.
|
||||
|
||||
```shell
|
||||
echo 127.0.0.1 tdengine | sudo tee -a /etc/hosts
|
||||
```
|
||||
|
||||
Finally, you can access the TDengine service using the TDengine CLI with tdengine as the server address, as follows.
|
||||
|
||||
```shell
|
||||
taos -h tdengine -P 6030
|
||||
```
|
||||
|
||||
If TAOS_FQDN is set to the host's own hostname, the effect is the same as starting TDengine in host network mode.
|
||||
|
||||
## Kubernetes Deployment
|
||||
|
||||
As a time-series database designed for cloud-native architectures, TDengine inherently supports Kubernetes deployment. This section introduces, step by step, how to create a highly available TDengine cluster for production use with YAML files, with a focus on common operations of TDengine in a Kubernetes environment. It requires readers to have a certain understanding of Kubernetes, be proficient in running common kubectl commands, and understand concepts such as statefulset, service, and pvc. Readers unfamiliar with these concepts can refer to the Kubernetes official website.
|
||||
To achieve high availability, the cluster must meet the following requirements:
|
||||
|
||||
- 3 or more dnodes: Multiple vnodes in the same vgroup of TDengine should not be distributed on the same dnode, so if creating a database with 3 replicas, the number of dnodes should be 3 or more.
|
||||
- 3 mnodes: mnodes are responsible for managing the entire cluster, with TDengine defaulting to one mnode. If the dnode hosting this mnode goes offline, the entire cluster becomes unavailable.
|
||||
- 3 replicas of the database: TDengine's replica configuration is at the database level, so 3 replicas can ensure that the cluster remains operational even if any one of the 3 dnodes goes offline. If 2 dnodes go offline, the cluster becomes unavailable because RAFT cannot complete the election. (Enterprise edition: In disaster recovery scenarios, if the data files of any node are damaged, recovery can be achieved by restarting the dnode.)
|
||||
|
||||
### Prerequisites
|
||||
|
||||
To deploy and manage a TDengine cluster using Kubernetes, the following preparations need to be made.
|
||||
|
||||
- This article applies to Kubernetes v1.19 and above.
|
||||
- This article uses the kubectl tool for installation and deployment, please install the necessary software in advance.
|
||||
- Kubernetes has been installed and deployed and can normally access or update necessary container repositories or other services.
|
||||
|
||||
### Configure Service
|
||||
|
||||
Create a Service configuration file: taosd-service.yaml, the service name metadata.name (here "taosd") will be used in the next step. First, add the ports used by TDengine, then set the determined labels app (here "tdengine") in the selector.
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: "taosd"
|
||||
labels:
|
||||
app: "tdengine"
|
||||
spec:
|
||||
ports:
|
||||
- name: tcp6030
|
||||
protocol: "TCP"
|
||||
port: 6030
|
||||
- name: tcp6041
|
||||
protocol: "TCP"
|
||||
port: 6041
|
||||
selector:
|
||||
app: "tdengine"
|
||||
```
|
||||
|
||||
### Stateful Services StatefulSet
|
||||
|
||||
According to Kubernetes' descriptions of various deployment types, we will use StatefulSet as the deployment resource type for TDengine. Create the file tdengine.yaml, where the replicas field defines the number of cluster nodes as 3. The node timezone is set to China (Asia/Shanghai), and each node is allocated 5Gi of standard storage, which you can modify according to actual conditions.
|
||||
|
||||
Please pay special attention to the configuration of startupProbe. After a dnode's Pod goes offline for a period of time and then restarts, the newly online dnode will be temporarily unavailable. If the startupProbe configuration is too small, Kubernetes will consider the Pod to be in an abnormal state and attempt to restart the Pod. This dnode's Pod will frequently restart and never return to a normal state.
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: "tdengine"
|
||||
labels:
|
||||
app: "tdengine"
|
||||
spec:
|
||||
serviceName: "taosd"
|
||||
replicas: 3
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
||||
selector:
|
||||
matchLabels:
|
||||
app: "tdengine"
|
||||
template:
|
||||
metadata:
|
||||
name: "tdengine"
|
||||
labels:
|
||||
app: "tdengine"
|
||||
spec:
|
||||
affinity:
|
||||
podAntiAffinity:
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- weight: 100
|
||||
podAffinityTerm:
|
||||
labelSelector:
|
||||
matchExpressions:
|
||||
- key: app
|
||||
operator: In
|
||||
values:
|
||||
- tdengine
|
||||
topologyKey: kubernetes.io/hostname
|
||||
containers:
|
||||
- name: "tdengine"
|
||||
image: "tdengine/tdengine:3.2.3.0"
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
ports:
|
||||
- name: tcp6030
|
||||
protocol: "TCP"
|
||||
containerPort: 6030
|
||||
- name: tcp6041
|
||||
protocol: "TCP"
|
||||
containerPort: 6041
|
||||
env:
|
||||
# POD_NAME for FQDN config
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
# SERVICE_NAME and NAMESPACE for fqdn resolve
|
||||
- name: SERVICE_NAME
|
||||
value: "taosd"
|
||||
- name: STS_NAME
|
||||
value: "tdengine"
|
||||
- name: STS_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
# TZ for timezone settings, we recommend to always set it.
|
||||
- name: TZ
|
||||
value: "Asia/Shanghai"
|
||||
# Environment variables with prefix TAOS_ will be parsed and converted into corresponding parameter in taos.cfg. For example, serverPort in taos.cfg should be configured by TAOS_SERVER_PORT when using K8S to deploy
|
||||
- name: TAOS_SERVER_PORT
|
||||
value: "6030"
|
||||
# Must set if you want a cluster.
|
||||
- name: TAOS_FIRST_EP
|
||||
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
|
||||
# TAOS_FQND should always be set in k8s env.
|
||||
- name: TAOS_FQDN
|
||||
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
|
||||
volumeMounts:
|
||||
- name: taosdata
|
||||
mountPath: /var/lib/taos
|
||||
startupProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
failureThreshold: 360
|
||||
periodSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
initialDelaySeconds: 5
|
||||
timeoutSeconds: 5000
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
initialDelaySeconds: 15
|
||||
periodSeconds: 20
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: taosdata
|
||||
spec:
|
||||
accessModes:
|
||||
- "ReadWriteOnce"
|
||||
storageClassName: "standard"
|
||||
resources:
|
||||
requests:
|
||||
storage: "5Gi"
|
||||
```
|
||||
|
||||
### Deploying TDengine Cluster Using kubectl Command
|
||||
|
||||
First, create the corresponding namespace `tdengine-test`, as well as the PVC, ensuring that there is enough remaining space with `storageClassName` set to `standard`. Then execute the following commands in sequence:
|
||||
|
||||
```shell
|
||||
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
|
||||
```
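
After applying both manifests, you can watch the Pods come up before proceeding; for example:

```shell
# All three Pods should eventually report READY 1/1
kubectl get pods -n tdengine-test -l app=tdengine
```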
|
||||
|
||||
The above configuration will create a three-node TDengine cluster, with `dnode` automatically configured. You can use the `show dnodes` command to view the current cluster nodes:
|
||||
|
||||
```shell
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
|
||||
```
|
||||
|
||||
The output is as follows:
|
||||
|
||||
```shell
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
|
||||
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
|
||||
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
|
||||
Query OK, 3 row(s) in set (0.001853s)
|
||||
```
|
||||
|
||||
View the current mnode:
|
||||
|
||||
```shell
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 2023-07-19 17:54:19.520
|
||||
Query OK, 1 row(s) in set (0.001282s)
|
||||
```
|
||||
|
||||
Create mnode
|
||||
|
||||
```shell
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
|
||||
```
|
||||
|
||||
View mnode
|
||||
|
||||
```shell
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 2023-07-20 09:19:36.060
|
||||
*************************** 2.row ***************************
|
||||
id: 2
|
||||
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:05.600
|
||||
reboot_time: 2023-07-20 09:22:12.838
|
||||
*************************** 3.row ***************************
|
||||
id: 3
|
||||
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:20.042
|
||||
reboot_time: 2023-07-20 09:22:23.271
|
||||
Query OK, 3 row(s) in set (0.003108s)
|
||||
```
|
||||
|
||||
### Port Forwarding
|
||||
|
||||
Using the kubectl port-forwarding feature allows applications to access the TDengine cluster running in the Kubernetes environment.
|
||||
|
||||
```shell
|
||||
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
|
||||
```
|
||||
|
||||
Use the curl command to verify the TDengine REST API using port 6041.
|
||||
|
||||
```shell
|
||||
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
|
||||
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
|
||||
```
|
||||
|
||||
### Cluster Expansion
|
||||
|
||||
TDengine supports cluster expansion:
|
||||
|
||||
```shell
|
||||
kubectl scale statefulsets tdengine -n tdengine-test --replicas=4
|
||||
```
|
||||
|
||||
The command-line argument `--replicas=4` indicates that the TDengine cluster is to be expanded to 4 nodes. After execution, first check the status of the Pods:
|
||||
|
||||
```shell
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
```
|
||||
|
||||
Output as follows:
|
||||
|
||||
```text
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
|
||||
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
|
||||
```
|
||||
|
||||
At this point, the new Pod is still starting. The dnode status in the TDengine cluster can be seen after the Pod's status changes to Ready:
|
||||
|
||||
```shell
|
||||
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
|
||||
```
|
||||
|
||||
The dnode list of the four-node TDengine cluster after expansion:
|
||||
|
||||
```text
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
|
||||
Query OK, 4 row(s) in set (0.003628s)
|
||||
```
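
With four dnodes available, you can put the expanded capacity to use, for example by creating a database with 3 replicas (the database name test and vgroup count here are illustrative):

```shell
kubectl exec -it tdengine-0 -n tdengine-test -- \
  taos -s "create database if not exists test replica 3 vgroups 4"
```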
|
||||
|
||||
### Cleaning up the Cluster
|
||||
|
||||
**Warning**
|
||||
When deleting PVCs, pay attention to the PV's persistentVolumeReclaimPolicy. We recommend setting it to Delete, so that when a PVC is deleted, its PV is automatically removed together with the underlying CSI storage resources. If automatic PV cleanup on PVC deletion is not configured, manually deleting the PVs after deleting the PVCs may not release the corresponding CSI storage resources.
|
||||
|
||||
To completely remove the TDengine cluster, you need to clean up the statefulset, svc, pvc, and finally delete the namespace.
|
||||
|
||||
```shell
|
||||
kubectl delete statefulset -l app=tdengine -n tdengine-test
|
||||
kubectl delete svc -l app=tdengine -n tdengine-test
|
||||
kubectl delete pvc -l app=tdengine -n tdengine-test
|
||||
kubectl delete namespace tdengine-test
|
||||
```
|
||||
|
||||
### Cluster Disaster Recovery Capabilities
|
||||
|
||||
The high availability and reliability of TDengine in a Kubernetes environment, with respect to hardware failure and disaster recovery, can be discussed on two levels:
|
||||
|
||||
- The disaster recovery capabilities of the underlying distributed block storage, which includes multiple replicas of block storage. Popular distributed block storage like Ceph has multi-replica capabilities, extending storage replicas to different racks, cabinets, rooms, and data centers (or directly using block storage services provided by public cloud vendors).
|
||||
- TDengine's own disaster recovery: TDengine Enterprise inherently supports recovering the work of a dnode that has permanently gone offline (for example, due to physical disk damage and data loss) by launching a new, blank dnode.
|
||||
|
||||
## Deploying TDengine Cluster with Helm
|
||||
|
||||
Helm is the package manager for Kubernetes.
|
||||
While deploying the TDengine cluster with Kubernetes, as in the previous section, is simple enough, Helm provides even more powerful capabilities.
|
||||
|
||||
### Installing Helm
|
||||
|
||||
```shell
|
||||
curl -fsSL -o get_helm.sh \
|
||||
https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
|
||||
chmod +x get_helm.sh
|
||||
./get_helm.sh
|
||||
```
|
||||
|
||||
Helm operates Kubernetes using kubectl and kubeconfig configurations, which can be set up following the Rancher installation configuration for Kubernetes.
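
Before installing the chart, it can help to confirm that Helm and kubectl both point at the intended cluster; a quick sanity check:

```shell
helm version
kubectl config current-context
kubectl get nodes
```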
|
||||
|
||||
### Installing TDengine Chart
|
||||
|
||||
The TDengine Chart has not yet been released to the Helm repository; currently, it can be downloaded directly from GitHub:
|
||||
|
||||
```shell
|
||||
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.2.tgz
|
||||
```
|
||||
|
||||
Retrieve the current Kubernetes storage class:
|
||||
|
||||
```shell
|
||||
kubectl get storageclass
|
||||
```
|
||||
|
||||
In minikube, the default storage class is standard. Then, use the helm command to install:
|
||||
|
||||
```shell
|
||||
helm install tdengine tdengine-3.0.2.tgz \
|
||||
--set storage.className=<your storage class name> \
|
||||
--set image.tag=3.2.3.0
|
||||
|
||||
```
|
||||
|
||||
In a minikube environment, you can set a smaller capacity to avoid exceeding disk space:
|
||||
|
||||
```shell
|
||||
helm install tdengine tdengine-3.0.2.tgz \
|
||||
--set storage.className=standard \
|
||||
--set storage.dataSize=2Gi \
|
||||
--set storage.logSize=10Mi \
|
||||
--set image.tag=3.2.3.0
|
||||
```
|
||||
|
||||
After successful deployment, the TDengine Chart will output instructions for operating TDengine:
|
||||
|
||||
```shell
|
||||
export POD_NAME=$(kubectl get pods --namespace default \
|
||||
-l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \
|
||||
-o jsonpath="{.items[0].metadata.name}")
|
||||
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
|
||||
kubectl --namespace default exec -it $POD_NAME -- taos
|
||||
```
|
||||
|
||||
You can create a table for testing:
|
||||
|
||||
```shell
|
||||
kubectl --namespace default exec $POD_NAME -- \
|
||||
taos -s "create database test;
|
||||
use test;
|
||||
create table t1 (ts timestamp, n int);
|
||||
insert into t1 values(now, 1)(now + 1s, 2);
|
||||
select * from t1;"
|
||||
```
|
||||
|
||||
### Configuring values
|
||||
|
||||
TDengine supports customization through `values.yaml`.
|
||||
You can obtain the complete list of values supported by the TDengine Chart with helm show values:
|
||||
|
||||
```shell
|
||||
helm show values tdengine-3.0.2.tgz
|
||||
```
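
For example, to save the default values to a local file:

```shell
helm show values tdengine-3.0.2.tgz > values.yaml
```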
|
||||
|
||||
After saving the output as `values.yaml`, you can modify various parameters in it, such as the number of replicas, storage class name, capacity size, TDengine configuration, and so on, and then use the following command to install the TDengine cluster:
|
||||
|
||||
```shell
|
||||
helm install tdengine tdengine-3.0.2.tgz -f values.yaml
|
||||
```
|
||||
|
||||
All parameters are as follows:
|
||||
|
||||
```yaml
|
||||
# Default values for tdengine.
|
||||
# This is a YAML-formatted file.
|
||||
# Declare variables to be passed into helm templates.
|
||||
|
||||
replicaCount: 1
|
||||
|
||||
image:
|
||||
prefix: tdengine/tdengine
|
||||
#pullPolicy: Always
|
||||
# Overrides the image tag whose default is the chart appVersion.
|
||||
# tag: "3.0.2.0"
|
||||
|
||||
service:
|
||||
# ClusterIP is the default service type, use NodeIP only if you know what you are doing.
|
||||
type: ClusterIP
|
||||
ports:
|
||||
# TCP range required
|
||||
tcp: [6030, 6041, 6042, 6043, 6044, 6046, 6047, 6048, 6049, 6060]
|
||||
# UDP range
|
||||
udp: [6044, 6045]
|
||||
|
||||
|
||||
# Set timezone here, not in taoscfg
|
||||
timezone: "Asia/Shanghai"
|
||||
|
||||
resources:
|
||||
# We usually recommend not to specify default resources and to leave this as a conscious
|
||||
# choice for the user. This also increases chances charts run on environments with little
|
||||
# resources, such as Minikube. If you do want to specify resources, uncomment the following
|
||||
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
|
||||
# limits:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
# requests:
|
||||
# cpu: 100m
|
||||
# memory: 128Mi
|
||||
|
||||
storage:
|
||||
# Set storageClassName for pvc. K8s use default storage class if not set.
|
||||
#
|
||||
className: ""
|
||||
dataSize: "100Gi"
|
||||
logSize: "10Gi"
|
||||
|
||||
nodeSelectors:
|
||||
taosd:
|
||||
# node selectors
|
||||
|
||||
clusterDomainSuffix: ""
|
||||
# Config settings in taos.cfg file.
|
||||
#
|
||||
# The helm/k8s support will use environment variables for taos.cfg,
|
||||
# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`,
|
||||
# to a camelCase taos config variable `debugFlag`.
|
||||
#
|
||||
# Note:
|
||||
# 1. firstEp/secondEp: should not be set here, it's auto generated at scale-up.
|
||||
# 2. serverPort: should not be set, we'll use the default 6030 in many places.
|
||||
# 3. fqdn: will be auto generated in kubernetes, user should not care about it.
|
||||
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
|
||||
#
|
||||
# Btw, keep quotes "" around the value like below, even the value will be number or not.
|
||||
taoscfg:
|
||||
# Starts as cluster or not, must be 0 or 1.
|
||||
# 0: all pods will start as a separate TDengine server
|
||||
# 1: pods will start as TDengine server cluster. [default]
|
||||
CLUSTER: "1"
|
||||
|
||||
# number of replications, for cluster only
|
||||
TAOS_REPLICA: "1"
|
||||
|
||||
|
||||
# TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
|
||||
#TAOS_NUM_OF_RPC_THREADS: "2"
|
||||
|
||||
#
|
||||
# TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
|
||||
#TAOS_NUM_OF_COMMIT_THREADS: "4"
|
||||
|
||||
# enable/disable installation / usage report
|
||||
#TAOS_TELEMETRY_REPORTING: "1"
|
||||
|
||||
# time interval of system monitor, seconds
|
||||
#TAOS_MONITOR_INTERVAL: "30"
|
||||
|
||||
# time interval of dnode status reporting to mnode, seconds, for cluster only
|
||||
#TAOS_STATUS_INTERVAL: "1"
|
||||
|
||||
# time interval of heart beat from shell to dnode, seconds
|
||||
#TAOS_SHELL_ACTIVITY_TIMER: "3"
|
||||
|
||||
# minimum sliding window time, milli-second
|
||||
#TAOS_MIN_SLIDING_TIME: "10"
|
||||
|
||||
# minimum time window, milli-second
|
||||
#TAOS_MIN_INTERVAL_TIME: "1"
|
||||
|
||||
# the compressed rpc message, option:
|
||||
# -1 (no compression)
|
||||
# 0 (all message compressed),
|
||||
# > 0 (rpc message body which larger than this value will be compressed)
|
||||
#TAOS_COMPRESS_MSG_SIZE: "-1"
|
||||
|
||||
# max number of connections allowed in dnode
|
||||
#TAOS_MAX_SHELL_CONNS: "50000"
|
||||
|
||||
# stop writing logs when the disk size of the log folder is less than this value
|
||||
#TAOS_MINIMAL_LOG_DIR_G_B: "0.1"
|
||||
|
||||
# stop writing temporary files when the disk size of the tmp folder is less than this value
|
||||
#TAOS_MINIMAL_TMP_DIR_G_B: "0.1"
|
||||
|
||||
# if disk free space is less than this value, taosd service exit directly within startup process
|
||||
#TAOS_MINIMAL_DATA_DIR_G_B: "0.1"
|
||||
|
||||
# One mnode is equal to the number of vnode consumed
|
||||
#TAOS_MNODE_EQUAL_VNODE_NUM: "4"
|
||||
|
||||
  # enable/disable http service
|
||||
#TAOS_HTTP: "1"
|
||||
|
||||
# enable/disable system monitor
|
||||
#TAOS_MONITOR: "1"
|
||||
|
||||
# enable/disable async log
|
||||
#TAOS_ASYNC_LOG: "1"
|
||||
|
||||
#
|
||||
# time of keeping log files, days
|
||||
#TAOS_LOG_KEEP_DAYS: "0"
|
||||
|
||||
# The following parameters are used for debug purpose only.
|
||||
# debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
|
||||
# 131: output warning and error
|
||||
# 135: output debug, warning and error
|
||||
# 143: output trace, debug, warning and error to log
|
||||
# 199: output debug, warning and error to both screen and file
|
||||
# 207: output trace, debug, warning and error to both screen and file
|
||||
#
|
||||
  # debug flag for all log types, takes effect when non-zero
|
||||
#TAOS_DEBUG_FLAG: "143"
|
||||
|
||||
# generate core file when service crash
|
||||
#TAOS_ENABLE_CORE_FILE: "1"
|
||||
```
|
||||
|
||||
### Expansion
|
||||
|
||||
For expansion, refer to the explanation in the previous section; the Helm deployment requires a few additional operations.
|
||||
First, retrieve the name of the StatefulSet from the deployment.
|
||||
|
||||
```shell
|
||||
export STS_NAME=$(kubectl get statefulset \
|
||||
-l "app.kubernetes.io/name=tdengine" \
|
||||
-o jsonpath="{.items[0].metadata.name}")
|
||||
```
|
||||
|
||||
The expansion operation is extremely simple, just increase the replica. The following command expands TDengine to three nodes:
|
||||
|
||||
```shell
|
||||
kubectl scale --replicas 3 statefulset/$STS_NAME
|
||||
```
|
||||
|
||||
Use the commands `show dnodes` and `show mnodes` to check whether the expansion was successful.
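
For example, reusing the StatefulSet name retrieved above:

```shell
kubectl exec -it ${STS_NAME}-0 -- taos -s "show dnodes; show mnodes"
```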
|
||||
|
||||
### Cleaning up the Cluster
|
||||
|
||||
Under Helm management, the cleanup operation also becomes simple:
|
||||
|
||||
```shell
|
||||
helm uninstall tdengine
|
||||
```
|
||||
|
||||
However, Helm does not automatically remove PVCs; you need to manually retrieve and then delete them.
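
A sketch of the manual PVC cleanup, assuming the chart labels its resources with app.kubernetes.io/instance=tdengine (verify the labels on your PVCs first):

```shell
# List the PVCs left behind by the release, then delete them
kubectl get pvc -l app.kubernetes.io/instance=tdengine
kubectl delete pvc -l app.kubernetes.io/instance=tdengine
```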
|
|
@ -0,0 +1,215 @@
|
|||
---
|
||||
title: Manual Deployment
|
||||
slug: /operations-and-maintenance/deploy-your-cluster/manual-deployment
|
||||
---
|
||||
|
||||
You can deploy TDengine manually on a physical or virtual machine.
|
||||
|
||||
## Deploying taosd
|
||||
|
||||
taosd is the most important service component in the TDengine cluster. This section describes the steps to manually deploy a taosd cluster.
|
||||
|
||||
### 1. Clear Data
|
||||
|
||||
If the physical nodes used to set up the cluster contain previous test data, or have had other versions of TDengine installed (such as 1.x/2.x), please uninstall them and clear all data first.
|
||||
|
||||
### 2. Check Environment
|
||||
|
||||
Before deploying the TDengine cluster, it is crucial to thoroughly check the network settings of all dnodes and the physical nodes where the applications are located. Here are the steps to check:
|
||||
|
||||
- Step 1: Execute the `hostname -f` command on each physical node to view and confirm that all node hostnames are unique. This step can be omitted for nodes where application drivers are located.
|
||||
- Step 2: Execute the `ping host` command on each physical node, where host is the hostname of another physical node. This step detects network connectivity between the current node and the other physical nodes. If the ping fails, immediately check the network and DNS settings. For Linux operating systems, check the `/etc/hosts` file; for Windows operating systems, check the `C:\Windows\system32\drivers\etc\hosts` file. Network issues will prevent the formation of a cluster, so be sure to resolve them.
|
||||
- Step 3: Repeat the above network detection steps on the physical nodes where the application is running. If the network is found to be problematic, the application will not be able to connect to the taosd service. At this point, carefully check the DNS settings or hosts file of the physical node where the application is located to ensure it is configured correctly.
|
||||
- Step 4: Check ports to ensure that all hosts in the cluster can communicate over TCP on port 6030.
|
||||
|
||||
By following these steps, you can ensure that all nodes communicate smoothly at the network level, laying a solid foundation for the successful deployment of the TDengine cluster.
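
The checks in steps 1, 2, and 4 can be run directly from a shell; a minimal sketch, assuming the example hosts h1.tdengine.com and h2.tdengine.com and that netcat is installed:

```shell
hostname -f                   # step 1: confirm a unique FQDN on this node
ping -c 3 h2.tdengine.com     # step 2: verify connectivity to the other nodes
nc -zv h1.tdengine.com 6030   # step 4: verify TCP connectivity on port 6030
```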
|
||||
|
||||
### 3. Installation
|
||||
|
||||
To ensure consistency and stability within the cluster, install the same version of TDengine on all physical nodes.
|
||||
|
||||
### 4. Modify Configuration
|
||||
|
||||
Modify the configuration file of TDengine (the configuration files of all nodes need to be modified). Assuming the endpoint of the first dnode to be started is `h1.tdengine.com:6030`, the cluster-related parameters are as follows.
|
||||
|
||||
```shell
|
||||
# firstEp is the first dnode that each dnode connects to after the initial startup
|
||||
firstEp h1.tdengine.com:6030
|
||||
# Must be configured to the FQDN of this dnode, if there is only one hostname on this machine, you can comment out or delete the following line
|
||||
fqdn h1.tdengine.com
|
||||
# Configure the port of this dnode, default is 6030
|
||||
serverPort 6030
|
||||
```
|
||||
|
||||
The parameters that must be modified are firstEp and fqdn. For each dnode, the firstEp configuration should be identical, but fqdn must be set to the FQDN of the dnode itself. Other parameters do not need to be modified unless you are clear about why they should be changed.
|
||||
|
||||
For dnodes wishing to join the cluster, it is essential to ensure that the parameters related to the TDengine cluster listed in the table below are set identically. Any mismatch in parameters may prevent the dnode from successfully joining the cluster.
|
||||
|
||||
| Parameter Name | Meaning |
|
||||
|:----------------:|:---------------------------------------------------------:|
|
||||
| statusInterval | Interval at which dnode reports status to mnode |
|
||||
| timezone | Time zone |
|
||||
| locale | System locale information and encoding format |
|
||||
| charset | Character set encoding |
|
||||
| ttlChangeOnWrite | Whether ttl expiration changes with table modification |
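
To verify that these parameters match across dnodes, you can, for example, compare them on each node (assuming the default configuration path /etc/taos/taos.cfg):

```shell
grep -E "^(statusInterval|timezone|locale|charset|ttlChangeOnWrite)" /etc/taos/taos.cfg
```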
|
||||
|
||||
### 5. Start
|
||||
|
||||
Start the first dnode, such as `h1.tdengine.com`, following the steps mentioned above. Then run taos in a terminal to start the TDengine CLI, and execute the `show dnodes` command in it to view all dnode information in the current cluster.
|
||||
|
||||
```shell
|
||||
taos> show dnodes;
|
||||
 id |       endpoint       | vnodes | support_vnodes | status |       create_time       | note |
=================================================================================================
  1 | h1.tdengine.com:6030 |      0 |           1024 | ready  | 2022-07-16 10:50:42.673 |      |
|
||||
```
|
||||
|
||||
You can see that the endpoint of the dnode node that has just started is `h1.tdengine.com:6030`. This address is the first Ep of the new cluster.
|
||||
|
||||
### 6. Adding dnode
|
||||
|
||||
Following the steps mentioned earlier, start taosd on each physical node. Each dnode's firstEp parameter in taos.cfg must be configured to the endpoint of the new cluster's first node, which in this case is `h1.tdengine.com:6030`. On the machine where the first dnode is located, run taos in a terminal to open the TDengine CLI, log into the TDengine cluster, and execute the following SQL.
|
||||
|
||||
```shell
|
||||
create dnode "h2.tdengine.com:6030"
|
||||
```
|
||||
|
||||
This adds the new dnode's endpoint to the cluster's endpoint list. The `fqdn:port` must be enclosed in double quotes; otherwise, the statement fails with an error. Note that you should replace the example h2.tdengine.com:6030 with the endpoint of the new dnode. Then execute the following SQL to check whether the new node has joined successfully. If the dnode you want to add is currently offline, refer to the "Common Issues" section later in this chapter for a solution.
|
||||
|
||||
```shell
|
||||
show dnodes;
|
||||
```
|
||||
|
||||
In the output, confirm that the fqdn and port of the listed dnode match the endpoint you just added; if they do not, correct the entry to the right endpoint. By following the steps above, you can keep adding new dnodes to the cluster one by one, expanding the cluster and improving overall performance. Following the correct process when adding new nodes helps maintain the stability and reliability of the cluster.
|
||||
|
||||
**Tips**
|
||||
|
||||
- Any dnode that has joined the cluster can serve as the firstEp for subsequent nodes to be added. The firstEp parameter only functions when that dnode first joins the cluster. After joining, the dnode will save the latest mnode's endpoint list, and subsequently, it no longer depends on this parameter. The firstEp parameter in the configuration file is mainly used for client connections, and if no parameters are set for TDengine CLI, it will default to connecting to the node specified by firstEp.
|
||||
- Two dnodes that have not configured the firstEp parameter will run independently after starting. At this time, it is not possible to join one dnode to another to form a cluster.
|
||||
- TDengine does not allow merging two independent clusters into a new cluster.
|
||||
|
||||
### 7. Adding mnode
|
||||
|
||||
When creating a TDengine cluster, the first dnode automatically becomes the cluster's mnode, responsible for managing and coordinating the cluster. To achieve high availability of the mnode, you need to manually create mnodes on subsequent dnodes. Note that a cluster can have at most 3 mnodes, and at most one mnode can be created on each dnode. When the number of dnodes in the cluster reaches or exceeds 3, you can create additional mnodes for the existing cluster. On the first dnode, log into TDengine through the TDengine CLI and execute the following SQL.
|
||||
|
||||
```shell
|
||||
create mnode on dnode <dnodeId>
|
||||
```
|
||||
|
||||
Be sure to replace dnodeId in the example above with the ID of the newly created dnode (obtainable via the `show dnodes` command). Finally, execute `show mnodes`, as shown below, to check whether the newly created mnode has successfully joined the cluster.
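
```shell
show mnodes;
```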
|
||||
|
||||
**Tips**
|
||||
|
||||
During the process of setting up a TDengine cluster, if a new node always shows as offline after you execute the create dnode command, follow these steps for troubleshooting.
|
||||
|
||||
- Step 1, check whether the taosd service on the new node has started normally. You can confirm this by checking the log files or using the ps command.
|
||||
- Step 2, if the taosd service has started, check whether the new node's network connection is working and confirm that the firewall has been turned off. Network issues or firewall settings may prevent the node from communicating with other nodes in the cluster.
|
||||
- Step 3, use the `taos -h fqdn` command to try to connect to the new node, then execute the `show dnodes` command. This displays the running status of the new node as an independent cluster. If the displayed list is inconsistent with that shown on the main node, the new node has likely formed a single-node cluster of its own. To resolve this, follow these steps: first, stop the taosd service on the new node; second, clear all files in the dataDir directory specified in the new node's taos.cfg configuration file, which deletes all data and configuration information related to that node; finally, restart the taosd service on the new node. This resets the new node to its initial state, ready to rejoin the main cluster.
|
||||
|
||||
## Deploying taosAdapter
|
||||
|
||||
This section discusses how to deploy taosAdapter, which provides RESTful and WebSocket access capabilities for the TDengine cluster, thus playing a very important role in the cluster.
|
||||
|
||||
1. Installation
|
||||
|
||||
taosAdapter is available as soon as the installation of TDengine Enterprise is complete. If you want to deploy taosAdapter on separate servers, TDengine Enterprise must be installed on those servers as well.
|
||||
|
||||
2. Single Instance Deployment
|
||||
|
||||
Deploying a single instance of taosAdapter is very simple. For specific commands and configuration parameters, please refer to the taosAdapter section in the manual.
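
For example, on a systemd-based installation (the unit name taosadapter is typical; verify against your system), you might start one instance and verify its REST endpoint like this:

```shell
sudo systemctl start taosadapter
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```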
|
||||
|
||||
3. Multiple Instances Deployment
|
||||
|
||||
The main purposes of deploying multiple instances of taosAdapter are as follows:
|
||||
|
||||
- To increase the throughput of the cluster and prevent taosAdapter from becoming a system bottleneck.
|
||||
- To enhance the robustness and high availability of the cluster, allowing requests from the business system to be automatically routed to other instances when one instance fails.
|
||||
|
||||
When deploying multiple instances of taosAdapter, you must address load balancing to avoid overloading some nodes while others remain idle. During deployment, each instance is deployed separately, and the steps for each are exactly the same as for a single instance. The critical part is configuring Nginx. Below is a verified best-practice configuration; you only need to replace the endpoints with the correct addresses for your environment. For the meaning of each parameter, refer to the official Nginx documentation.
|
||||
|
||||
```text
|
||||
user root;
|
||||
worker_processes auto;
|
||||
error_log /var/log/nginx_error.log;
|
||||
|
||||
|
||||
events {
|
||||
use epoll;
|
||||
worker_connections 1024;
|
||||
}
|
||||
|
||||
http {
|
||||
|
||||
access_log off;
|
||||
|
||||
map $http_upgrade $connection_upgrade {
|
||||
default upgrade;
|
||||
'' close;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 6041;
|
||||
location ~* {
|
||||
proxy_pass http://dbserver;
|
||||
proxy_read_timeout 600s;
|
||||
proxy_send_timeout 600s;
|
||||
proxy_connect_timeout 600s;
|
||||
proxy_next_upstream error http_502 non_idempotent;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $http_connection;
|
||||
}
|
||||
}
|
||||
server {
|
||||
listen 6043;
|
||||
location ~* {
|
||||
proxy_pass http://keeper;
|
||||
proxy_read_timeout 60s;
|
||||
proxy_next_upstream error http_502 http_500 non_idempotent;
|
||||
}
|
||||
}
|
||||
|
||||
server {
|
||||
listen 6060;
|
||||
location ~* {
|
||||
proxy_pass http://explorer;
|
||||
proxy_read_timeout 60s;
|
||||
proxy_next_upstream error http_502 http_500 non_idempotent;
|
||||
}
|
||||
}
|
||||
upstream dbserver {
|
||||
least_conn;
|
||||
server 172.16.214.201:6041 max_fails=0;
|
||||
server 172.16.214.202:6041 max_fails=0;
|
||||
server 172.16.214.203:6041 max_fails=0;
|
||||
}
|
||||
upstream keeper {
|
||||
ip_hash;
|
||||
server 172.16.214.201:6043 ;
|
||||
server 172.16.214.202:6043 ;
|
||||
server 172.16.214.203:6043 ;
|
||||
}
|
||||
    upstream explorer {
|
||||
ip_hash;
|
||||
server 172.16.214.201:6060 ;
|
||||
server 172.16.214.202:6060 ;
|
||||
server 172.16.214.203:6060 ;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Deploying taosKeeper
|
||||
|
||||
To use the monitoring capabilities of TDengine, taosKeeper is an essential component. For monitoring, please refer to [TDinsight](../../../tdengine-reference/components/tdinsight), and for details on deploying taosKeeper, please refer to the [taosKeeper Reference Manual](../../../tdengine-reference/components/taoskeeper).
|
||||
|
||||
## Deploying taosX
|
||||
|
||||
To utilize the data ingestion capabilities of TDengine, it is necessary to deploy the taosX service. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
|
||||
|
||||
## Deploying taosX-Agent
|
||||
|
||||
For some data sources such as Pi, OPC, etc., due to network conditions and data source access restrictions, taosX cannot directly access the data sources. In such cases, a proxy service, taosX-Agent, needs to be deployed. For detailed explanations and deployment, please refer to the enterprise edition reference manual.
|
||||
|
||||
## Deploying taos-Explorer
|
||||
|
||||
TDengine provides the capability to visually manage TDengine clusters. To use the graphical interface, the taos-Explorer service needs to be deployed. For detailed explanations and deployment, please refer to the [taos-Explorer Reference Manual](../../../tdengine-reference/components/taosexplorer/)
|
|
@ -0,0 +1,93 @@
|
|||
---
|
||||
title: Docker Deployment
|
||||
slug: /operations-and-maintenance/deploy-your-cluster/docker-deployment
|
||||
---
|
||||
|
||||
You can deploy TDengine services in Docker containers and use environment variables in the docker run command line or docker-compose file to control the behavior of services in the container.
|
||||
|
||||
## Starting TDengine
|
||||
|
||||
The TDengine image is launched with HTTP service activated by default. Use the following command to create a containerized TDengine environment with HTTP service.
|
||||
|
||||
```shell
|
||||
docker run -d --name tdengine \
|
||||
-v ~/data/taos/dnode/data:/var/lib/taos \
|
||||
-v ~/data/taos/dnode/log:/var/log/taos \
|
||||
-p 6041:6041 tdengine/tdengine
|
||||
```
|
||||
|
||||
Detailed parameter explanations are as follows:
|
||||
|
||||
- /var/lib/taos: Default data file directory for TDengine, can be modified through the configuration file.
|
||||
- /var/log/taos: Default log file directory for TDengine, can be modified through the configuration file.
|
||||
|
||||
The above command starts a container named tdengine and maps the HTTP service's port 6041 to the host port 6041. The following command can verify if the HTTP service in the container is available.
|
||||
|
||||
```shell
|
||||
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
|
||||
```
|
||||
|
||||
Run the following command to access TDengine within the container.
|
||||
|
||||
```shell
|
||||
$ docker exec -it tdengine taos
|
||||
|
||||
taos> show databases;
|
||||
name |
|
||||
=================================
|
||||
information_schema |
|
||||
performance_schema |
|
||||
Query OK, 2 rows in database (0.033802s)
|
||||
```
|
||||
|
||||
Within the container, TDengine CLI or various connectors (such as JDBC-JNI) connect to the server via the container's hostname. Accessing TDengine inside the container from outside is more complex, and using RESTful/WebSocket connection methods is the simplest approach.
|
||||
|
||||
## Starting TDengine in host network mode
|
||||
|
||||
Run the following command to start TDengine in host network mode, which allows using the host's FQDN to establish connections, rather than using the container's hostname.
|
||||
|
||||
```shell
|
||||
docker run -d --name tdengine --network host tdengine/tdengine
|
||||
```
|
||||
|
||||
This method is similar to starting TDengine on the host using the systemctl command. If the TDengine client is already installed on the host, you can directly use the following command to access the TDengine service.
|
||||
|
||||
```shell
|
||||
$ taos
|
||||
|
||||
taos> show dnodes;
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
=================================================================================================================================================
|
||||
1 | vm98:6030 | 0 | 32 | ready | 2022-08-19 14:50:05.337 | |
|
||||
Query OK, 1 rows in database (0.010654s)
|
||||
```
|
||||
|
||||
## Start TDengine with a specified hostname and port
|
||||
|
||||
Use the following command to establish a connection on a specified hostname using the TAOS_FQDN environment variable or the fqdn configuration item in taos.cfg. This method provides greater flexibility for deploying TDengine.
|
||||
|
||||
```shell
|
||||
docker run -d \
|
||||
--name tdengine \
|
||||
-e TAOS_FQDN=tdengine \
|
||||
-p 6030:6030 \
|
||||
-p 6041-6049:6041-6049 \
|
||||
-p 6041-6049:6041-6049/udp \
|
||||
tdengine/tdengine
|
||||
```
|
||||
|
||||
First, the above command starts a TDengine service in the container, listening on the hostname tdengine, and maps the container's port 6030 to the host's port 6030, and the container's port range [6041, 6049] to the host's port range [6041, 6049]. If the port range on the host is already in use, you can modify the command to specify a free port range on the host.
|
||||
|
||||
Secondly, ensure that the hostname tdengine is resolvable in /etc/hosts. Use the following command to save the correct configuration information to the hosts file.
|
||||
|
||||
```shell
|
||||
echo 127.0.0.1 tdengine | sudo tee -a /etc/hosts
|
||||
```
|
||||
|
||||
Finally, you can access the TDengine service using the TDengine CLI with tdengine as the server address, as follows.
|
||||
|
||||
```shell
|
||||
taos -h tdengine -P 6030
|
||||
```
|
||||
|
||||
If TAOS_FQDN is set to the host's own hostname, the effect is the same as starting TDengine in host network mode.
|
|
@ -0,0 +1,812 @@
|
|||
---
|
||||
title: Kubernetes Deployment
|
||||
slug: /operations-and-maintenance/deploy-your-cluster/kubernetes-deployment
|
||||
---
|
||||
|
||||
You can use kubectl or Helm to deploy TDengine in Kubernetes.
|
||||
|
||||
Note that Helm is only supported in TDengine Enterprise. To deploy TDengine OSS in Kubernetes, use kubectl.
|
||||
|
||||
## Deploy TDengine with kubectl
|
||||
|
||||
As a time-series database designed for cloud-native architectures, TDengine inherently supports Kubernetes deployment. This section introduces, step by step, how to create a highly available TDengine cluster for production use with YAML files, with a focus on common operations of TDengine in a Kubernetes environment. It requires readers to have a certain understanding of Kubernetes, be proficient in running common kubectl commands, and understand concepts such as statefulset, service, and pvc. Readers unfamiliar with these concepts can refer to the Kubernetes official website.
|
||||
To achieve high availability, the cluster must meet the following requirements:
|
||||
|
||||
- 3 or more dnodes: Multiple vnodes in the same vgroup of TDengine should not be distributed on the same dnode, so if creating a database with 3 replicas, the number of dnodes should be 3 or more.
|
||||
- 3 mnodes: mnodes are responsible for managing the entire cluster, with TDengine defaulting to one mnode. If the dnode hosting this mnode goes offline, the entire cluster becomes unavailable.
|
||||
- 3 replicas of the database: TDengine's replica configuration is at the database level, so 3 replicas can ensure that the cluster remains operational even if any one of the 3 dnodes goes offline. If 2 dnodes go offline, the cluster becomes unavailable because RAFT cannot complete the election. (Enterprise edition: In disaster recovery scenarios, if the data files of any node are damaged, recovery can be achieved by restarting the dnode.)
|
||||
|
||||
### Prerequisites
|
||||
|
||||
To deploy and manage a TDengine cluster using Kubernetes, the following preparations need to be made.
|
||||
|
||||
- This article applies to Kubernetes v1.19 and above.
|
||||
- This article uses the kubectl tool for installation and deployment, please install the necessary software in advance.
|
||||
- Kubernetes has been installed and deployed and can normally access or update necessary container repositories or other services.
|
||||
|
||||
### Configure Service
|
||||
|
||||
Create a Service configuration file: taosd-service.yaml, the service name metadata.name (here "taosd") will be used in the next step. First, add the ports used by TDengine, then set the determined labels app (here "tdengine") in the selector.
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: "taosd"
|
||||
labels:
|
||||
app: "tdengine"
|
||||
spec:
|
||||
ports:
|
||||
- name: tcp6030
|
||||
protocol: "TCP"
|
||||
port: 6030
|
||||
- name: tcp6041
|
||||
protocol: "TCP"
|
||||
port: 6041
|
||||
selector:
|
||||
app: "tdengine"
|
||||
```
|
||||
|
||||
### Stateful Services StatefulSet
|
||||
|
||||
According to Kubernetes' descriptions of various deployment types, we will use StatefulSet as the deployment resource type for TDengine. Create the file tdengine.yaml, where the replicas field defines the number of cluster nodes as 3. The node timezone is set to China (Asia/Shanghai), and each node is allocated 5Gi of standard storage, which you can modify according to actual conditions.
|
||||
|
||||
Please pay special attention to the configuration of startupProbe. After a dnode's Pod goes offline for a period of time and then restarts, the newly online dnode will be temporarily unavailable. If the startupProbe configuration is too small, Kubernetes will consider the Pod to be in an abnormal state and attempt to restart the Pod. This dnode's Pod will frequently restart and never return to a normal state.
|
||||
|
||||
```yaml
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
metadata:
|
||||
name: "tdengine"
|
||||
labels:
|
||||
app: "tdengine"
|
||||
spec:
|
||||
serviceName: "taosd"
|
||||
replicas: 3
|
||||
updateStrategy:
|
||||
type: RollingUpdate
|
||||
selector:
|
||||
matchLabels:
|
||||
app: "tdengine"
|
||||
template:
|
||||
metadata:
|
||||
name: "tdengine"
|
||||
labels:
|
||||
app: "tdengine"
|
||||
spec:
|
||||
affinity:
|
||||
podAntiAffinity:
|
||||
preferredDuringSchedulingIgnoredDuringExecution:
|
||||
- weight: 100
|
||||
podAffinityTerm:
|
||||
labelSelector:
|
||||
matchExpressions:
|
||||
- key: app
|
||||
operator: In
|
||||
values:
|
||||
- tdengine
|
||||
topologyKey: kubernetes.io/hostname
|
||||
containers:
|
||||
- name: "tdengine"
|
||||
image: "tdengine/tdengine:3.2.3.0"
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
ports:
|
||||
- name: tcp6030
|
||||
protocol: "TCP"
|
||||
containerPort: 6030
|
||||
- name: tcp6041
|
||||
protocol: "TCP"
|
||||
containerPort: 6041
|
||||
env:
|
||||
# POD_NAME for FQDN config
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
# SERVICE_NAME and NAMESPACE for fqdn resolve
|
||||
- name: SERVICE_NAME
|
||||
value: "taosd"
|
||||
- name: STS_NAME
|
||||
value: "tdengine"
|
||||
- name: STS_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
# TZ for timezone settings, we recommend to always set it.
|
||||
- name: TZ
|
||||
value: "Asia/Shanghai"
|
||||
# Environment variables with prefix TAOS_ will be parsed and converted into corresponding parameter in taos.cfg. For example, serverPort in taos.cfg should be configured by TAOS_SERVER_PORT when using K8S to deploy
|
||||
- name: TAOS_SERVER_PORT
|
||||
value: "6030"
|
||||
# Must set if you want a cluster.
|
||||
- name: TAOS_FIRST_EP
|
||||
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
|
||||
# TAOS_FQND should always be set in k8s env.
|
||||
- name: TAOS_FQDN
|
||||
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
|
||||
volumeMounts:
|
||||
- name: taosdata
|
||||
mountPath: /var/lib/taos
|
||||
startupProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
failureThreshold: 360
|
||||
periodSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
initialDelaySeconds: 5
|
||||
timeoutSeconds: 5000
|
||||
livenessProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
initialDelaySeconds: 15
|
||||
periodSeconds: 20
|
||||
volumeClaimTemplates:
|
||||
- metadata:
|
||||
name: taosdata
|
||||
spec:
|
||||
accessModes:
|
||||
- "ReadWriteOnce"
|
||||
storageClassName: "standard"
|
||||
resources:
|
||||
requests:
|
||||
storage: "5Gi"
|
||||
```
|
||||
|
||||
### Deploying TDengine Cluster Using kubectl Command
|
||||
|
||||
First, create the corresponding namespace `tdengine-test`, as well as the PVC, ensuring that there is enough remaining space with `storageClassName` set to `standard`. Then execute the following commands in sequence:
|
||||
|
||||
```shell
|
||||
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
|
||||
```
|
||||
|
||||
The above configuration will create a three-node TDengine cluster, with `dnode` automatically configured. You can use the `show dnodes` command to view the current cluster nodes:
|
||||
|
||||
```shell
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
|
||||
```
|
||||
|
||||
The output is as follows:
|
||||
|
||||
```shell
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
|
||||
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
|
||||
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
|
||||
Query OK, 3 row(s) in set (0.001853s)
|
||||
```

View the current mnode:

```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
         id: 1
   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
       role: leader
     status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```

Create mnode

```shell
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```

View mnode

```shell
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"

taos> show mnodes\G
*************************** 1.row ***************************
         id: 1
   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
       role: leader
     status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
         id: 2
   endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
       role: follower
     status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
         id: 3
   endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
       role: follower
     status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```

### Port Forwarding

The kubectl port forwarding feature allows applications to access the TDengine cluster running in the Kubernetes environment.

```shell
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```

Use the curl command to verify the TDengine REST API on port 6041:

```shell
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```

### Cluster Expansion

TDengine supports cluster expansion:

```shell
kubectl scale statefulsets tdengine -n tdengine-test --replicas=4
```
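
Optionally, you can block until the new replica reports ready; a convenience sketch using kubectl's rollout tracking:

```shell
# Waits until all replicas of the StatefulSet are rolled out and ready
kubectl rollout status statefulset/tdengine -n tdengine-test
```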

The command line argument `--replicas=4` indicates that the TDengine cluster is to be expanded to 4 nodes. After execution, first check the status of the POD:

```shell
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```

Output as follows:

```text
NAME         READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
tdengine-0   1/1     Running   4 (6h26m ago)   6h53m   10.244.2.75    node86   <none>           <none>
tdengine-1   1/1     Running   1 (6h39m ago)   6h53m   10.244.0.59    node84   <none>           <none>
tdengine-2   1/1     Running   0               5h16m   10.244.1.224   node85   <none>           <none>
tdengine-3   1/1     Running   0               3m24s   10.244.2.76    node86   <none>           <none>
```

At this point, the Pod status is still Running. The dnode's status in the TDengine cluster can be seen after the Pod status changes to Ready:

```shell
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```

The dnode list of the four-node TDengine cluster after expansion:

```text
taos> show dnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
 1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
 2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
 3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
 4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```

### Cleaning up the Cluster

**Warning**
When deleting PVCs, pay attention to the PV persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when the PVC is deleted, the PV is automatically cleaned up along with the underlying CSI storage resources. If this policy is not configured, then after the PVCs are deleted, manually removing the PVs may still not release the corresponding CSI storage resources.
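
A minimal sketch for checking and adjusting the reclaim policy before deleting PVCs (`<pv-name>` is a placeholder for an actual PV name from the first command):

```shell
# List PVs with their current reclaim policy
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy
# Switch a PV to the Delete policy so the underlying storage is cleaned up with the PVC
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```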

To completely remove the TDengine cluster, you need to clean up the statefulset, svc, and pvc, and finally delete the namespace.

```shell
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete namespace tdengine-test
```

### Cluster Disaster Recovery Capabilities

For high availability and reliability of TDengine in a Kubernetes environment, in terms of hardware damage and disaster recovery, it can be considered on two levels:

- The disaster recovery capabilities of the underlying distributed block storage, which includes multiple replicas of block storage. Popular distributed block storage like Ceph has multi-replica capabilities, extending storage replicas to different racks, cabinets, rooms, and data centers (or directly using block storage services provided by public cloud vendors).
- TDengine's own disaster recovery: TDengine Enterprise inherently supports recovering a dnode's work by launching a new blank dnode when an existing dnode goes offline permanently (for example, due to physical disk damage and data loss).
## Deploy TDengine with Helm

Helm is the package manager for Kubernetes.
The previous section on deploying the TDengine cluster with Kubernetes was simple enough, but Helm can provide even more powerful capabilities.

### Installing Helm

```shell
curl -fsSL -o get_helm.sh \
  https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```

Helm operates Kubernetes using kubectl and kubeconfig configurations, which can be set up following the Rancher installation configuration for Kubernetes.
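
Before installing any chart, it is worth confirming that Helm can reach your cluster; a quick sanity-check sketch:

```shell
# Verify the Helm client itself
helm version
# Verify which cluster kubectl (and therefore Helm) is pointed at
kubectl config current-context
kubectl cluster-info
```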

### Installing TDengine Chart

The TDengine Chart has not yet been released to the Helm repository; it can currently be downloaded directly from GitHub:

```shell
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-enterprise-3.5.0.tgz
```

Note that this is the enterprise edition; the community edition chart is not yet available.

Follow the steps below to install the TDengine Chart:

```shell
# Edit the values.yaml file to set the topology of the cluster
vim values.yaml
helm install tdengine tdengine-enterprise-3.5.0.tgz -f values.yaml
```

#### Case 1: Simple 1-node Deployment

The following is a simple example of deploying a single-node TDengine cluster using Helm.

```yaml
# This example is a simple deployment with one server replica.
name: "tdengine"

image:
  repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
  server: taosx/integrated:3.3.5.1-b0a54bdd

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

labels:
  app: "tdengine"
  # Add more labels as needed.

services:
  server:
    type: ClusterIP
    replica: 1
    ports:
      # TCP range required
      tcp: [6041, 6030, 6060]
      # UDP range, optional
      udp:
    volumes:
      - name: data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
    files:
      - name: cfg # must be lower case.
        mountPath: /etc/taos/taos.cfg
        content: |
          dataDir /var/lib/taos/
          logDir /var/log/taos/
```

Let's explain the above configuration:

- name: The name of the deployment, here it is "tdengine".
- image:
  - repository: The image repository address; remember to leave a trailing slash for the repository, or set it to an empty string to use docker.io.
  - server: The specific name and tag of the server image. You need to ask your business partner for the TDengine Enterprise image.
- timezone: Set the timezone, here it is "Asia/Shanghai".
- labels: Add labels to the deployment; here an app label has the value "tdengine", and more labels can be added as needed.
- services:
  - server: Configure the server service.
    - type: The service type, here it is **ClusterIP**.
    - replica: The number of replicas, here it is 1.
    - ports: Configure the ports of the service.
      - tcp: The required TCP port range, here it is [6041, 6030, 6060].
      - udp: The optional UDP port range, which is not configured here.
    - volumes: Configure the volumes.
      - name: The name of the volume; here there are two volumes, data and log.
      - mountPath: The mount path of the volume.
      - spec: The specification of the volume.
        - storageClassName: The storage class name, here it is **local-path**.
        - accessModes: The access mode, here it is **ReadWriteOnce**.
        - resources.requests.storage: The requested storage size, here it is **10Gi**.
    - files: Configure the files to mount in the TDengine server.
      - name: The name of the file, here it is **cfg**.
      - mountPath: The mount path of the file, which is **taos.cfg**.
      - content: The content of the file; here the **dataDir** and **logDir** are configured.

After configuring the values.yaml file, use the following command to install the TDengine Chart:

```shell
helm install simple tdengine-enterprise-3.5.0.tgz -f values.yaml
```

After installation, the output shows instructions for checking the status of the TDengine cluster:

```shell
NAME: simple
LAST DEPLOYED: Sun Feb  9 13:40:00 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get first POD name:

export POD_NAME=$(kubectl get pods --namespace default \
  -l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=simple" -o jsonpath="{.items[0].metadata.name}")

2. Show dnodes/mnodes:

kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"

3. Run into TDengine CLI:

kubectl --namespace default exec -it $POD_NAME -- taos
```

Follow the instructions to check the status of the TDengine cluster:

```shell
root@u1-58:/data1/projects/helm# kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
Welcome to the TDengine Command Line Interface, Client Version:3.3.5.1
Copyright (c) 2023 by TDengine, all rights reserved.

taos> show dnodes; show mnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | machine_id |
==========================================================================================================================================================================================================
 1 | simple-tdengine-0.simple-td... | 0 | 85 | ready | 2025-02-07 21:17:34.903 | 2025-02-08 15:52:34.781 | | BWhWyPiEBrWZrQCSqTSc2a/H |
Query OK, 1 row(s) in set (0.005133s)

 id | endpoint | role | status | create_time | role_time |
==================================================================================================================================
 1 | simple-tdengine-0.simple-td... | leader | ready | 2025-02-07 21:17:34.906 | 2025-02-08 15:52:34.878 |
Query OK, 1 row(s) in set (0.004299s)
```

To clean up the TDengine cluster, use the following command:

```shell
helm uninstall simple
kubectl delete pvc -l app.kubernetes.io/instance=simple
```

#### Case 2: Tiered-Storage Deployment

The following is an example of deploying a TDengine cluster with tiered storage using Helm.

```yaml
# This is an example of a 3-tiered storage deployment with one server replica.
name: "tdengine"

image:
  repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
  server: taosx/integrated:3.3.5.1-b0a54bdd

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

labels:
  # Add more labels as needed.

services:
  server:
    type: ClusterIP
    replica: 1
    ports:
      # TCP range required
      tcp: [6041, 6030, 6060]
      # UDP range, optional
      udp:
    volumes:
      - name: tier0
        mountPath: /data/taos0/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: tier1
        mountPath: /data/taos1/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: tier2
        mountPath: /data/taos2/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
    environment:
      TAOS_DEBUG_FLAG: "131"
    files:
      - name: cfg # must be lower case.
        mountPath: /etc/taos/taos.cfg
        content: |
          dataDir /data/taos0/ 0 1
          dataDir /data/taos1/ 1 0
          dataDir /data/taos2/ 2 0
```

You can see that the configuration is similar to the previous one, with the addition of the tiered storage configuration. The dataDir configuration in the taos.cfg file is also modified to support tiered storage.

After configuring the values.yaml file, use the following command to install the TDengine Chart:

```shell
helm install tiered tdengine-enterprise-3.5.0.tgz -f values.yaml
```

#### Case 3: 2-replica Deployment

TDengine supports 2-replica deployment with an arbitrator, which can be configured as follows:

```yaml
# This example shows how to deploy a 2-replica TDengine cluster with an arbitrator.
name: "tdengine"

image:
  repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
  server: taosx/integrated:3.3.5.1-b0a54bdd

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

labels:
  my-app: "tdengine"
  # Add more labels as needed.

services:
  arbitrator:
    type: ClusterIP
    volumes:
      - name: arb-data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: arb-log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
  server:
    type: ClusterIP
    replica: 2
    ports:
      # TCP range required
      tcp: [6041, 6030, 6060]
      # UDP range, optional
      udp:
    volumes:
      - name: data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
      - name: log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: "10Gi"
```

You can see that the configuration is similar to the first one, with the addition of the arbitrator configuration. The arbitrator service is configured with the same storage as the server service, and the server service is configured with 2 replicas (the arbitrator is fixed at a single replica and cannot be changed).
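
As with the other cases, install the chart after editing values.yaml; a sketch assuming the release name `replica2` (the name is arbitrary):

```shell
helm install replica2 tdengine-enterprise-3.5.0.tgz -f values.yaml
```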

#### Case 4: 3-replica Deployment with Single taosX

```yaml
# This example shows how to deploy a 3-replica TDengine cluster with a separate taosx/explorer service.
# Users should know that the explorer/taosx service is not cluster-ready, so it is recommended to deploy it separately.
name: "tdengine"

image:
  repository: image.cloud.taosdata.com/ # Leave a trailing slash for the repository, or "" for no repository
  server: taosx/integrated:3.3.5.1-b0a54bdd

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

labels:
  # Add more labels as needed.

services:
  server:
    type: ClusterIP
    replica: 3
    ports:
      # TCP range required
      tcp: [6041, 6030]
      # UDP range, optional
      udp:
    volumes:
      - name: data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
      - name: log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
    environment:
      ENABLE_TAOSX: "0" # Disable taosx in server replicas.
  taosx:
    type: ClusterIP
    volumes:
      - name: taosx-data
        mountPath: /var/lib/taos
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
      - name: taosx-log
        mountPath: /var/log/taos/
        spec:
          storageClassName: "local-path"
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: "10Gi"
    files:
      - name: taosx
        mountPath: /etc/taos/taosx.toml
        content: |-
          # TAOSX configuration in TOML format.
          [monitor]
          # FQDN of the taosKeeper service, no default value
          fqdn = "localhost"
          # How often to send metrics to taosKeeper, default every 10 seconds. Only values from 1 to 10 are valid.
          interval = 10

          # log configuration
          [log]
          # All log files are stored in this directory
          #
          #path = "/var/log/taos" # on Linux/macOS

          # log filter level
          #
          #level = "info"

          # Compress archived log files or not
          #
          #compress = false

          # The number of log files retained by the current explorer server instance in the `path` directory
          #
          #rotationCount = 30

          # Rotate when the log file reaches this size
          #
          #rotationSize = "1GB"

          # Log downgrade when the remaining disk space reaches this size, only logging `ERROR` level logs
          #
          #reservedDiskSize = "1GB"

          # The number of days log files are retained
          #
          #keepDays = 30

          # Watch the configuration file for log.loggers changes, default to true.
          #
          #watching = true

          # Customize the log output level of modules; changes are applied after modifying the file when log.watching is enabled
          #
          # ## Examples:
          #
          # crate = "error"
          # crate::mod1::mod2 = "info"
          # crate::span[field=value] = "warn"
          #
          [log.loggers]
          #"actix_server::accept" = "warn"
          #"taos::query" = "warn"
```

You can see that the configuration is similar to the first one, with the addition of the taosx configuration. The taosx service is configured with a storage configuration similar to the server service's, and the server service is configured with 3 replicas. Since the taosx service is not cluster-ready, it is recommended to deploy it separately.

After configuring the values.yaml file, use the following command to install the TDengine Chart:

```shell
helm install replica3 tdengine-enterprise-3.5.0.tgz -f values.yaml
```

You can use the following command to expose the explorer service to the outside world with ingress:

```shell
tee replica3-ingress.yaml <<EOF
# This is a helm chart example for deploying 3 replicas of TDengine Explorer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: replica3-ingress
  namespace: default
spec:
  rules:
    - host: replica3.local.tdengine.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: replica3-tdengine-taosx
                port:
                  number: 6060
EOF

kubectl apply -f replica3-ingress.yaml
```

Use `kubectl get ingress` to view the ingress service.

```shell
root@server:/data1/projects/helm# kubectl get ingress
NAME               CLASS   HOSTS                         ADDRESS        PORTS   AGE
replica3-ingress   nginx   replica3.local.tdengine.com   192.168.1.58   80      48m
```

You can configure the domain name resolution to point to the ingress service's external IP address. For example, add the following line to the hosts file:

```conf
192.168.1.58 replica3.local.tdengine.com
```

Now you can access the explorer service through the domain name `replica3.local.tdengine.com`:

```shell
curl http://replica3.local.tdengine.com
```

@ -0,0 +1,11 @@
---
title: Deploying Your Cluster
slug: /operations-and-maintenance/deploy-your-cluster
---

import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';

You can deploy a TDengine cluster manually, by using Docker, or by using Kubernetes. For Kubernetes deployments, you can use kubectl or Helm.

<DocCardList items={useCurrentSidebarCategory().items}/>

@ -17,7 +17,9 @@ TDengine is designed for various writing scenarios, and many of these scenarios

```sql
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY'];
SHOW COMPACT [compact_id];
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY'];
SHOW COMPACTS;
SHOW COMPACT compact_id;
KILL COMPACT compact_id;
```

@ -1,14 +1,19 @@
---
title: Data Backup and Restoration
slug: /operations-and-maintenance/back-up-and-restore-data
slug: /operations-and-maintenance/data-backup-and-restoration
---

To prevent data loss and accidental deletions, TDengine provides comprehensive features such as data backup, restoration, fault tolerance, and real-time synchronization of remote data to ensure the security of data storage. This section briefly explains the backup and restoration functions.
import Image from '@theme/IdealImage';
import imgBackup from '../assets/data-backup-01.png';

You can back up the data in your TDengine cluster and restore it in the event that data is lost or damaged.

## Data Backup and Restoration Using taosdump

taosdump is an open-source tool that supports backing up data from a running TDengine cluster and restoring the backed-up data to the same or another running TDengine cluster. taosdump can back up the database as a logical data unit or back up data records within a specified time period in the database. When using taosdump, you can specify the directory path for data backup. If no directory path is specified, taosdump will default to backing up the data in the current directory.

### Back Up Data with taosdump

Below is an example of using taosdump to perform data backup.

```shell

@ -19,6 +24,8 @@ After executing the above command, taosdump will connect to the TDengine cluster

When using taosdump, if the specified storage path already contains data files, taosdump will prompt the user and exit immediately to avoid data overwriting. This means the same storage path can only be used for one backup. If you see related prompts, please operate carefully to avoid accidental data loss.
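
For reference, a hedged sketch of a backup invocation, assuming a database named `test`, a local server on the default port, and an empty target directory (adjust all three to your environment):

```shell
# -D selects the database to back up, -o sets the (empty) output directory
taosdump -h localhost -P 6030 -D test -o /file/path
```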

### Restore Data with taosdump

To restore data files from a specified local file path to a running TDengine cluster, you can execute the taosdump command by specifying command-line parameters and the data file path. Below is an example of using taosdump to perform data restoration.

```shell

@ -27,25 +34,62 @@ taosdump -i /file/path -h localhost -P 6030

After executing the above command, taosdump will connect to the TDengine cluster at localhost:6030 and restore the data files from /file/path to the TDengine cluster.
## Data Backup and Restoration Based on TDengine Enterprise
## Data Backup and Restoration in TDengine Enterprise

TDengine Enterprise provides an efficient incremental backup feature, with the following process.
TDengine Enterprise implements incremental backup and recovery of data by using data subscription. The backup and recovery functions of TDengine Enterprise include the following concepts:

Step 1, access the taosExplorer service through a browser, usually at port 6060 of the IP address where the TDengine cluster is located, such as `http://localhost:6060`.
1. Incremental data backup: Based on TDengine's data subscription function, all data changes of the **backup object** (including addition, modification, deletion, and metadata changes) are recorded to generate a backup file.
2. Data recovery: Use the backup files generated by incremental data backup to restore the **backup object** to a specified point in time.
3. Backup object: The object that the user backs up, which can be a **database** or a **supertable**.
4. Backup plan: A periodic backup task created for a backup object. The backup plan starts at a specified time point and executes a backup task at intervals of the **backup cycle**. Each backup task generates a **backup point**.
5. Backup point: Each time a backup task is executed, a set of backup files is generated. They correspond to a point in time, called a **backup point**. The first backup point is called the **initial backup point**.
6. Restore task: The user selects a backup point in the backup plan and creates a restore task. The restore task starts from the **initial backup point** and plays back the data changes in the backup files one by one until the specified backup point is reached.
Step 2, in the "System Management - Backup" page of the taosExplorer service, add a new data backup task, fill in the database name and backup storage file path in the task configuration information, and start the data backup after completing the task creation. Three parameters can be configured on the data backup configuration page:
|
||||
### Incremental Backup Example
|
||||
|
||||
- Backup cycle: Required, configure the time interval for each data backup execution, which can be selected from a dropdown menu to execute once every day, every 7 days, or every 30 days. After configuration, a data backup task will be initiated at 0:00 of the corresponding backup cycle;
|
||||
- Database: Required, configure the name of the database to be backed up (the database's wal_retention_period parameter must be greater than 0);
|
||||
- Directory: Required, configure the path in the running environment of taosX where the data will be backed up, such as `/root/data_backup`;
|
||||
<figure>
|
||||
<Image img={imgBackup} alt="Incremental backup process"/>
|
||||
<figcaption>Figure 1. Incremental backup process</figcaption>
|
||||
</figure>

Step 3, after the data backup task is completed, find the created data backup task in the list of created tasks on the same page, and directly perform one-click restoration to restore the data to TDengine.
1. The user creates a backup plan to execute a backup task every 1 day starting from 2024-08-27 00:00:00.
2. The first backup task is executed at 2024-08-27 00:00:00, generating an initial backup point.
3. After that, the backup task is executed every 1 day, and multiple backup points are generated.
4. Users can select a backup point and create a restore task.
5. The restore task starts from the initial backup point, applies the backup points one by one, and restores to the specified backup point.

Compared to taosdump, if the same data is backed up multiple times in the specified storage path, since TDengine Enterprise not only has high backup efficiency but also implements incremental processing, each backup task will be completed quickly. As taosdump always performs full backups, TDengine Enterprise can significantly reduce system overhead in scenarios with large data volumes and is more convenient.
### Back Up Data in TDengine Enterprise

**Common Error Troubleshooting**
1. In a web browser, open the taosExplorer interface for TDengine. This interface is located on port 6060 on the hostname or IP address running TDengine.
2. In the main menu on the left, click **Management** and open the **Backup** tab.
3. Under **Backup Plan**, click **Create New Backup** to define your backup plan.
   1. **Database:** Select the database that you want to back up.
   2. **Super Table:** (Optional) Select the supertable that you want to back up. If you do not select a supertable, all data in the database is backed up.
   3. **Next execution time:** Enter the date and time when you want to perform the initial backup for this backup plan. If you specify a date and time in the past, the initial backup is performed immediately.
   4. **Backup Cycle:** Specify how often you want to perform incremental backups. The value of this field must be less than the value of `WAL_RETENTION_PERIOD` for the specified database.
   5. **Retry times:** Enter how many times you want to retry a backup task that has failed, provided that the specific failure might be resolved by retrying.
   6. **Retry interval:** Enter the delay in seconds between retry attempts.
   7. **Directory:** Enter the full path of the directory in which you want to store backup files.
   8. **Backup file max size:** Enter the maximum size of a single backup file. If the total size of your backup exceeds this number, the backup is split into multiple files.
   9. **Compression level:** Select **fastest** for the fastest performance but lowest compression ratio, **best** for the highest compression ratio but slowest performance, or **balanced** for a combination of performance and compression.

1. If the task fails to start and reports the following error:
4. Click **Confirm** to create the backup plan.

You can view your backup plans and modify, clone, or delete them using the buttons in the **Operation** column. Click **Refresh** to update the status of your plans. Note that you must stop a backup plan before you can delete it. You can also click **View** in the **Backup File** column to view the backup record points and files created by each plan.
### Restore Data in TDengine Enterprise

1. Locate the backup plan containing the data that you want to restore and click **View** in the **Backup File** column.
2. Determine the backup record point to which you want to restore and click the Restore icon in the **Operation** column.
3. Select the backup file timestamp and target database and click **Confirm**.

## Troubleshooting

### Port Access Exception

A port access exception is indicated by the following error:

```text
Error: tmq to td task exec error

@ -54,9 +98,11 @@ Caused by:
[0x000B] Unable to establish connection
```

The cause is an abnormal connection to the data source port; check whether the data source FQDN is reachable and whether port 6030 is accessible.
If you encounter this error, check whether the data source FQDN is connected and whether port 6030 is listening and accessible.
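
A quick connectivity sketch from the host running the backup task (`<fqdn>` is a placeholder for the data source's FQDN):

```shell
# Check name resolution and basic reachability
ping -c 3 <fqdn>
# Check that the TDengine native port is open
nc -zv <fqdn> 6030
```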

2. If using a WebSocket connection, the task fails to start and reports the following error:
### Connection Issues

A connection issue is indicated by the task failing to start and reporting the following error:

```text
Error: tmq to td task exec error

@ -67,15 +113,16 @@ Caused by:
2: failed to lookup address information: Temporary failure in name resolution
```

When using a WebSocket connection, you may encounter various types of errors, which can be seen after "Caused by". Here are some possible errors:
The following are some possible errors for WebSocket connections:

- "Temporary failure in name resolution": DNS resolution error. Check whether the specified IP address or FQDN can be accessed normally.
- "IO error: Connection refused (os error 111)": Port access failed. Check whether the port is configured correctly and is enabled and accessible.
- "IO error: received corrupt message": Message parsing failed. This may be because SSL was enabled using the WSS method, but the source port does not support it.
- "HTTP error: *": Confirm that you are connecting to the correct taosAdapter port and that your LSB/Nginx/Proxy has been configured correctly.
- "WebSocket protocol error: Handshake not finished": WebSocket connection error. This is typically caused by an incorrectly configured port.

- "Temporary failure in name resolution": DNS resolution error; check if the IP or FQDN can be accessed normally.
- "IO error: Connection refused (os error 111)": Port access failure; check if the port is configured correctly and if it is open and accessible.
- "IO error: received corrupt message": Message parsing failed, possibly because SSL was enabled using wss, but the source port does not support it.
- "HTTP error: *": Possibly connected to the wrong taosAdapter port or incorrect LSB/Nginx/Proxy configuration.
- "WebSocket protocol error: Handshake not finished": WebSocket connection error, usually because the configured port is incorrect.
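
For HTTP-related errors, one quick check is to call the taosAdapter REST endpoint directly, mirroring the REST verification shown earlier (`<fqdn>` is a placeholder):

```shell
curl -u root:taosdata -d "show databases" http://<fqdn>:6041/rest/sql
```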

### WAL Configuration

3. If the task fails to start and reports the following error:
A WAL configuration issue is indicated by the task failing to start and reporting the following error:

```text
Error: tmq to td task exec error

@ -84,11 +131,8 @@ Caused by:
[0x038C] WAL retention period is zero
```

This is due to incorrect WAL configuration in the source database, preventing subscription.

Solution:
Modify the data WAL configuration:
To resolve this error, modify the WAL retention period for the affected database:

```sql
alter database test wal_retention_period 3600;
ALTER DATABASE test WAL_RETENTION_PERIOD 3600;
```
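
To verify the setting took effect, you can read it back from `information_schema`; a sketch assuming the database is named `test`:

```shell
# Read the WAL retention period back from information_schema
taos -s "select name, wal_retention_period from information_schema.ins_databases where name='test';"
```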

@ -26,6 +26,8 @@ Flink Connector supports all platforms that can run Flink 1.19 and above version

| Flink Connector Version | Major Changes | TDengine Version |
|-------------------------|---------------|------------------|
| 2.1.0 | Fix the issue of writing varchar types from different data sources. | - |
| 2.0.2 | The Table Sink supports types such as RowKind.UPDATE_BEFORE, RowKind.UPDATE_AFTER, and RowKind.DELETE. | - |
| 2.0.1 | Sink supports writing types from RowData implementations. | - |
| 2.0.0 | 1. Supports SQL queries on data in the TDengine database. <br/> 2. Supports CDC subscription to data in the TDengine database. <br/> 3. Supports reading and writing to the TDengine database using Table SQL. | 3.3.5.1 and higher |
| 1.0.0 | Supports the Sink function to write data from other sources to TDengine. | 3.3.2.0 and higher |

@ -85,7 +87,8 @@ TDengine currently supports timestamp, number, character, and boolean types, and

| SMALLINT | Short |
| TINYINT | Byte |
| BOOL | Boolean |
| BINARY | byte[] |
| VARCHAR | StringData |
| BINARY | StringData |
| NCHAR | StringData |
| JSON | StringData |
| VARBINARY | byte[] |
@ -115,7 +118,7 @@ If using Maven to manage a project, simply add the following dependencies in pom

<dependency>
    <groupId>com.taosdata.flink</groupId>
    <artifactId>flink-connector-tdengine</artifactId>
    <version>2.0.1</version>
    <version>2.1.0</version>
</dependency>
```
@ -7,36 +7,24 @@ Power BI is a business analytics tool provided by Microsoft. By configuring the

## Prerequisites

Install and run Power BI Desktop software (if not installed, please download the latest version for Windows OS 32/64-bit from its official address).

- TDengine 3.3.4.0 and above is installed and running normally (both Enterprise and Community editions are available).
- taosAdapter is running normally; refer to [taosAdapter Reference](../../../tdengine-reference/components/taosadapter/).
- Install and run Power BI Desktop software (if not installed, please download the latest version for Windows OS 32/64-bit from its official address).
- Download the latest Windows OS X64 client driver from the TDengine official website and install it on the machine running Power BI. After successful installation, the TDengine driver can be seen in the "ODBC Data Sources (32-bit)" or "ODBC Data Sources (64-bit)" management tool.

## Install ODBC Driver
## Configure Data Source

Download the latest Windows OS X64 client driver from the TDengine official website and install it on the machine running Power BI. After successful installation, the TDengine driver can be seen in the "ODBC Data Sources (32-bit)" or "ODBC Data Sources (64-bit)" management tool.
**Step 1**, Search for and open the [ODBC Data Source (64 bit)] management tool in the Start menu of the Windows operating system and configure it; refer to [Install ODBC Driver](../../../tdengine-reference/client-libraries/odbc/#Installation).

**Step 2**, Open Power BI and log in, click [Home] -> [Get Data] -> [Other] -> [ODBC] -> [Connect] to add the data source.

## Configure ODBC Data Source
**Step 3**, Select the data source name just created, such as [MyTDengine]. If you need to enter SQL, click the [Advanced options] tab and enter the SQL statement in the expanded dialog box. Click the [OK] button to connect to the configured data source.

The steps to configure the ODBC data source are as follows.
**Step 4**, Enter the [Navigator], where you can browse the corresponding database's tables/views and load data.

Step 1, search for and open the "ODBC Data Sources (32-bit)" or "ODBC Data Sources (64-bit)" management tool from the Windows Start menu.
Step 2, click the "User DSN" tab → "Add" button to enter the "Create New Data Source" dialog box.
Step 3, in the "Select the driver you want to install for this data source" list, choose "TDengine", click the "Finish" button, and enter the TDengine ODBC data source configuration page. Fill in the necessary information as follows.
## Data Analysis

- DSN: Data source name, required, such as "MyTDengine".
- Connection Type: Check the "WebSocket" checkbox.
- URL: ODBC data source URL, required, such as `http://127.0.0.1:6041`.
- Database: Indicates the database to connect to, optional, such as "test".
- Username: Enter the username; if not filled, the default is "root".
- Password: Enter the user password; if not filled, the default is "taosdata".

Step 4, click the "Test Connection" button to test the connection; if the connection succeeds, it will prompt "Successfully connected to `http://127.0.0.1:6041`".
Step 5, click the "OK" button to save the configuration and exit.

## Import TDengine Data into Power BI

The steps to import TDengine data into Power BI are as follows:
Step 1, open Power BI and log in, click "Home" → "Get Data" → "Other" → "ODBC" → "Connect" to add the data source.
Step 2, select the data source name just created, such as "MyTDengine". If you need to enter SQL, click the "Advanced options" tab and enter the SQL statement in the expanded dialog box. Click the "OK" button to connect to the configured data source.
Step 3, enter the "Navigator", where you can browse the corresponding database's tables/views and load data.
### Instructions for use

To fully leverage Power BI's advantages in analyzing data from TDengine, users need to first understand core concepts such as dimensions, metrics, window split queries, data split queries, time-series, and correlation, then import data through custom SQL.
@ -47,7 +35,7 @@ To fully leverage Power BI's advantages in analyzing data from TDengine, users n

- Time-Series: When drawing curves or aggregating data by time, it is usually necessary to introduce a date table. Date tables can be imported from Excel spreadsheets, or obtained in TDengine by executing SQL like `select _wstart date, count(*) cnt from test.meters where ts between A and B interval(1d) fill(0)`, where the fill clause represents the filling mode in case of missing data, and the pseudocolumn `_wstart` is the date column to be obtained.
- Correlation: Describes how data is related; for example, metrics and dimensions can be associated through the tbname column, and date tables and metrics can be associated through the date column; together they form visual reports.

## Smart Meter Example
### Smart Meter Example

TDengine employs a unique data model to optimize the storage and query performance of time-series data. This model uses supertables as templates to create an independent table for each device. Each table is designed with high scalability in mind, supporting up to 4096 data columns and 128 tag columns. This design enables TDengine to efficiently handle large volumes of time-series data while maintaining flexibility and ease of use.
@ -56,24 +44,35 @@ Taking smart meters as an example, suppose each meter generates one record per s

In Power BI, users can map the tag columns in TDengine tables to dimension columns for grouping and filtering data. Meanwhile, the aggregated results of the data columns can be imported as measure columns for calculating key indicators and generating reports. In this way, Power BI helps decision-makers quickly obtain the information they need, gain a deeper understanding of business operations, and make more informed decisions.

Follow the steps below to experience the functionality of generating time-series data reports through Power BI.
Step 1, Use TDengine's taosBenchmark to quickly generate data for 1,000 smart meters over 3 days, with a collection frequency of 1s.
```shell
taosBenchmark -t 1000 -n 259200 -S 1000 -y
```
Step 2, Import dimension data. In Power BI, import the tag columns of the table, named as tags, using the following SQL to get the tag data of all smart meters under the supertable.
```sql
select distinct tbname device, groupId, location from test.meters
```
Step 3, Import measure data. In Power BI, import the average current, voltage, and phase of each smart meter in 1-hour time windows, named as data, with the following SQL.
```sql
select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)
```
Step 4, Import date data. Using a 1-day time window, obtain the time range and data count of the time-series data, with the following SQL. In the Power Query editor, convert the format of the date column from "text" to "date".
```sql
select _wstart date, count(*) from test.meters interval(1d) having count(*)>0
```
Step 5, Establish the relationship between dimensions and measures. Open the model view and establish the relationship between the tags and data tables, setting tbname as the relationship data column.
Step 6, Establish the relationship between date and measures. Open the model view and establish the relationship between the date dataset and data, with the relationship data columns being date and datatime.
Step 7, Create reports. Use these data in bar charts, pie charts, and other controls.

**Step 1**, Use TDengine's taosBenchmark to quickly generate data for 1,000 smart meters over 3 days, with a collection frequency of 1s.

```shell
taosBenchmark -t 1000 -n 259200 -S 1000 -y
```

**Step 2**, Import dimension data. In Power BI, import the tag columns of the table, named as tags, using the following SQL to get the tag data of all smart meters under the supertable.

```sql
select distinct tbname device, groupId, location from test.meters
```

**Step 3**, Import measure data. In Power BI, import the average current, voltage, and phase of each smart meter in 1-hour time windows, named as data, with the following SQL.

```sql
select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)
```

**Step 4**, Import date data. Using a 1-day time window, obtain the time range and data count of the time-series data, with the following SQL. In the Power Query editor, convert the format of the date column from "text" to "date".

```sql
select _wstart date, count(*) from test.meters interval(1d) having count(*)>0
```

**Step 5**, Establish the relationship between dimensions and measures. Open the model view and establish the relationship between the tags and data tables, setting tbname as the relationship data column.

**Step 6**, Establish the relationship between date and measures. Open the model view and establish the relationship between the date dataset and data, with the relationship data columns being date and datatime.

**Step 7**, Create reports. Use these data in bar charts, pie charts, and other controls.

Due to TDengine's superior performance in handling time-series data, users can enjoy a very good experience during data import and daily regular data refreshes. For more information on building Power BI visual effects, please refer to the official Power BI documentation.
@ -11,43 +11,42 @@ import imgStep04 from '../../assets/seeq-04.png';

Seeq is advanced analytics software for the manufacturing and Industrial Internet of Things (IIoT). Seeq supports innovative new features using machine learning in process manufacturing organizations. These features enable organizations to deploy their own or third-party machine learning algorithms to advanced analytics applications used by frontline process engineers and subject matter experts, thus extending the efforts of a single data scientist to many frontline staff.

Through the TDengine Java connector, Seeq can easily support querying time-series data provided by TDengine and offer data presentation, analysis, prediction, and other functions.
Through the `TDengine Java connector`, Seeq can easily support querying time-series data provided by TDengine and offer data presentation, analysis, prediction, and other functions.

## Prerequisites

- Seeq has been installed. Download the relevant software from [Seeq's official website](https://www.seeq.com/customer-download), such as Seeq Server and Seeq Data Lab, etc. Seeq Data Lab needs to be installed on a different server from Seeq Server and interconnected through configuration. For detailed installation and configuration instructions, refer to the [Seeq Knowledge Base](https://support.seeq.com/kb/latest/cloud/).
- TDengine 3.1.0.3 and above is installed and running normally (both Enterprise and Community editions are available).
- taosAdapter is running normally; refer to [taosAdapter Reference](../../../tdengine-reference/components/taosadapter/).
- Seeq has been installed. Download the relevant software from [Seeq's official website](https://www.seeq.com/customer-download), such as `Seeq Server` and `Seeq Data Lab`, etc. `Seeq Data Lab` needs to be installed on a different server from `Seeq Server` and interconnected through configuration. For detailed installation and configuration instructions, refer to the [Seeq Knowledge Base](https://support.seeq.com/kb/latest/cloud/).
- Install the JDBC driver. Download the `TDengine JDBC connector` file `taos-jdbcdriver-3.2.5-dist.jar` or a higher version from `maven.org`.

- A local TDengine instance has been installed. Please refer to the [official documentation](../../../get-started). If using TDengine Cloud, please go to https://cloud.taosdata.com to apply for an account and log in to see how to access TDengine Cloud.
## Configure Data Source

## Configuring Seeq to Access TDengine
### Configuration of JDBC Connector

1. Check the data storage location
**Step 1**, Check the data storage location:

```shell
sudo seeq config get Folders/Data
```

2. Download the TDengine Java connector package from maven.org, the latest version is [3.2.5](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/3.2.5/taos-jdbcdriver-3.2.5-dist.jar), and copy it to plugins\lib in the data storage location.
**Step 2**, Download the TDengine Java connector package from `maven.org` and copy it to the `plugins\lib` directory in the data storage location.
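
A sketch of the copy step, assuming the previous command printed `/var/seeq/data` as the data folder (your path will differ):

```shell
sudo cp taos-jdbcdriver-3.2.5-dist.jar /var/seeq/data/plugins/lib/
```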

3. Restart seeq server
**Step 3**, Restart the Seeq server:

```shell
sudo seeq restart
```

4. Enter License
**Step 4**, Enter License

Use a browser to visit `ip:34216` and follow the instructions to enter the license.
## Using Seeq to Analyze TDengine Time-Series Data
## Load TDengine Time-Series Data

This section demonstrates how to use Seeq software in conjunction with TDengine for time-series data analysis.
This chapter demonstrates how to use the Seeq software to load TDengine time-series data.

### Scenario Introduction

The example scenario is a power system where users collect electricity usage data from power station instruments daily and store it in the TDengine cluster. Now, users want to predict how power consumption will develop and purchase more equipment to support it. User power consumption varies with monthly orders, and considering seasonal changes, power consumption will differ. This city is located in the northern hemisphere, so more electricity is used in summer. We simulate data to reflect these assumptions.

### Data Schema
**Step 1**, Create tables in TDengine.

```sql
CREATE STABLE meters (ts TIMESTAMP, num INT, temperature FLOAT, goods INT) TAGS (device NCHAR(20));
@ -58,7 +57,7 @@ CREATE TABLE goods (ts1 TIMESTAMP, ts2 TIMESTAMP, goods FLOAT);

<Image img={imgStep01} alt=""/>
</figure>

### Data Construction Method
**Step 2**, Construct data in TDengine.

```shell
python mockdata.py
@ -67,11 +66,7 @@ taos -s "insert into power.goods select _wstart, _wstart + 10d, avg(goods) from

The source code is hosted on a [GitHub repository](https://github.com/sangshuduo/td-forecasting).

## Using Seeq for Data Analysis

### Configuring Data Source

Log in using a Seeq administrator role account and create a new data source.
**Step 3**, Log in using a Seeq administrator role account and create a new data source.

- Power
@ -251,6 +246,12 @@ Log in using a Seeq administrator role account and create a new data source.
}
```

## Data Analysis

### Scenario Introduction

The example scenario is a power system where users collect electricity usage data from power station instruments daily and store it in the TDengine cluster. Now, users want to predict how power consumption will develop and purchase more equipment to support it. User power consumption varies with monthly orders, and considering seasonal changes, power consumption will differ. This city is located in the northern hemisphere, so more electricity is used in summer. We simulate data to reflect these assumptions.

### Using Seeq Workbench

Log in to the Seeq service page and create a new Seeq Workbench. By selecting data sources from search results and choosing different tools as needed, you can display data or make predictions. For detailed usage methods, refer to the [official knowledge base](https://support.seeq.com/space/KB/146440193/Seeq+Workbench).
@ -330,77 +331,7 @@ Program output results:
<Image img={imgStep03} alt=""/>
</figure>

## Configuring Seeq Data Source Connection to TDengine Cloud
Configuring a Seeq data source connection to TDengine Cloud is essentially no different from connecting to a local TDengine installation. Simply log in to TDengine Cloud, select "Programming - Java", and copy the JDBC string with a token to fill in as the DatabaseJdbcUrl value for the Seeq Data Source.
Note that when using TDengine Cloud, the database name needs to be specified in SQL commands.

### Configuration example using TDengine Cloud as a data source

```json
{
  "QueryDefinitions": [
    {
      "Name": "CloudVoltage",
      "Type": "SIGNAL",
      "Sql": "SELECT ts, voltage FROM test.meters",
      "Enabled": true,
      "TestMode": false,
      "TestQueriesDuringSync": true,
      "InProgressCapsulesEnabled": false,
      "Variables": null,
      "Properties": [
        {
          "Name": "Name",
          "Value": "Voltage",
          "Sql": null,
          "Uom": "string"
        },
        {
          "Name": "Interpolation Method",
          "Value": "linear",
          "Sql": null,
          "Uom": "string"
        },
        {
          "Name": "Maximum Interpolation",
          "Value": "2day",
          "Sql": null,
          "Uom": "string"
        }
      ],
      "CapsuleProperties": null
    }
  ],
  "Type": "GENERIC",
  "Hostname": null,
  "Port": 0,
  "DatabaseName": null,
  "Username": "root",
  "Password": "taosdata",
  "InitialSql": null,
  "TimeZone": null,
  "PrintRows": false,
  "UseWindowsAuth": false,
  "SqlFetchBatchSize": 100000,
  "UseSSL": false,
  "JdbcProperties": null,
  "GenericDatabaseConfig": {
    "DatabaseJdbcUrl": "jdbc:TAOS-RS://gw.cloud.tdengine.com?useSSL=true&token=41ac9d61d641b6b334e8b76f45f5a8XXXXXXXXXX",
    "SqlDriverClassName": "com.taosdata.jdbc.rs.RestfulDriver",
    "ResolutionInNanoseconds": 1000,
    "ZonedColumnTypes": []
  }
}
```
|
||||
|
||||
### Example of Seeq Workbench Interface with TDengine Cloud as Data Source
|
||||
|
||||
<figure>
|
||||
<Image img={imgStep04} alt=""/>
|
||||
</figure>
|
||||
|
||||
## Solution Summary
|
||||
### Solution Summary
|
||||
|
||||
By integrating Seeq and TDengine, users can fully leverage the efficient storage and querying capabilities of TDengine, while also benefiting from the powerful data visualization and analysis features provided by Seeq.
|
||||
|
||||
|
|
|
@ -0,0 +1,41 @@
|
|||
---
|
||||
sidebar_label: Tableau
|
||||
title: Integration With Tableau
|
||||
toc_max_heading_level: 4
|
||||
---
|
||||
|
||||
Tableau is a well-known business intelligence tool that supports multiple data sources, making it easy to connect, import, and integrate data. Through an intuitive user interface, users can create rich and diverse visual charts with powerful analysis and filtering functions, providing strong support for data-driven decision-making. Users can import tag data, raw time-series data, or time-aggregated time-series data from TDengine into Tableau via the TDengine ODBC Connector to create reports or dashboards, with no code writing required throughout the entire process.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Prepare the following environment:
|
||||
|
||||
- TDengine 3.3.5.4 or above is installed and running normally (both Enterprise and Community editions are available)
|
||||
- taosAdapter is running normally, refer to [taosAdapter Reference](../../../tdengine-reference/components/taosadapter/)
|
||||
- Install and run Tableau Desktop (if not installed, download and install the 64-bit Windows version from [Download Tableau Desktop](https://www.tableau.com/products/desktop/download)). For installation instructions, refer to [Tableau Desktop](https://www.tableau.com).
|
||||
- Download and install the latest 64-bit Windows client driver from the TDengine official website; refer to [Install ODBC Driver](../../../tdengine-reference/client-libraries/odbc/#Installation).
|
||||
|
||||
## Configure Data Source
|
||||
|
||||
**Step 1**, Search and open the "ODBC Data Source (64 bit)" management tool in the Start menu of the Windows operating system and configure it, refer to [Install ODBC Driver](../../../tdengine-reference/client-libraries/odbc/#Installation).
|
||||
|
||||
**Step 2**, Start Tableau in the Windows system environment, then search for "ODBC" on its connection page and select "Other Databases (ODBC)".
|
||||
|
||||
**Step 3**, Click the `DSN` radio button, then select the configured data source (MyTDengine), and click the `Connect` button. After the connection succeeds, clear the content of the additional connection string field, and finally click the `Sign In` button.
|
||||
|
||||

|
||||
|
||||
## Data Analysis
|
||||
|
||||
**Step 1**, On the workbook page, the connected data sources are displayed. Clicking the database dropdown list shows the databases available for analysis. On this basis, click the search button in the table options to list all tables in the database, then drag the table to be analyzed into the area on the right to display its structure.
|
||||
|
||||

|
||||
|
||||
**Step 2**, Click the `Update Now` button below to display the data in the table.
|
||||
|
||||

|
||||
|
||||
**Step 3**, Click on the "Worksheet" at the bottom of the window to pop up the data analysis window, which displays all the fields of the analysis table. Drag the fields to the rows and columns to display the chart.
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,42 @@
|
|||
---
|
||||
sidebar_label: Excel
|
||||
title: Integration With Excel
|
||||
toc_max_heading_level: 4
|
||||
---
|
||||
|
||||
By configuring the use of the ODBC connector, Excel can quickly access data from TDengine. Users can import tag data, raw time-series data, or time-aggregated time series data from TDengine into Excel to create reports or dashboards, all without the need for any coding.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Prepare the following environment:
|
||||
|
||||
- TDengine 3.3.5.7 or above is installed and running normally (both Enterprise and Community editions are available).
|
||||
- taosAdapter is running normally, refer to [taosAdapter Reference](../../../tdengine-reference/components/taosadapter/).
|
||||
- Install and run Excel. If not installed, please download and install it. For specific instructions, please refer to Microsoft's official documentation.
|
||||
- Download and install the latest 64-bit Windows client driver from the TDengine official website; refer to [Install ODBC Driver](../../../tdengine-reference/client-libraries/odbc/#Installation).
|
||||
|
||||
## Configure Data Source
|
||||
|
||||
**Step 1**, Search and open the [ODBC Data Source (64 bit)] management tool in the Start menu of the Windows operating system and configure it, refer to [Install ODBC Driver](../../../tdengine-reference/client-libraries/odbc/#Installation).
|
||||
|
||||
**Step 2**, Start Excel in the Windows system environment, then select [Data] -> [Get Data] -> [From Other Sources] -> [From ODBC].
|
||||
|
||||

|
||||
|
||||
**Step 3**, In the pop-up window, select the data source you need to connect to from the drop-down list of [Data source name (DSN)], and then click the [OK] button.
|
||||
|
||||

|
||||
|
||||
**Step 4**, Enter the username and password for TDengine.
|
||||
|
||||

|
||||
|
||||
**Step 5**, In the pop-up [Navigator] dialog box, select the database tables you want to load, and then click [Load] to complete the data loading.
|
||||
|
||||

|
||||
|
||||
## Data Analysis
|
||||
|
||||
Select the imported data. On the [Insert] tab, choose the column chart, and then configure the data fields in the [PivotChart Fields] pane on the right.
|
||||
|
||||

|
After Width: | Height: | Size: 300 KiB |
After Width: | Height: | Size: 761 KiB |
After Width: | Height: | Size: 1.3 MiB |
After Width: | Height: | Size: 659 KiB |
After Width: | Height: | Size: 505 KiB |
After Width: | Height: | Size: 243 KiB |
After Width: | Height: | Size: 255 KiB |
After Width: | Height: | Size: 226 KiB |
After Width: | Height: | Size: 107 KiB |
|
@ -243,6 +243,8 @@ The effective value of charset is UTF-8.
|
|||
| concurrentCheckpoint | |Supported, effective immediately | Internal parameter, whether to check checkpoints concurrently |
|
||||
| maxStreamBackendCache | |Supported, effective immediately | Internal parameter, maximum cache used by stream computing |
|
||||
| streamSinkDataRate | |Supported, effective after restart| Internal parameter, used to control the write speed of stream computing results |
|
||||
| streamNotifyMessageSize | After 3.3.6.0 | Not supported | Internal parameter, controls the message size for event notifications, default value is 8192 |
|
||||
| streamNotifyFrameSize | After 3.3.6.0 | Not supported | Internal parameter, controls the underlying frame size when sending event notification messages, default value is 256 |
|
||||
|
||||
### Log Related
|
||||
|
||||
|
|
|
@ -13,45 +13,108 @@ import Icinga2 from "../../assets/resources/_icinga2.mdx"
|
|||
import TCollector from "../../assets/resources/_tcollector.mdx"
|
||||
|
||||
taosAdapter is a companion tool for TDengine, serving as a bridge and adapter between the TDengine cluster and applications. It provides an easy and efficient way to ingest data directly from data collection agents (such as Telegraf, StatsD, collectd, etc.). It also offers InfluxDB/OpenTSDB compatible data ingestion interfaces, allowing InfluxDB/OpenTSDB applications to be seamlessly ported to TDengine.
|
||||
TDengine's connectors in various languages communicate with TDengine through the WebSocket interface, so taosAdapter must be installed.
|
||||
|
||||
taosAdapter offers the following features:
|
||||
|
||||
- RESTful interface
|
||||
- Compatible with InfluxDB v1 write interface
|
||||
- Compatible with OpenTSDB JSON and telnet format writing
|
||||
- Seamless connection to Telegraf
|
||||
- Seamless connection to collectd
|
||||
- Seamless connection to StatsD
|
||||
- Supports Prometheus remote_read and remote_write
|
||||
- Retrieves the VGroup ID of the virtual node group (VGroup) where the table is located
|
||||
|
||||
## taosAdapter Architecture Diagram
|
||||
The architecture diagram is as follows:
|
||||
|
||||
<figure>
|
||||
<Image img={imgAdapter} alt="taosAdapter architecture"/>
|
||||
<figcaption>Figure 1. taosAdapter architecture</figcaption>
|
||||
</figure>
|
||||
|
||||
## Deployment Methods for taosAdapter
|
||||
## Feature List
|
||||
|
||||
### Installing taosAdapter
|
||||
The taosAdapter provides the following features:
|
||||
|
||||
- WebSocket Interface:
|
||||
Supports executing SQL, schemaless writing, parameter binding, and data subscription through the WebSocket protocol.
|
||||
- Compatible with InfluxDB v1 write interface:
|
||||
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
|
||||
- Compatible with OpenTSDB JSON and telnet format writing:
|
||||
- [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
|
||||
- [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
|
||||
- collectd data writing:
|
||||
collectd is a system statistics collection daemon, visit [https://collectd.org/](https://collectd.org/) for more information.
|
||||
- StatsD data writing:
|
||||
StatsD is a simple yet powerful daemon for gathering statistics. Visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
|
||||
- icinga2 OpenTSDB writer data writing:
|
||||
icinga2 is a software for collecting check results metrics and performance data. Visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
|
||||
- TCollector data writing:
|
||||
TCollector is a client process that collects data from local collectors and pushes it to OpenTSDB. Visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
|
||||
- node_exporter data collection and writing:
|
||||
node_exporter is an exporter of machine metrics. Visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
|
||||
- Supports Prometheus remote_read and remote_write:
|
||||
remote_read and remote_write are Prometheus's data read-write separation cluster solutions. Visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
|
||||
- RESTful API:
|
||||
[RESTful API](../../client-libraries/rest-api/)
|
||||
|
||||
### WebSocket Interface
|
||||
|
||||
Through the WebSocket interface of taosAdapter, connectors in various languages can achieve SQL execution, schemaless writing, parameter binding, and data subscription functionalities. Refer to the [Development Guide](../../../developer-guide/connecting-to-tdengine/#websocket-connection) for more details.
|
||||
|
||||
### Compatible with InfluxDB v1 write interface
|
||||
|
||||
You can use any client that supports the HTTP protocol to write data in InfluxDB compatible format to TDengine by accessing the Restful interface URL `http://<fqdn>:6041/influxdb/v1/write`.
|
||||
|
||||
Supported InfluxDB parameters are as follows:
|
||||
|
||||
- `db` specifies the database name used by TDengine
|
||||
- `precision` the time precision used by TDengine
|
||||
- `u` TDengine username
|
||||
- `p` TDengine password
|
||||
- `ttl` the lifespan of automatically created subtables, determined by the TTL parameter of the first data entry in the subtable, which cannot be updated. For more information, please refer to the TTL parameter in the [table creation document](../../sql-manual/manage-tables/).
|
||||
|
||||
Note: Currently, InfluxDB's token authentication method is not supported; only Basic authentication and query parameter verification are supported.
|
||||
Example: `curl --request POST http://127.0.0.1:6041/influxdb/v1/write?db=test --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"`
|
||||
|
||||
### Compatible with OpenTSDB JSON and telnet format writing
|
||||
|
||||
You can use any client that supports the HTTP protocol to write data in OpenTSDB compatible format to TDengine by accessing the Restful interface URL `http://<fqdn>:6041/<APIEndPoint>`. EndPoint as follows:
|
||||
|
||||
```text
|
||||
/opentsdb/v1/put/json/<db>
|
||||
/opentsdb/v1/put/telnet/<db>
|
||||
```
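
For example, a minimal sketch of writing one data point in OpenTSDB JSON format with curl. This assumes a database named `test` already exists and that the endpoint accepts the same Basic authentication as the InfluxDB interface above; the metric, timestamp, and tags are illustrative:

```shell
# Hypothetical example: write one OpenTSDB-JSON point into database "test".
curl --request POST http://127.0.0.1:6041/opentsdb/v1/put/json/test \
  --user "root:taosdata" \
  --data '{"metric":"sys.cpu.usage","timestamp":1577836800000,"value":18.5,"tags":{"host":"web01"}}'
```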
|
||||
|
||||
### collectd data writing
|
||||
|
||||
<CollectD />
|
||||
|
||||
### StatsD data writing
|
||||
|
||||
<StatsD />
|
||||
|
||||
### icinga2 OpenTSDB writer data writing
|
||||
|
||||
<Icinga2 />
|
||||
|
||||
### TCollector data writing
|
||||
|
||||
<TCollector />
|
||||
|
||||
### node_exporter data collection and writing
|
||||
|
||||
An exporter used by Prometheus that exposes hardware and operating system metrics from \*NIX kernels
|
||||
|
||||
- Enable configuration of taosAdapter node_exporter.enable
|
||||
- Set the relevant configuration for node_exporter
|
||||
- Restart taosAdapter
|
||||
|
||||
### Supports Prometheus remote_read and remote_write
|
||||
|
||||
<Prometheus />
|
||||
|
||||
### RESTful API
|
||||
|
||||
You can use any client that supports the HTTP protocol to write data to TDengine or query data from TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/rest/sql`. For details, please refer to the [REST API documentation](../../client-libraries/rest-api/).
|
||||
|
||||
## Installation
|
||||
|
||||
taosAdapter is part of the TDengine server software. If you are using the TDengine server, no additional steps are needed to install taosAdapter. If you need to deploy taosAdapter separately from the TDengine server, install the complete TDengine package on that server, which includes taosAdapter. If you need to compile taosAdapter from source code, refer to the [Build taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) document.
|
||||
|
||||
### Starting/Stopping taosAdapter
|
||||
After the installation is complete, you can start the taosAdapter service using the command `systemctl start taosadapter`.
|
||||
|
||||
On Linux systems, the taosAdapter service is managed by default by systemd. Use the command `systemctl start taosadapter` to start the taosAdapter service. Use the command `systemctl stop taosadapter` to stop the taosAdapter service.
|
||||
|
||||
### Removing taosAdapter
|
||||
|
||||
Use the command `rmtaos` to remove the TDengine server software, including taosAdapter.
|
||||
|
||||
### Upgrading taosAdapter
|
||||
|
||||
taosAdapter and TDengine server need to use the same version. Please upgrade taosAdapter by upgrading the TDengine server.
|
||||
taosAdapter deployed separately from taosd must be upgraded by upgrading the TDengine server on its server.
|
||||
|
||||
## taosAdapter Parameter List
|
||||
## Configuration
|
||||
|
||||
taosAdapter supports configuration through command-line parameters, environment variables, and configuration files. The default configuration file is `/etc/taos/taosadapter.toml`.
|
||||
|
||||
|
@ -80,6 +143,7 @@ Usage of taosAdapter:
|
|||
--instanceId int instance ID. Env "TAOS_ADAPTER_INSTANCE_ID" (default 32)
|
||||
--log.compress whether to compress old log. Env "TAOS_ADAPTER_LOG_COMPRESS"
|
||||
--log.enableRecordHttpSql whether to record http sql. Env "TAOS_ADAPTER_LOG_ENABLE_RECORD_HTTP_SQL"
|
||||
--log.keepDays uint log retention days, must be a positive integer. Env "TAOS_ADAPTER_LOG_KEEP_DAYS" (default 30)
|
||||
--log.level string log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
|
||||
--log.path string log path. Env "TAOS_ADAPTER_LOG_PATH" (default "/var/log/taos")
|
||||
--log.reservedDiskSize string reserved disk size for log dir (KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_RESERVED_DISK_SIZE" (default "1GB")
|
||||
|
@ -90,6 +154,8 @@ Usage of taosAdapter:
|
|||
--log.sqlRotationSize string record sql log rotation size(KB MB GB), must be a positive integer. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_SIZE" (default "1GB")
|
||||
--log.sqlRotationTime duration record sql log rotation time. Env "TAOS_ADAPTER_LOG_SQL_ROTATION_TIME" (default 24h0m0s)
|
||||
--logLevel string log level (trace debug info warning error). Env "TAOS_ADAPTER_LOG_LEVEL" (default "info")
|
||||
--maxAsyncConcurrentLimit int The maximum number of concurrent calls allowed for the C asynchronous method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_ASYNC_CONCURRENT_LIMIT"
|
||||
--maxSyncConcurrentLimit int The maximum number of concurrent calls allowed for the C synchronized method. 0 means use CPU core count. Env "TAOS_ADAPTER_MAX_SYNC_CONCURRENT_LIMIT"
|
||||
--monitor.collectDuration duration Set monitor duration. Env "TAOS_ADAPTER_MONITOR_COLLECT_DURATION" (default 3s)
|
||||
--monitor.disable Whether to disable monitoring. Env "TAOS_ADAPTER_MONITOR_DISABLE" (default true)
|
||||
--monitor.identity string The identity of the current instance, or 'hostname:port' if it is empty. Env "TAOS_ADAPTER_MONITOR_IDENTITY"
|
||||
|
@ -118,7 +184,7 @@ Usage of taosAdapter:
|
|||
--opentsdb_telnet.flushInterval duration opentsdb_telnet flush interval (0s means not valid) . Env "TAOS_ADAPTER_OPENTSDB_TELNET_FLUSH_INTERVAL"
|
||||
--opentsdb_telnet.maxTCPConnections int max tcp connections. Env "TAOS_ADAPTER_OPENTSDB_TELNET_MAX_TCP_CONNECTIONS" (default 250)
|
||||
--opentsdb_telnet.password string opentsdb_telnet password. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PASSWORD" (default "taosdata")
|
||||
--opentsdb_telnet.ports ints opentsdb_telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
|
||||
--opentsdb_telnet.ports ints opentsdb telnet tcp port. Env "TAOS_ADAPTER_OPENTSDB_TELNET_PORTS" (default [6046,6047,6048,6049])
|
||||
--opentsdb_telnet.tcpKeepAlive enable tcp keep alive. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TCP_KEEP_ALIVE"
|
||||
--opentsdb_telnet.ttl int opentsdb_telnet data ttl. Env "TAOS_ADAPTER_OPENTSDB_TELNET_TTL"
|
||||
--opentsdb_telnet.user string opentsdb_telnet user. Env "TAOS_ADAPTER_OPENTSDB_TELNET_USER" (default "root")
|
||||
|
@ -131,6 +197,9 @@ Usage of taosAdapter:
|
|||
--prometheus.enable enable prometheus. Env "TAOS_ADAPTER_PROMETHEUS_ENABLE" (default true)
|
||||
--restfulRowLimit int restful returns the maximum number of rows (-1 means no limit). Env "TAOS_ADAPTER_RESTFUL_ROW_LIMIT" (default -1)
|
||||
--smlAutoCreateDB Whether to automatically create db when writing with schemaless. Env "TAOS_ADAPTER_SML_AUTO_CREATE_DB"
|
||||
--ssl.certFile string ssl cert file path. Env "TAOS_ADAPTER_SSL_CERT_FILE"
|
||||
--ssl.enable enable ssl. Env "TAOS_ADAPTER_SSL_ENABLE"
|
||||
--ssl.keyFile string ssl key file path. Env "TAOS_ADAPTER_SSL_KEY_FILE"
|
||||
--statsd.allowPendingMessages int statsd allow pending messages. Env "TAOS_ADAPTER_STATSD_ALLOW_PENDING_MESSAGES" (default 50000)
|
||||
--statsd.db string statsd db name. Env "TAOS_ADAPTER_STATSD_DB" (default "statsd")
|
||||
--statsd.deleteCounters statsd delete counter cache after gather. Env "TAOS_ADAPTER_STATSD_DELETE_COUNTERS" (default true)
|
||||
|
@ -157,27 +226,44 @@ Usage of taosAdapter:
|
|||
-V, --version Print the version and exit
|
||||
```
|
||||
|
||||
Note:
|
||||
When using a browser to make API calls, please set the following Cross-Origin Resource Sharing (CORS) parameters according to the actual situation:
|
||||
See the example configuration file at [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml).
|
||||
|
||||
```text
|
||||
AllowAllOrigins
|
||||
AllowOrigins
|
||||
AllowHeaders
|
||||
ExposeHeaders
|
||||
AllowCredentials
|
||||
AllowWebSockets
|
||||
```
|
||||
### Cross-Origin Configuration
|
||||
|
||||
When making API calls from the browser, please configure the following Cross-Origin Resource Sharing (CORS) parameters based on your actual situation:
|
||||
|
||||
- **`cors.allowAllOrigins`**: Whether to allow all origins to access, default is true.
|
||||
- **`cors.allowOrigins`**: A comma-separated list of origins allowed to access. Multiple origins can be specified.
|
||||
- **`cors.allowHeaders`**: A comma-separated list of request headers allowed for cross-origin access. Multiple headers can be specified.
|
||||
- **`cors.exposeHeaders`**: A comma-separated list of response headers exposed for cross-origin access. Multiple headers can be specified.
|
||||
- **`cors.allowCredentials`**: Whether to allow cross-origin requests to include user credentials, such as cookies, HTTP authentication information, or client SSL certificates.
|
||||
- **`cors.allowWebSockets`**: Whether to allow WebSockets connections.
|
||||
|
||||
If you are not making API calls through a browser, you do not need to worry about these configurations.
|
||||
|
||||
The above configurations take effect for the following interfaces:
|
||||
|
||||
* RESTful API requests
|
||||
* WebSocket API requests
|
||||
* InfluxDB v1 write interface
|
||||
* OpenTSDB HTTP write interface
|
||||
|
||||
For details about the CORS protocol, please refer to: [https://www.w3.org/wiki/CORS_Enabled](https://www.w3.org/wiki/CORS_Enabled) or [https://developer.mozilla.org/docs/Web/HTTP/CORS](https://developer.mozilla.org/docs/Web/HTTP/CORS).
|
||||
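To verify the effective CORS settings, you can send a standard browser-style preflight request with curl and inspect the `Access-Control-*` response headers; the origin and method below are illustrative:

```shell
# Simulate a browser preflight against the RESTful endpoint; the returned
# Access-Control-* headers reflect the cors.* settings described above.
curl -i -X OPTIONS http://127.0.0.1:6041/rest/sql \
  -H "Origin: http://localhost:3000" \
  -H "Access-Control-Request-Method: POST"
```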
|
||||
See the example configuration file at [example/config/taosadapter.toml](https://github.com/taosdata/taosadapter/blob/3.0/example/config/taosadapter.toml).
|
||||
### Connection Pool Configuration
|
||||
|
||||
### Connection Pool Parameters Description
|
||||
taosAdapter uses a connection pool to manage connections to TDengine, improving concurrency performance and resource utilization. The connection pool configuration applies to the following interfaces, and these interfaces share a single connection pool:
|
||||
|
||||
When using the RESTful API, the system will manage TDengine connections through a connection pool. The connection pool can be configured with the following parameters:
|
||||
* RESTful API requests
|
||||
* InfluxDB v1 write interface
|
||||
* OpenTSDB JSON and telnet format writing
|
||||
* Telegraf data writing
|
||||
* collectd data writing
|
||||
* StatsD data writing
|
||||
* node_exporter data collection writing
|
||||
* Prometheus remote_read and remote_write
|
||||
|
||||
The configuration parameters for the connection pool are as follows:
|
||||
|
||||
- **`pool.maxConnect`**: The maximum number of connections allowed in the pool, default is twice the number of CPU cores. It is recommended to keep the default setting.
|
||||
- **`pool.maxIdle`**: The maximum number of idle connections in the pool, default is the same as `pool.maxConnect`. It is recommended to keep the default setting.
|
||||
|
@ -185,153 +271,136 @@ When using the RESTful API, the system will manage TDengine connections through
|
|||
- **`pool.waitTimeout`**: Timeout for obtaining a connection from the pool, default is set to 60 seconds. If a connection is not obtained within the timeout period, HTTP status code 503 will be returned. This parameter is available starting from version 3.3.3.0.
|
||||
- **`pool.maxWait`**: The maximum number of requests waiting to get a connection in the pool, default is 0, which means no limit. When the number of queued requests exceeds this value, new requests will return HTTP status code 503. This parameter is available starting from version 3.3.3.0.
|
||||
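As a sketch, the pool can be tuned at startup via command-line flags or environment variables. The names below are assumptions based on the naming convention of the other options in the usage listing (e.g. `TAOS_ADAPTER_POOL_MAX_CONNECT`); verify them against `taosadapter --help`:

```shell
# Assumed flag names -- confirm with `taosadapter --help` before relying on them.
taosadapter --pool.maxConnect 32 --pool.maxIdle 32 --pool.waitTimeout 60
# Equivalent form using environment variables:
TAOS_ADAPTER_POOL_MAX_CONNECT=32 TAOS_ADAPTER_POOL_MAX_IDLE=32 taosadapter
```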
|
||||
## Feature List
|
||||
### HTTP Response Code Configuration
|
||||
|
||||
- RESTful API
|
||||
[RESTful API](../../client-libraries/rest-api/)
|
||||
- Compatible with InfluxDB v1 write interface
|
||||
[https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/](https://docs.influxdata.com/influxdb/v2.0/reference/api/influxdb-1x/write/)
|
||||
- Compatible with OpenTSDB JSON and telnet format writing
|
||||
- [http://opentsdb.net/docs/build/html/api_http/put.html](http://opentsdb.net/docs/build/html/api_http/put.html)
|
||||
- [http://opentsdb.net/docs/build/html/api_telnet/put.html](http://opentsdb.net/docs/build/html/api_telnet/put.html)
|
||||
- Seamless connection with collectd.
|
||||
collectd is a system statistics collection daemon, visit [https://collectd.org/](https://collectd.org/) for more information.
|
||||
- Seamless connection with StatsD.
|
||||
StatsD is a simple yet powerful daemon for gathering statistics. Visit [https://github.com/statsd/statsd](https://github.com/statsd/statsd) for more information.
|
||||
- Seamless connection with icinga2.
|
||||
icinga2 is a software for collecting check results metrics and performance data. Visit [https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer](https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer) for more information.
|
||||
- Seamless connection with tcollector.
|
||||
TCollector is a client process that collects data from local collectors and pushes it to OpenTSDB. Visit [http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html](http://opentsdb.net/docs/build/html/user_guide/utilities/tcollector.html) for more information.
|
||||
- Seamless connection with node_exporter.
|
||||
node_exporter is an exporter of machine metrics. Visit [https://github.com/prometheus/node_exporter](https://github.com/prometheus/node_exporter) for more information.
|
||||
- Supports Prometheus remote_read and remote_write.
|
||||
remote_read and remote_write are Prometheus's data read-write separation cluster solutions. Visit [https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis](https://prometheus.io/blog/2019/10/10/remote-read-meets-streaming/#remote-apis) for more information.
|
||||
- Get the VGroup ID of the virtual node group (VGroup) where the table is located.
|
||||
taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, it will return different HTTP status codes based on the error code returned by C. See [HTTP Response Codes](../../client-libraries/rest-api/) for details.
|
||||
|
||||
## Interface
|
||||
This configuration only affects the **RESTful interface**.
|
||||
|
||||
### TDengine RESTful Interface
|
||||
**Parameter Description**
|
||||
|
||||
You can use any client that supports the HTTP protocol to write data to TDengine or query data from TDengine by accessing the RESTful interface URL `http://<fqdn>:6041/rest/sql`. For details, please refer to the [REST API documentation](../../client-libraries/rest-api/).
|
||||
- **`httpCodeServerError`**:
|
||||
- **When set to `true`**: Map the error code returned by the C interface to the corresponding HTTP status code.
|
||||
- **When set to `false`**: Regardless of the error returned by the C interface, always return the HTTP status code `200` (default value).
|
||||
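A minimal sketch of enabling this mapping at startup; the flag and environment variable names are assumptions following the convention of the other parameters in the usage listing:

```shell
# Assumed names -- confirm with `taosadapter --help`.
taosadapter --httpCodeServerError
# Or via environment variable:
TAOS_ADAPTER_HTTP_CODE_SERVER_ERROR=true taosadapter
```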
|
||||
### InfluxDB
|
||||
### Memory limit configuration
|
||||
|
||||
You can use any client that supports the HTTP protocol to write data in InfluxDB compatible format to TDengine by accessing the Restful interface URL `http://<fqdn>:6041/influxdb/v1/write`.
|
||||
taosAdapter monitors its memory usage during operation and adjusts its behavior based on two thresholds. The valid value range is an integer from 1 to 100, expressed as a percentage of system physical memory.
|
||||
|
||||
Supported InfluxDB parameters are as follows:
|
||||
This configuration only affects the following interfaces:
|
||||
|
||||
- `db` specifies the database name used by TDengine
|
||||
- `precision` the time precision used by TDengine
|
||||
- `u` TDengine username
|
||||
- `p` TDengine password
|
||||
- `ttl` the lifespan of automatically created subtables, determined by the TTL parameter of the first data entry in the subtable, which cannot be updated. For more information, please refer to the TTL parameter in the [table creation document](../../sql-manual/manage-tables/).
|
||||
* RESTful interface request
|
||||
* InfluxDB v1 write interface
|
||||
* OpenTSDB HTTP write interface
|
||||
* Prometheus remote_read and remote_write interfaces
|
||||
|
||||
Note: Currently, InfluxDB's token authentication method is not supported, only Basic authentication and query parameter verification are supported.
|
||||
Example: curl --request POST `http://127.0.0.1:6041/influxdb/v1/write?db=test` --user "root:taosdata" --data-binary "measurement,host=host1 field1=2i,field2=2.0 1577836800000000000"
|
||||
**Parameter Description**
|
||||
|
||||
### OpenTSDB
|
||||
- **`pauseQueryMemoryThreshold`**:
|
||||
- When memory usage exceeds this threshold, taosAdapter will stop processing query requests.
|
||||
- Default value: `70` (i.e. 70% of system physical memory).
|
||||
- **`pauseAllMemoryThreshold`**:
|
||||
- When memory usage exceeds this threshold, taosAdapter will stop processing all requests (including writes and queries).
|
||||
- Default value: `80` (i.e. 80% of system physical memory).
|
||||
|
||||
You can use any client that supports the HTTP protocol to write data in OpenTSDB compatible format to TDengine by accessing the Restful interface URL `http://<fqdn>:6041/<APIEndPoint>`. EndPoint as follows:
|
||||
When memory usage falls below the threshold, taosAdapter will automatically resume the corresponding function.
|
||||
|
||||
```text
|
||||
/opentsdb/v1/put/json/<db>
|
||||
/opentsdb/v1/put/telnet/<db>
|
||||
```
|
||||
**HTTP return content:**
|
||||
|
||||
### collectd
|
||||
- **When `pauseQueryMemoryThreshold` is exceeded**:
|
||||
- HTTP status code: `503`
|
||||
- Return content: `"query memory exceeds threshold"`
|
||||
|
||||
<CollectD />
|
||||
- **When `pauseAllMemoryThreshold` is exceeded**:
|
||||
- HTTP status code: `503`
|
||||
- Return content: `"memory exceeds threshold"`
|
||||
|
||||
### StatsD
|
||||
**Status check interface:**
|
||||
|
||||
<StatsD />
|
||||
The memory status of taosAdapter can be checked through the following interface:
|
||||
- **Normal status**: `http://<fqdn>:6041/-/ping` returns `code 200`.
|
||||
- **Memory exceeds threshold**:
|
||||
- If the memory exceeds `pauseAllMemoryThreshold`, `code 503` is returned.
|
||||
- If the memory exceeds `pauseQueryMemoryThreshold` and the request parameter contains `action=query`, `code 503` is returned.
|
||||
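For example, both probe variants described above can be exercised with curl, assuming taosAdapter listens on the default port 6041:

```shell
# Returns 200 while memory is below both thresholds; 503 above pauseAllMemoryThreshold.
curl -i http://127.0.0.1:6041/-/ping
# Additionally returns 503 once pauseQueryMemoryThreshold is exceeded.
curl -i "http://127.0.0.1:6041/-/ping?action=query"
```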
|
||||
### icinga2 OpenTSDB writer
|
||||
**Related configuration parameters:**
|
||||
|
||||
<Icinga2 />
|
||||
- **`monitor.collectDuration`**: memory monitoring interval, default value is `3s`, environment variable is `TAOS_MONITOR_COLLECT_DURATION`.
|
||||
- **`monitor.incgroup`**: whether to run in a container (set to `true` for running in a container), default value is `false`, environment variable is `TAOS_MONITOR_INCGROUP`.
|
||||
- **`monitor.pauseQueryMemoryThreshold`**: memory threshold (percentage) for query request pause, default value is `70`, environment variable is `TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD`.
|
||||
- **`monitor.pauseAllMemoryThreshold`**: memory threshold (percentage) for query and write request pause, default value is `80`, environment variable is `TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD`.
|
||||
|
||||
### TCollector
|
||||
You can adjust these thresholds to fit the specific application scenario and operations strategy; it is recommended to use monitoring software to track system memory status in a timely manner. A load balancer can also check the operation status of taosAdapter through this interface.
|
||||
|
||||
<TCollector />
|
||||
### Schemaless write create DB configuration
|
||||
|
||||
### node_exporter
|
||||
Starting from **version 3.0.4.0**, taosAdapter provides the parameter `smlAutoCreateDB` to control whether to automatically create a database (DB) when writing via the schemaless protocol.
|
||||
|
||||
An exporter used by Prometheus that exposes hardware and operating system metrics from \*NIX kernels
|
||||
The `smlAutoCreateDB` parameter only affects the following interfaces:
|
||||
|
||||
- Enable configuration of taosAdapter node_exporter.enable
|
||||
- Set the relevant configuration for node_exporter
|
||||
- Restart taosAdapter
|
||||
- InfluxDB v1 write interface
|
||||
- OpenTSDB JSON and telnet format writing
|
||||
- Telegraf data writing
|
||||
- collectd data writing
|
||||
- StatsD data writing
|
||||
- node_exporter data writing
|
||||
|
||||
### prometheus
|
||||
**Parameter Description**
|
||||
|
||||
<Prometheus />
|
||||
- **`smlAutoCreateDB`**:
|
||||
- **When set to `true`**: When writing via the schemaless protocol, if the target database does not exist, taosAdapter will automatically create it.
|
||||
- **When set to `false`**: The user needs to manually create the database, otherwise the write will fail (default value).
|
||||
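A minimal sketch of enabling automatic DB creation at startup, using the flag and environment variable shown in the usage listing above:

```shell
taosadapter --smlAutoCreateDB
# Or via environment variable:
TAOS_ADAPTER_SML_AUTO_CREATE_DB=true taosadapter
```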
|
||||
### Getting the VGroup ID of a table
|
||||
### Number of results returned configuration
|
||||
|
||||
You can send a POST request to the HTTP interface `http://<fqdn>:<port>/rest/sql/<db>/vgid` to get the VGroup ID of a table.
|
||||
The body should be a JSON array of multiple table names.
|
||||
taosAdapter provides the parameter `restfulRowLimit` to control the number of results returned by the HTTP interface.
|
||||
|
||||
Example: Get the VGroup ID for the database power and tables d_bind_1 and d_bind_2.
|
||||
The `restfulRowLimit` parameter only affects the return results of the following interfaces:
|
||||
|
||||
- RESTful interface
|
||||
- Prometheus remote_read interface
|
||||
|
||||
**Parameter Description**
|
||||
|
||||
- **`restfulRowLimit`**:
|
||||
- **When set to a positive integer**: The number of results returned by the interface will not exceed this value.
|
||||
- **When set to `-1`**: The number of results returned by the interface is unlimited (default value).
|
||||
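For example, to cap responses at 10,000 rows, using the flag and environment variable from the usage listing above:

```shell
taosadapter --restfulRowLimit 10000
# Or via environment variable:
TAOS_ADAPTER_RESTFUL_ROW_LIMIT=10000 taosadapter
```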
|
||||
### Log configuration
|
||||
|
||||
1. You can set the taosAdapter log output detail level via the `--log.level` parameter or the environment variable `TAOS_ADAPTER_LOG_LEVEL`. Valid values include: panic, fatal, error, warn, warning, info, debug, and trace.
|
||||
2. Starting from **version 3.3.5.0**, taosAdapter supports dynamic modification of the log level through an HTTP interface. Users can adjust the log level by sending an HTTP PUT request to the `/config` interface. This interface uses the same authentication as the `/rest/sql` interface, and the configuration key-value pairs must be passed in the request body in JSON format.
|
||||
|
||||
The following is an example of setting the log level to debug through the curl command:
|
||||
|
||||
```shell
|
||||
curl --location 'http://127.0.0.1:6041/rest/sql/power/vgid' \
|
||||
--user 'root:taosdata' \
|
||||
--data '["d_bind_1","d_bind_2"]'
|
||||
curl --location --request PUT 'http://127.0.0.1:6041/config' \
|
||||
-u root:taosdata \
|
||||
--data '{"log.level": "debug"}'
|
||||
```
|
||||
|
||||
response:
|
||||
## Service Management
|
||||
|
||||
```json
|
||||
{"code":0,"vgIDs":[153,152]}
|
||||
```
|
||||
### Starting/Stopping taosAdapter
|
||||
|
||||
## Memory Usage Optimization Methods
|
||||
On Linux systems, the taosAdapter service is managed by default by systemd. Use the command `systemctl start taosadapter` to start the taosAdapter service. Use the command `systemctl stop taosadapter` to stop the taosAdapter service.
|
||||
|
||||
taosAdapter will monitor its memory usage during operation and adjust it through two thresholds. Valid values range from -1 to 100 as a percentage of system physical memory.
|
||||
### Upgrading taosAdapter
|
||||
|
||||
- pauseQueryMemoryThreshold
|
||||
- pauseAllMemoryThreshold
|
||||
taosAdapter and TDengine server need to use the same version. Please upgrade taosAdapter by upgrading the TDengine server.
|
||||
taosAdapter deployed separately from taosd must be upgraded by upgrading the TDengine server on its server.
|
||||
|
||||
When the pauseQueryMemoryThreshold threshold is exceeded, it stops processing query requests.
|
||||
### Removing taosAdapter
|
||||
|
||||
HTTP return content:
|
||||
Use the command `rmtaos` to remove the TDengine server software, including taosAdapter.
|
||||
|
||||
- code 503
|
||||
- body "query memory exceeds threshold"
|
||||
## Monitoring Metrics
|
||||
|
||||
When the pauseAllMemoryThreshold threshold is exceeded, it stops processing all write and query requests.
|
||||
Currently, taosAdapter only collects monitoring indicators for RESTful/WebSocket related requests. There are no monitoring indicators for other interfaces.
|
||||
|
||||
HTTP return content:
|
||||
taosAdapter reports monitoring indicators to taosKeeper, which writes them into the monitoring database. The default is the `log` database, which can be modified in the taosKeeper configuration file. The following is a detailed introduction to these monitoring indicators.
|
||||
|
||||
- code 503
|
||||
- body "memory exceeds threshold"
|
||||
|
||||
When memory falls below the threshold, the corresponding functions are resumed.
|
||||
|
||||
Status check interface `http://<fqdn>:6041/-/ping`
|
||||
|
||||
- Normally returns `code 200`
|
||||
- Without parameters: if memory exceeds pauseAllMemoryThreshold, it will return `code 503`
|
||||
- With the request parameter `action=query`: if memory exceeds either pauseQueryMemoryThreshold or pauseAllMemoryThreshold, it will return `code 503`
|
||||
|
||||
Corresponding configuration parameters
|
||||
|
||||
```text
|
||||
monitor.collectDuration Monitoring interval Environment variable "TAOS_MONITOR_COLLECT_DURATION" (default value 3s)
|
||||
monitor.incgroup Whether it is running in cgroup (set to true in containers) Environment variable "TAOS_MONITOR_INCGROUP"
|
||||
monitor.pauseAllMemoryThreshold Memory threshold for stopping inserts and queries Environment variable "TAOS_MONITOR_PAUSE_ALL_MEMORY_THRESHOLD" (default value 80)
|
||||
monitor.pauseQueryMemoryThreshold Memory threshold for stopping queries Environment variable "TAOS_MONITOR_PAUSE_QUERY_MEMORY_THRESHOLD" (default value 70)
|
||||
```
|
||||
|
||||
You can adjust according to the specific project application scenario and operational strategy, and it is recommended to use operational monitoring software to monitor the system memory status in real time. Load balancers can also check the running status of taosAdapter through this interface.
|
||||
|
||||
## taosAdapter Monitoring Metrics
|
||||
|
||||
taosAdapter collects monitoring metrics related to REST/WebSocket requests. These metrics are reported to taosKeeper, which writes them into the monitoring database (by default the `log` database; this can be modified in the taosKeeper configuration file). Below is a detailed introduction to these monitoring metrics.
|
||||
|
||||
### adapter_requests table
|
||||
|
||||
`adapter_requests` records taosAdapter monitoring data.
|
||||
The `adapter_requests` table records taosAdapter monitoring data, and the fields are as follows:
|
||||
|
||||
| field | type | is_tag | comment |
|
||||
| :--------------- | :----------- | :----- | :---------------------------------------- |
|
||||
|
@ -354,32 +423,10 @@ taosAdapter collects monitoring metrics related to REST/WebSocket requests. Thes
|
|||
| endpoint | VARCHAR | | request endpoint |
|
||||
| req_type | NCHAR | tag | request type: 0 for REST, 1 for WebSocket |
|
||||
|
||||
## Result Return Limit
|
||||
|
||||
taosAdapter controls the number of results returned through the parameter `restfulRowLimit`; -1 means no limit, which is the default.
|
||||
## Changes after upgrading httpd to taosAdapter
|
||||
|
||||
This parameter controls the results returned by the following interfaces:
|
||||
|
||||
- `http://<fqdn>:6041/rest/sql`
|
||||
- `http://<fqdn>:6041/prometheus/v1/remote_read/:db`
|
||||
|
||||
## Configure HTTP Return Codes
|
||||
|
||||
taosAdapter uses the parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error. When set to true, it will return different HTTP status codes based on the error code returned by C. See [HTTP Response Codes](../../client-libraries/rest-api/) for details.
|
||||
|
||||
## Configure Automatic DB Creation for Schemaless Writes
|
||||
|
||||
Starting from version 3.0.4.0, taosAdapter provides the parameter `smlAutoCreateDB` to control whether to automatically create a DB when writing via the schemaless protocol. The default value is false, which does not automatically create a DB, and requires the user to manually create a DB before performing schemaless writes.
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
You can check the running status of taosAdapter with the command `systemctl status taosadapter`.
|
||||
|
||||
You can also adjust the detail level of taosAdapter log output by setting the --logLevel parameter or the environment variable TAOS_ADAPTER_LOG_LEVEL. Valid values include: panic, fatal, error, warn, warning, info, debug, and trace.
|
||||
|
||||
## How to Migrate from Older Versions of TDengine to taosAdapter
|
||||
|
||||
In TDengine server version 2.2.x.x or earlier, the taosd process included an embedded HTTP service. As mentioned earlier, taosAdapter is a standalone software managed by systemd, having its own process. Moreover, there are some differences in configuration parameters and behaviors between the two, as shown in the table below:
|
||||
In TDengine server version 2.2.x.x or earlier, the taosd process included an embedded HTTP service (httpd). As mentioned earlier, taosAdapter is standalone software managed by systemd, with its own process. Moreover, there are some differences in configuration parameters and behaviors between the two, as shown in the table below:
|
||||
|
||||
| **#** | **embedded httpd** | **taosAdapter** | **comment** |
|
||||
| ----- | ------------------- | ---------------------------------------------------------- | ------------------------------------------------------------ |
|
||||
|
|
|
@ -70,6 +70,7 @@ Metric details (from top to bottom, left to right):
|
|||
- **Databases** - Number of databases.
|
||||
- **Connections** - Current number of connections.
|
||||
- **DNodes/MNodes/VGroups/VNodes**: Total and alive count of each resource.
|
||||
- **Classified Connection Counts**: The current number of active connections, classified by user, application, and IP.
|
||||
- **DNodes/MNodes/VGroups/VNodes Alive Percent**: The ratio of alive/total for each resource, enable alert rules, and trigger when the resource survival rate (average healthy resource ratio within 1 minute) is less than 100%.
|
||||
- **Measuring Points Used**: Number of measuring points used with alert rules enabled (no data for community edition, healthy by default).
|
||||
|
||||
|
@ -183,22 +184,22 @@ After importing, click on "Alert rules" on the left side of the Grafana interfac
|
|||
|
||||
The specific configuration of the 14 alert rules is as follows:
|
||||
|
||||
| alert rule| Rule threshold| Behavior when no data | Data scanning interval |Duration | SQL |
|
||||
| ------ | --------- | ---------------- | ----------- |------- |----------------------|
|
||||
|CPU load of dnode node|average > 80%|Trigger alert|5 minutes|5 minutes |`select now(), dnode_id, last(cpu_system) as cpu_use from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts < now partition by dnode_id having first(_ts) > 0 `|
|
||||
|Memory of dnode node |average > 60%|Trigger alert|5 minutes|5 minutes|`select now(), dnode_id, last(mem_engine) / last(mem_total) * 100 as taosd from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts <now partition by dnode_id`|
|
||||
|Disk capacity occupancy of dnode nodes | > 80%|Trigger alert|5 minutes|5 minutes|`select now(), dnode_id, data_dir_level, data_dir_name, last(used) / last(total) * 100 as used from log.taosd_dnodes_data_dirs where _ts >= (now - 5m) and _ts < now partition by dnode_id, data_dir_level, data_dir_name`|
|
||||
|Authorization expires |< 60 days|Trigger alert|1 day|0 seconds|`select now(), cluster_id, last(grants_expire_time) / 86400 as expire_time from log.taosd_cluster_info where _ts >= (now - 24h) and _ts < now partition by cluster_id having first(_ts) > 0 `|
|
||||
|The number of used measurement points has reached the authorized limit|>= 90%|Trigger alert|1 day|0 seconds|`select now(), cluster_id, CASE WHEN max(grants_timeseries_total) > 0.0 THEN max(grants_timeseries_used) /max(grants_timeseries_total) * 100.0 ELSE 0.0 END AS result from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1m) > 0`|
|
||||
|Number of concurrent query requests | > 100|Do not trigger alert|1 minute|0 seconds|`select now() as ts, count(*) as slow_count from performance_schema.perf_queries`|
|
||||
|Maximum time for slow query execution (no time window) |> 300 seconds|Do not trigger alert|1 minute|0 seconds|`select now() as ts, count(*) as slow_count from performance_schema.perf_queries where exec_usec>300000000`|
|
||||
|dnode offline |total != alive|Trigger alert|30 seconds|0 seconds|`select now(), cluster_id, last(dnodes_total) - last(dnodes_alive) as dnode_offline from log.taosd_cluster_info where _ts >= (now -30s) and _ts < now partition by cluster_id having first(_ts) > 0`|
|
||||
|vnode offline |total != alive|Trigger alert|30 seconds|0 seconds|`select now(), cluster_id, last(vnodes_total) - last(vnodes_alive) as vnode_offline from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having first(_ts) > 0 `|
|
||||
|Number of data deletion requests |> 0|Do not trigger alert|30 seconds|0 seconds|``select now(), count(`count`) as `delete_count` from log.taos_sql_req where sql_type = 'delete' and _ts >= (now -30s) and _ts < now``|
|
||||
|Adapter RESTful request fail |> 5|Do not trigger alert|30 seconds|0 seconds|``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=0 and ts >= (now -30s) and ts < now``|
|
||||
|Adapter WebSocket request fail |> 5|Do not trigger alert|30 seconds|0 seconds|``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=1 and ts >= (now -30s) and ts < now``|
|
||||
|Dnode data reporting is missing |< 3|Trigger alert|180 seconds|0 seconds|`select now(), cluster_id, count(*) as dnode_report from log.taosd_cluster_info where _ts >= (now -180s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1h) > 0`|
|
||||
|Restart dnode |max(update_time) > last(update_time)|Trigger alert|90 seconds|0 seconds|`select now(), dnode_id, max(uptime) - last(uptime) as dnode_restart from log.taosd_dnodes_info where _ts >= (now - 90s) and _ts < now partition by dnode_id`|
|
||||
| alert rule | Rule threshold | Behavior when no data | Data scanning interval | Duration | SQL |
|
||||
| ------------------------------------------------------------- | ------------------------------------ | --------------------- | ---------------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| CPU load of dnode node | average > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(cpu_system) as cpu_use from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts < now partition by dnode_id having first(_ts) > 0 ` |
|
||||
| Memory of dnode node | average > 60% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, last(mem_engine) / last(mem_total) * 100 as taosd from log.taosd_dnodes_info where _ts >= (now- 5m) and _ts <now partition by dnode_id` |
|
||||
| Disk capacity occupancy of dnode nodes | > 80% | Trigger alert | 5 minutes | 5 minutes | `select now(), dnode_id, data_dir_level, data_dir_name, last(used) / last(total) * 100 as used from log.taosd_dnodes_data_dirs where _ts >= (now - 5m) and _ts < now partition by dnode_id, data_dir_level, data_dir_name` |
|
||||
| Authorization expires | < 60 days | Trigger alert | 1 day | 0 seconds | `select now(), cluster_id, last(grants_expire_time) / 86400 as expire_time from log.taosd_cluster_info where _ts >= (now - 24h) and _ts < now partition by cluster_id having first(_ts) > 0 ` |
|
||||
| The number of used measurement points has reached the authorized limit | >= 90% | Trigger alert | 1 day | 0 seconds | `select now(), cluster_id, CASE WHEN max(grants_timeseries_total) > 0.0 THEN max(grants_timeseries_used) /max(grants_timeseries_total) * 100.0 ELSE 0.0 END AS result from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1m) > 0` |
|
||||
| Number of concurrent query requests | > 100 | Do not trigger alert | 1 minute | 0 seconds | `select now() as ts, count(*) as slow_count from performance_schema.perf_queries` |
|
||||
| Maximum time for slow query execution (no time window) | > 300 seconds | Do not trigger alert | 1 minute | 0 seconds | `select now() as ts, count(*) as slow_count from performance_schema.perf_queries where exec_usec>300000000` |
|
||||
| dnode offline | total != alive | Trigger alert | 30 seconds | 0 seconds | `select now(), cluster_id, last(dnodes_total) - last(dnodes_alive) as dnode_offline from log.taosd_cluster_info where _ts >= (now -30s) and _ts < now partition by cluster_id having first(_ts) > 0` |
|
||||
| vnode offline | total != alive | Trigger alert | 30 seconds | 0 seconds | `select now(), cluster_id, last(vnodes_total) - last(vnodes_alive) as vnode_offline from log.taosd_cluster_info where _ts >= (now - 30s) and _ts < now partition by cluster_id having first(_ts) > 0 ` |
|
||||
| Number of data deletion requests | > 0 | Do not trigger alert | 30 seconds | 0 seconds | ``select now(), count(`count`) as `delete_count` from log.taos_sql_req where sql_type = 'delete' and _ts >= (now -30s) and _ts < now`` |
|
||||
| Adapter RESTful request fail | > 5 | Do not trigger alert | 30 seconds | 0 seconds | ``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=0 and ts >= (now -30s) and ts < now`` |
|
||||
| Adapter WebSocket request fail | > 5 | Do not trigger alert | 30 seconds | 0 seconds | ``select now(), sum(`fail`) as `Failed` from log.adapter_requests where req_type=1 and ts >= (now -30s) and ts < now`` |
|
||||
| Dnode data reporting is missing | < 3 | Trigger alert | 180 seconds | 0 seconds | `select now(), cluster_id, count(*) as dnode_report from log.taosd_cluster_info where _ts >= (now -180s) and _ts < now partition by cluster_id having timetruncate(first(_ts), 1h) > 0` |
|
||||
| Restart dnode | max(update_time) > last(update_time) | Trigger alert | 90 seconds | 0 seconds | `select now(), dnode_id, max(uptime) - last(uptime) as dnode_restart from log.taosd_dnodes_info where _ts >= (now - 90s) and _ts < now partition by dnode_id` |
|
||||
|
||||
TDengine users can modify and extend these alert rules according to their business needs. In Grafana 7.5 and earlier, the Dashboard and Alert rules functions were combined, while later versions separate the two. To remain compatible with Grafana 7.5 and earlier, an Alert Used Only panel has been added to the TDinsight dashboard; it is required only for those versions.
|
||||
|
||||
|
@ -258,19 +259,19 @@ Install and configure TDinsight dashboard in Grafana on Ubuntu 18.04/20.04 syste
|
|||
|
||||
Most command line options can also be achieved through environment variables.
|
||||
|
||||
| Short Option | Long Option | Environment Variable | Description |
|
||||
| ------------ | ------------------------------- | ------------------------------ | -------------------------------------------------------- |
|
||||
| -v | --plugin-version | TDENGINE_PLUGIN_VERSION | TDengine datasource plugin version, default is latest. |
|
||||
| -P | --grafana-provisioning-dir | GF_PROVISIONING_DIR | Grafana provisioning directory, default is `/etc/grafana/provisioning/` |
|
||||
| -G | --grafana-plugins-dir | GF_PLUGINS_DIR | Grafana plugins directory, default is `/var/lib/grafana/plugins`. |
|
||||
| -O | --grafana-org-id | GF_ORG_ID | Grafana organization ID, default is 1. |
|
||||
| -n | --tdengine-ds-name | TDENGINE_DS_NAME | TDengine datasource name, default is TDengine. |
|
||||
| -a | --tdengine-api | TDENGINE_API | TDengine REST API endpoint. Default is `http://127.0.0.1:6041`. |
|
||||
| -u | --tdengine-user | TDENGINE_USER | TDengine user name. [default: root] |
|
||||
| -p | --tdengine-password | TDENGINE_PASSWORD | TDengine password. [default: taosdata] |
|
||||
| -i | --tdinsight-uid | TDINSIGHT_DASHBOARD_UID | TDinsight dashboard `uid`. [default: tdinsight] |
|
||||
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [default: TDinsight] |
|
||||
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the provisioning dashboard could be editable. [default: false] |
|
||||
| Short Option | Long Option | Environment Variable | Description |
|
||||
| ------------ | -------------------------- | ---------------------------- | ----------------------------------------------------------------------- |
|
||||
| -v | --plugin-version | TDENGINE_PLUGIN_VERSION | TDengine datasource plugin version, default is latest. |
|
||||
| -P | --grafana-provisioning-dir | GF_PROVISIONING_DIR | Grafana provisioning directory, default is `/etc/grafana/provisioning/` |
|
||||
| -G | --grafana-plugins-dir | GF_PLUGINS_DIR | Grafana plugins directory, default is `/var/lib/grafana/plugins`. |
|
||||
| -O | --grafana-org-id | GF_ORG_ID | Grafana organization ID, default is 1. |
|
||||
| -n | --tdengine-ds-name | TDENGINE_DS_NAME | TDengine datasource name, default is TDengine. |
|
||||
| -a | --tdengine-api | TDENGINE_API | TDengine REST API endpoint. Default is `http://127.0.0.1:6041`. |
|
||||
| -u | --tdengine-user | TDENGINE_USER | TDengine user name. [default: root] |
|
||||
| -p | --tdengine-password | TDENGINE_PASSWORD | TDengine password. [default: taosdata] |
|
||||
| -i | --tdinsight-uid | TDINSIGHT_DASHBOARD_UID | TDinsight dashboard `uid`. [default: tdinsight] |
|
||||
| -t | --tdinsight-title | TDINSIGHT_DASHBOARD_TITLE | TDinsight dashboard title. [default: TDinsight] |
|
||||
| -e | --tdinsight-editable | TDINSIGHT_DASHBOARD_EDITABLE | If the provisioning dashboard could be editable. [default: false] |
|
||||
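For instance, a typical invocation might look like the sketch below. The script name `TDinsight.sh` is an assumption based on the installation step, and the values should be adjusted to your deployment:

```shell
# Provision the TDinsight dashboard against a local TDengine REST endpoint.
./TDinsight.sh -a http://127.0.0.1:6041 -u root -p taosdata -n TDengine
```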
|
||||
:::note
|
||||
The new version of the plugin uses the Grafana unified alerting feature, the `-E` option is no longer supported.
|
||||
|
|
|
@ -6,6 +6,9 @@ slug: /tdengine-reference/tools/tdengine-cli
|
|||
|
||||
The TDengine command line program (hereinafter referred to as TDengine CLI) is the simplest and most commonly used tool for users to operate and interact with TDengine instances. It requires the installation of either the TDengine Server package or the TDengine Client package before use.
|
||||
|
||||
## Get
|
||||
TDengine CLI is installed by default as a component of the TDengine server and client packages and can be used right after installation; refer to [TDengine Installation](../../../get-started/)
|
||||
|
||||
## Startup
|
||||
|
||||
To enter the TDengine CLI, simply execute `taos` in the terminal.
|
||||
|
@ -29,15 +32,83 @@ To exit the TDengine CLI, execute `q`, `quit`, or `exit` and press enter.
|
|||
taos> quit
|
||||
```
|
||||
|
||||
## Command Line Parameters
|
||||
|
||||
### Basic Parameters
|
||||
|
||||
You can change the behavior of the TDengine CLI by configuring command line parameters. Below are some commonly used command line parameters:
|
||||
|
||||
- -h HOST: The FQDN of the server where the TDengine service is located; by default it connects to the local service.
|
||||
- -P PORT: Specifies the port number used by the server.
|
||||
- -u USER: Username to use when connecting.
|
||||
- -p PASSWORD: Password to use when connecting to the server.
|
||||
- -?, --help: Prints out all command line parameters.
|
||||
- -s COMMAND: SQL command executed in non-interactive mode.
|
||||
Use the `-s` parameter to execute SQL non interactively, and exit after execution. This mode is suitable for use in automated scripts.
|
||||
For example, connect to the server h1.taos.com with the following command, and execute the SQL specified by `-s`:
|
||||
```bash
|
||||
taos -h my-server -s "use db; show tables;"
|
||||
```
|
||||
|
||||
- -c CONFIGDIR: Specify the configuration file directory.
|
||||
In Linux, the default is `/etc/tao`. The default name of the configuration file in this directory is `taos.cfg`.
|
||||
Use the `-c` parameter to change the location where the `taosc` client loads the configuration file. For client configuration parameters, refer to [Client Configuration](../../components/taosc).
|
||||
The following command specifies the `taos.cfg` configuration file under `/root/cfg/` loaded by the `taosc` client.
|
||||
|
||||
```bash
|
||||
taos -c /root/cfg/
|
||||
```
|
||||
|
||||
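For example, a minimal sketch of such a directory might contain only a `taos.cfg` that points the client at a cluster; `firstEp` is a standard `taos.cfg` key, while the host name is illustrative.

```bash
# Sketch: create a private configuration directory and load it with -c.
# firstEp is the standard taos.cfg key for the first cluster endpoint;
# the host name my-server is illustrative.
mkdir -p /root/cfg
cat > /root/cfg/taos.cfg <<'EOF'
firstEp  my-server:6030
EOF
taos -c /root/cfg/
```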
### Advanced Parameters

- -a AUTHSTR: Authorization information for connecting to the server.
- -A: Calculate authorization information using username and password.
- -B: Set BI tool display mode; after setting, all outputs follow the format of BI tools.
- -C: Print the configuration parameters of `taos.cfg` in the directory specified by -c.
- -d DATABASE: Specifies the database to use when connecting to the server.
- -E dsn: Connect to cloud services or servers providing WebSocket connections using a WebSocket DSN.
- -f FILE: Execute an SQL script file in non-interactive mode. Each SQL statement in the file must occupy one line.
- -k: Test the running status of the server; 0: unavailable, 1: network ok, 2: service ok, 3: service degraded, 4: exiting.
- -l PKTLEN: Test packet size used during network testing.
- -n NETROLE: Test range during network connection testing; default is `client`, options are `client`, `server`.
- -N PKTNUM: Number of test packets used during network testing.
- -r: Convert time columns to unsigned 64-bit integer output (i.e., uint64_t in C).
- -R: Connect to the server using RESTful mode.
- -t: Test the startup status of the server; statuses are the same as for -k.
- -w DISPLAYWIDTH: Client column display width.
- -z TIMEZONE: Specifies the timezone; default is the local timezone.
- -V: Print the current version number.

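A short sketch combining a few of these diagnostics; the host name is illustrative, and this assumes `-k` and `-n` honor `-h` as described above.

```bash
# Sketch: check service status of a remote server, then run a network test.
# For the client-side network test, `taos -n server` must be running on the server.
taos -h my-server -k          # prints a status of 0-4 as described above
taos -h my-server -n client   # network connectivity test, acting as client
```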
## Data Export/Import

### Data Export to a File

- You can use the symbol `>>` to export query results to a file; the syntax is: sql query statement >> 'output file name'. If no path is given for the output file, it is written to the current directory. For example, `select * from d0 >> '/root/d0.csv';` writes the query results to /root/d0.csv.

### Data Import from a File

- You can use `insert into table_name file 'input file name'` to import the data file exported in the previous step back into the specified table. For example, `insert into d0 file '/root/d0.csv';` imports all the data exported above back into the d0 table. A scripted round trip is sketched below.

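Put together, a minimal export/import round trip can be scripted with the CLI's non-interactive mode; the database, table, and file names are illustrative.

```bash
# Sketch: export a table to CSV, then load the file back into the same table.
taos -s "select * from test.d0 >> '/tmp/d0.csv';"
taos -s "insert into test.d0 file '/tmp/d0.csv';"
```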
## Execute SQL Scripts

In the TDengine CLI, you can run multiple SQL commands from a script file using the `source` command; the SQL statements in the script file may span multiple lines.

```sql
taos> source <filename>;
```

## TDengine CLI Tips

### TAB Key Completion

- Pressing the TAB key on an empty line lists all commands supported by the TDengine CLI.
- Pressing the TAB key after a space displays the first of all possible command words at that position; pressing TAB again switches to the next one.
- Pressing the TAB key after a string searches for all command words matching that prefix and displays the first one; pressing TAB again switches to the next one.
- Entering a backslash `\` + TAB key automatically completes to the column display mode command word `\G;`.

### Modification of Display Character Width

You can adjust the display character width in the TDengine CLI using the following command:

@@ -47,71 +118,16 @@ taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;

If the displayed content ends with ..., the content has been truncated. You can modify the display character width with this command to show the full content.

### Other

- You can use the up and down arrow keys to view previously entered commands.
- In the TDengine CLI, use the `alter user` command to change a user's password; the default password is `taosdata`.
- Press Ctrl+C to stop a query in progress.
- Execute `RESET QUERY CACHE` to clear the cache of the local table schema.
- Batch execute SQL statements, as sketched below.
  You can store a series of TDengine CLI commands (each ending with a semicolon `;`, each SQL statement on a new line) in a file, and execute the command `source <file-name>` in the TDengine CLI to automatically run all SQL statements in that file.
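A minimal sketch of this workflow; the file path and statements are illustrative.

```bash
# Sketch: store several statements in a file, one per line, each ending
# with a semicolon, then run them from inside the TDengine CLI.
cat > /tmp/batch.sql <<'EOF'
create database if not exists demo;
use demo;
show tables;
EOF
# then, inside the TDengine CLI:
#   taos> source /tmp/batch.sql;
```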

## Configuration File

You can also control the behavior of the TDengine CLI through parameters set in the configuration file. For available configuration parameters, please refer to [Client Configuration](../../components/taosc).

## Error Code Table

Starting from TDengine version 3.3.4.8, the TDengine CLI returns specific error codes in error messages. Users can visit the error code page on the TDengine official website for specific causes and solutions; see [Error Code Reference](../../error-codes/).

@@ -6,49 +6,26 @@ slug: /tdengine-reference/tools/taosdump

`taosdump` is a TDengine data backup/recovery tool provided for open-source users. The backed-up data files use the standard [Apache AVRO](https://avro.apache.org/) format, making it convenient to exchange data with the external ecosystem.
taosdump provides multiple data backup and recovery options to meet different needs; all supported options can be viewed with --help.

## Get

taosdump is a default component of the TDengine server and client installation packages and can be used directly after installation; see [TDengine Installation](../../../get-started/).

## Startup

taosdump must be run from a command line terminal, and it must be run with parameters indicating a backup or restore operation, for example:

```bash
taosdump -h my-server -D test -o /root/test/
```

The above command backs up the `test` database on the `my-server` host to the `/root/test/` directory.

```bash
taosdump -h my-server -i /root/test/
```

The above command restores the previously backed-up data files in the `/root/test/` directory to the host named `my-server`.

## Command Line Parameters

Below is the detailed command line parameter list for taosdump:

@@ -69,8 +46,8 @@ Usage: taosdump [OPTION...] dbname [tbname ...]
  -c, --config-dir=CONFIG_DIR Configure directory. Default is /etc/taos
  -i, --inpath=INPATH         Input file path.
  -o, --outpath=OUTPATH       Output file path.
  -r, --resultFile=RESULTFILE DumpOut/In Result file path and name.
  -a, --allow-sys             Allow to dump system database.
  -A, --all-databases         Dump all databases.
  -D, --databases=DATABASES   Dump inputted databases. Use comma to separate
                              databases' name.
@@ -102,23 +79,54 @@ Usage: taosdump [OPTION...] dbname [tbname ...]
  -n, --no-escape             No escape char '`'. Default is using it.
  -Q, --dot-replace           Replace dot character with underline character in
                              the table name.(Version 2.5.3)
  -T, --thread-num=THREAD_NUM Number of thread for dump in file. Default is 8.
  -W, --rename=RENAME-LIST    Rename database name with new name during
                              importing data. RENAME-LIST:
                              "db1=newDB1|db2=newDB2" means rename db1 to newDB1
                              and rename db2 to newDB2 (Version 2.5.4).
  -k, --retry-count=VALUE     Set the number of retry attempts for connection or
                              query failures.
  -z, --retry-sleep-ms=VALUE  retry interval sleep time, unit ms.
  -C, --cloud=CLOUD_DSN       specify a DSN to access TDengine cloud service.
  -R, --restful               Use RESTful interface to connect TDengine.
  -t, --timeout=SECONDS       The timeout seconds for websocket to interact.
  -g, --debug                 Print debug info.
  -?, --help                  Give this help list.
      --usage                 Give a short usage message.
  -V, --version               Print program version.

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.
```

## Common Use Cases

### Backup Data

1. Backup all databases: specify the `-A` or `--all-databases` parameter.
2. Backup multiple specified databases: use the `-D db1,db2,...` parameter.
3. Backup certain supertables or basic tables in a specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` parameters; note that this input sequence starts with the database name, supports only one database, and the second and subsequent parameters are the names of supertables or basic tables in that database, separated by spaces.
4. Backup the system log database: TDengine clusters usually include a system database named `log`, which contains data for TDengine's own operation; taosdump does not back up the log database by default. If there is a specific need to back it up, use the `-a` or `--allow-sys` command line parameter.
5. "Tolerant" mode backup: taosdump versions after 1.4.1 provide the `-n` and `-L` parameters for backing up data without escape characters, in "tolerant" mode, which can reduce backup time and space when table names, column names, and tag names do not use escape characters. If unsure whether to use `-n` and `-L`, use the default parameters for "strict" mode backup. For an explanation of escape characters, please refer to the [official documentation](../../sql-manual/escape-characters/).
6. If a backup file already exists in the directory specified by the `-o` parameter, taosdump reports an error and exits to prevent data from being overwritten. Use another empty directory or clear the original data before backing up.
7. Currently, taosdump does not support breakpoint resume for backups; once a backup is interrupted, it must be restarted from scratch.
   If the backup takes a long time, it is recommended to use the `-S`/`-E` options to specify start/end times for segmented backup, as sketched below.
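A sketch of such a segmented backup, assuming `-S`/`-E` accept timestamps in the format shown; the host and paths are illustrative.

```bash
# Sketch: back up the test database in monthly segments using start/end time.
taosdump -h my-server -D test -S '2025-01-01 00:00:00.000' -E '2025-01-31 23:59:59.999' -o /backup/test-2025-01/
taosdump -h my-server -D test -S '2025-02-01 00:00:00.000' -E '2025-02-28 23:59:59.999' -o /backup/test-2025-02/
```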

:::tip

- taosdump versions after 1.4.1 provide the `-I` parameter for parsing avro file schema and data; specifying the `-s` parameter will parse only the schema.
- Backups after taosdump 1.4.2 use the `-B` parameter to specify the number of batches; the default value is 16384. If "Error actual dump .. batch .." occurs due to insufficient network speed or disk performance in some environments, try a smaller `-B` value, as sketched below.
- taosdump's export does not support interruption recovery, so the correct way to handle an unexpected termination of the process is to delete all related files that have been exported or generated.
- taosdump's import supports interruption recovery, but when the process restarts you may receive some "table already exists" prompts, which can be ignored.

:::
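For instance, if a dump fails with the batch error mentioned in the tip above, a retry with a smaller `-B` value might look like this; the host and paths are illustrative.

```bash
# Sketch: retry a backup with a smaller batch size to work around
# slow disks or networks, per the tip above.
taosdump -h my-server -D test -B 4096 -o /backup/test/
```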

### Restore Data

- Restore data files from a specified path: use the `-i` parameter along with the data file path. As mentioned earlier, the same directory should not be used to back up different data sets, nor should the same path be used to back up the same data set multiple times; otherwise the backup data will be overwritten or duplicated.
- taosdump supports restoring data into a new database name with the `-W` parameter; refer to the command line parameter description for details.

:::tip
taosdump internally uses the TDengine stmt binding API to write restored data, currently using batches of 16384 for writing. If the backup data has many columns, a "WAL size exceeds limit" error may occur; in that case, try a smaller `-B` value.
:::

@@ -2,34 +2,33 @@
title: taosBenchmark Reference
sidebar_label: taosBenchmark
slug: /tdengine-reference/tools/taosbenchmark
toc_max_heading_level: 4
---

taosBenchmark is a performance benchmarking tool for TDengine products, providing insertion, query, and subscription performance testing for TDengine products, and outputting performance indicators.

## Get

taosBenchmark is a default component of the TDengine server and client installation packages and can be used directly after installation; see [TDengine Installation](../../../get-started/).

## Startup

taosBenchmark supports three operating modes:

- No Parameter Mode
- Command Line Mode
- JSON Configuration File Mode

`Command Line Mode` is a subset of the `JSON Configuration File Mode` functionality. When both are used at the same time, `Command Line Mode` takes precedence.

:::tip
Ensure that the TDengine cluster is running correctly before running taosBenchmark.
:::

### No Parameter Mode

Execute the following command to quickly experience taosBenchmark performing a write performance test on TDengine based on the default configuration.

```shell
taosBenchmark
```

@@ -37,17 +36,18 @@ taosBenchmark

When running without parameters, taosBenchmark defaults to connecting to the TDengine cluster specified in `/etc/taos/taos.cfg`.
After a successful connection, a smart-meter example database `test` and a supertable `meters` are created, along with 10000 subtables holding 10000 records each. If the `test` database already exists, it is deleted before a new one is created.

### Command Line Mode

The parameters supported by the command line are those frequently used in the write function; the query and subscription functions do not support command line mode.
For example:

```bash
taosBenchmark -d db -t 100 -n 1000 -T 4 -I stmt -y
```

This command means that `taosBenchmark` will create a database named `db`, create the default supertable `meters` with 100 subtables, and use parameter binding (stmt) to write 1000 records into each subtable.

### JSON Configuration File Mode

Configuration file mode provides all features, so all parameters can be configured and run from the configuration file.

@@ -55,42 +55,8 @@ Running in configuration file mode provides all functions, so parameters can be

```shell
taosBenchmark -f <json file>
```

## Command Line Parameters

- **-f/--file \<json file>** :
  The JSON configuration file to use, specifying all parameters. This parameter cannot be used simultaneously with other command line parameters. There is no default value.

@@ -216,61 +182,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
  Displays help information and exits. Cannot be used with other parameters.

## Configuration File Parameters

### General Configuration Parameters

@@ -287,7 +199,7 @@ The parameters listed in this section apply to all functional modes.

- **password** : Password for connecting to the TDengine server, default value is taosdata.

### Insertion Configuration Parameters

In insertion scenarios, `filetype` must be set to `insert`. For this parameter and other common parameters, see [General Configuration Parameters](#general-configuration-parameters).

@@ -302,7 +214,7 @@ In insertion scenarios, `filetype` must be set to `insert`. For this parameter a
  "continue_if_fail": "yes", taosBenchmark warns the user and continues writing
  "continue_if_fail": "smart", if the child table does not exist upon failure, taosBenchmark will create the child table and continue writing

#### Database Parameters

Parameters related to database creation are configured in the `dbinfo` section of the JSON configuration file; specific parameters are as follows. Other parameters correspond to those specified in TDengine's `create database`; see [Database](../../taos-sql/database).

@@ -310,23 +222,7 @@ Parameters related to database creation are configured in the `dbinfo` section o

- **drop**: Whether to delete the database before insertion, options are "yes" or "no"; "no" means do not delete. Default is to delete.

#### Stream Computing Related Configuration Parameters

Parameters related to stream computing are configured in the `stream` section of the JSON configuration file; specific parameters are as follows.

- **stream_name**: Name of the stream computation, mandatory.
- **stream_stb**: Name of the supertable corresponding to the stream computation, mandatory.
- **stream_sql**: SQL statement for the stream computation, mandatory.
- **trigger_mode**: Trigger mode for the stream computation, optional.
- **watermark**: Watermark for the stream computation, optional.
- **drop**: Whether to create the stream computation, options are "yes" or "no"; "no" means do not create.

#### Supertable Parameters

Parameters related to supertable creation are configured in the `super_tables` section of the JSON configuration file; specific parameters are as follows.

@@ -334,9 +230,9 @@ Parameters related to supertable creation are configured in the `super_tables` s

- **child_table_exists**: Whether the child table already exists, default is "no", options are "yes" or "no".

- **childtable_count**: Number of child tables, default is 10.

- **childtable_prefix**: Prefix for child table names, mandatory, no default value.

- **escape_character**: Whether the supertable and child table names contain escape characters, default is "no", options are "yes" or "no".

@@ -360,7 +256,7 @@ Parameters related to supertable creation are configured in the `super_tables` s

- **childtable_limit** : Effective only when child_table_exists is yes, specifies the limit when getting the subtable list from the supertable.

- **interlace_rows** : Enables interlaced insertion mode and specifies the number of rows to insert into each subtable at a time. Interlaced insertion mode means inserting the number of rows specified by this parameter into each subtable in turn and repeating this process until all subtable data is inserted. The default is 0, i.e., data is inserted into one subtable before moving to the next subtable.

- **insert_interval** : Specifies the insertion interval for interlaced insertion mode, in ms, default value is 0. Only effective when `-B/--interlace-rows` is greater than 0. It means that the data insertion thread will wait for the time interval specified by this value after inserting interlaced records for each subtable before proceeding to the next round of writing.

@@ -388,7 +284,7 @@ Parameters related to supertable creation are configured in the `super_tables` s
- **sqls** : Array of strings type, specifies the array of SQL statements to be executed after the supertable is successfully created; the table name specified in the SQL must be prefixed with the database name, otherwise an "unspecified database" error will occur.

#### Tag and Data Columns

Specify the configuration parameters for tag and data columns in `super_tables` under `columns` and `tag`.

@@ -423,7 +319,7 @@ Specify the configuration parameters for tag and data columns in `super_tables`

- **fillNull**: String type, specifies whether this column randomly inserts NULL values, can be specified as "true" or "false", only effective when generate_row_rule is 2.

#### Insertion Behavior Parameters

- **thread_count**: The number of threads for inserting data, default is 8.

@@ -431,13 +327,11 @@ Specify the configuration parameters for tag and data columns in `super_tables`

- **create_table_thread_count** : The number of threads for creating tables, default is 8.

- **connection_pool_size** : The number of pre-established connections with the TDengine server. If not configured, it defaults to the specified number of threads.

- **result_file** : The path to the result output file, default is ./output.txt.

- **confirm_parameter_prompt** : A toggle parameter that requires user confirmation after a prompt to continue. The value can be "yes" or "no", by default "no".

- **interlace_rows** : Enables interleaved insertion mode and specifies the number of rows to insert into each subtable at a time. Interleaved insertion mode refers to inserting the specified number of rows into each subtable in sequence and repeating this process until all subtable data has been inserted. The default is 0, meaning data is inserted into one subtable completely before moving to the next.
  This parameter can also be configured in `super_tables`; if configured, the settings in `super_tables` take higher priority and override the global settings.

- **insert_interval** :
@@ -447,67 +341,84 @@ Specify the configuration parameters for tag and data columns in `super_tables`
- **num_of_records_per_req** :
  The number of data rows requested per write to TDengine, default is 30000. If set too high, the TDengine client driver will return corresponding error messages, and this parameter needs to be reduced to meet the writing requirements.

- **prepare_rand** : The number of unique values in the generated random data. If it is 1, all data are the same. The default is 10000.

- **pre_load_tb_meta** : Whether to pre-load the metadata of subtables, values are "yes" or "no". When there are a large number of subtables, turning on this option can improve the writing speed.

### Query Parameters

In query scenarios, `filetype` must be set to `query`.
`query_times` specifies the number of times to run the query, numeric type.

Query scenarios can control the execution of slow query statements by setting the `kill_slow_query_threshold` and `kill_slow_query_interval` parameters: threshold controls that queries exceeding the specified exec_usec time will be killed by taosBenchmark, in seconds; interval controls the sleep time between checks to avoid continuous slow-query CPU consumption, in seconds.

For other common parameters, see [General Configuration Parameters](#general-configuration-parameters).

#### Specified Query

Configuration parameters for querying specified tables (which can be supertables, subtables, or regular tables) are set in `specified_table_query`.

- **mixed_query** : Query mode. "yes" is `Mixed Query`, "no" is `General Query`; default is "no".
  `General Query`:
  Each SQL in `sqls` starts `threads` threads to query this SQL; each thread exits after executing `query_times` queries, and the next SQL can only be executed after all threads executing the current SQL have finished.
  The total number of queries (`General Query`) = the number of `sqls` * `query_times` * `threads`
  `Mixed Query`:
  All SQL statements in `sqls` are divided into `threads` groups, with each thread executing one group; each SQL statement executes `query_times` queries.
  The total number of queries (`Mixed Query`) = the number of `sqls` * `query_times`
  For example, with 3 SQL statements, `threads` = 2, and `query_times` = 5, `General Query` executes 3 * 5 * 2 = 30 queries, while `Mixed Query` executes 3 * 5 = 15 queries. A minimal configuration sketch follows this list.
- **query_interval** : Query interval, in milliseconds, default is 0.
- **threads** : Number of threads executing the SQL query, default is 1.

- **sqls**:
  - **sql**: The SQL command to execute, required.
  - **result**: File to save the query results; if not specified, results are not saved.
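To make the two modes concrete, below is a minimal sketch of a query-scenario configuration assembled from the parameters documented above; the connection values and database name are illustrative, and the full query.json example later on this page is authoritative.

```bash
# Sketch: a General Query run with 3 threads repeating one SQL 10 times.
cat > /tmp/query.json <<'EOF'
{
  "filetype": "query",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "databases": "test",
  "query_times": 10,
  "specified_table_query": {
    "mixed_query": "no",
    "query_interval": 0,
    "threads": 3,
    "sqls": [
      { "sql": "select count(*) from test.meters", "result": "./query_res0.txt" }
    ]
  }
}
EOF
taosBenchmark -f /tmp/query.json
```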

#### Supertables

Configuration parameters for querying supertables are set in `super_table_query`.
The thread model of the supertable query is the same as the `General Query` mode described above, except that the SQL statements in `sqls` are expanded to cover all subtables of the supertable.

- **stblname** : The name of the supertable to query, required.

- **query_interval** : Query interval, in seconds, default is 0.

- **threads** : Number of threads executing the SQL query, default is 1.

- **sqls** :
  - **sql** : The SQL command to execute, required; for supertable queries, keep "xxxx" in the SQL command, and the program will automatically replace it with all subtable names of the supertable.
  - **result** : File to save the query results; if not specified, results are not saved.
  - **Note**: The maximum number of SQL statements in the `sqls` array is 100.

### Subscription Parameters

In the subscription scenario, `filetype` must be set to `subscribe`. For details of this parameter and other general parameters, see [General Configuration Parameters](#general-configuration-parameters).
The subscription configuration parameters are set under `tmq_info`. The parameters are as follows:

- **concurrent**: The number of consumers consuming subscriptions, i.e., the number of concurrent consumers. Default value is 1.
- **create_mode**: Consumer creation mode; `sequential`: created in sequence, `parallel`: created concurrently. Required, no default value.
- **group_mode**: Mode for generating the consumer groupId; `share`: all consumers generate only one groupId, `independent`: each consumer generates an independent groupId. Mandatory if `group.id` is not set, no default value.
- **poll_delay**: The polling timeout, in milliseconds, passed to tmq_consumer_poll; a negative number means the default timeout of 1 second.
- **enable.manual.commit**: Whether manual commit is allowed; `true`: manual commit is allowed (after consuming messages, manually call tmq_commit_sync to complete the commit), `false`: do not commit. Default value: false.
- **rows_file**: A file that stores the consumed data. It can be a full path or a relative path with a file name. The consumer serial number is appended to the actual saved file; for example, if rows_file is result, the actual file names are result_1 (consumer 1) and result_2 (consumer 2).
- **expect_rows**: The number of rows each consumer expects to consume, numeric type. Consumption exits when this number is reached; if not set, consumption continues indefinitely.
- **topic_list**: Specifies the list of topics to consume, array type.
  Example of the topic list format: `{"name": "topic1", "sql": "select * from test.meters;"}`.
  name: Specifies the topic name.
  sql: Specifies the SQL statement for creating the topic; ensure the SQL is correct, and the framework will automatically create the topic.

For the following parameters, see the description of [Subscription](../../../advanced-features/data-subscription/):

- **client.id**
- **auto.offset.reset**
- **enable.auto.commit**
- **msg.with.table.name**
- **auto.commit.interval.ms**
- **group.id**: If this value is not specified, the groupId is generated by the rule specified by `group_mode`; if this value is specified, the `group_mode` parameter is ignored.

### Data Type Comparison Table

| # | **TDengine** | **taosBenchmark**
| --- | :----------------: | :---------------:
| 1 | TIMESTAMP | timestamp
| 2 | INT | int

@@ -529,3 +440,97 @@ Configuration parameters for subscribing to specified tables (can specify supert
| 18 | JSON | json

Note: Data types in the taosBenchmark configuration file must be in lowercase to be recognized.

## Examples of Configuration Files

**Below are a few examples of configuration files:**

#### Insertion Example

<details>
<summary>insert.json</summary>

```json
{{#include /TDengine/tools/taos-tools/example/insert.json}}
```

</details>

#### Query Example

<details>
<summary>query.json</summary>

```json
{{#include /TDengine/tools/taos-tools/example/query.json}}
```

</details>

#### Subscription Example

<details>
<summary>tmq.json</summary>

```json
{{#include /TDengine/tools/taos-tools/example/tmq.json}}
```

</details>

For other JSON examples, see [here](https://github.com/taosdata/TDengine/tree/main/tools/taos-tools/example).

## Output Performance Indicators

#### Write Indicators

After writing is completed, a summary performance metric is output in the last two lines, in the following format:

```bash
SUCC: Spent 8.527298 (real 8.117379) seconds to insert rows: 10000000 with 8 thread(s) into test 1172704.41 (real 1231924.74) records/second
SUCC: insert delay, min: 19.6780ms, avg: 64.9390ms, p90: 94.6900ms, p95: 105.1870ms, p99: 130.6660ms, max: 157.0830ms
```

First line, write speed statistics:

- Spent: Total write time, in seconds, counted from the start of writing the first record to the end of the last record. Here a total of 8.527298 seconds were spent.
- Real: Total write time (engine calls only), excluding the time the testing framework spends preparing data. Counting purely the time spent on engine calls, it is 8.117379 seconds; the difference, 8.527298 - 8.117379 = 0.409919 seconds, is the time the testing framework spent preparing data.
- Rows: The total number of rows written, here 10 million records.
- Threads: The number of writing threads, here 8 threads writing simultaneously.
- Records/second write speed = `total number of rows written` / `total write time`; the "real" value in parentheses is analogous, indicating the pure engine write speed.

Second line, single write delay statistics:

- min: Minimum write delay.
- avg: Average write delay.
- p90: p90 percentile of write delay.
- p95: p95 percentile of write delay.
- p99: p99 percentile of write delay.
- max: Maximum write delay.

Through this series of indicators, the distribution of write request latency can be observed.

#### Query Indicators

The query performance test mainly outputs the QPS indicator of query request speed, in the following format:

```bash
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
```

- The first line shows the query request delay percentile distribution for each of the three threads executing 10000 queries; the SQL command is the test query statement.
- The second line indicates that the total query time is 26.9530 seconds and the query rate per second (QPS) is 1113.049 queries/second.
- If the `continue_if_fail` option is set to `yes` in the query, the last line also outputs the number of failed requests and the error rate, in a format like "error + number of failed requests (error rate)".
- QPS = number of successful requests / time spent (in seconds)
- Error rate = number of failed requests / (number of successful requests + number of failed requests)

#### Subscription Indicators

The subscription performance test mainly outputs consumer consumption speed indicators, in the following format:

```bash
INFO: consumer id 0 has poll total msgs: 376, period rate: 37.592 msgs/s, total rows: 3760000, period rate: 375924.815 rows/s
INFO: consumer id 1 has poll total msgs: 362, period rate: 36.131 msgs/s, total rows: 3620000, period rate: 361313.504 rows/s
INFO: consumer id 2 has poll total msgs: 364, period rate: 36.378 msgs/s, total rows: 3640000, period rate: 363781.731 rows/s
INFO: consumerId: 0, consume msgs: 1000, consume rows: 10000000
INFO: consumerId: 1, consume msgs: 1000, consume rows: 10000000
INFO: consumerId: 2, consume msgs: 1000, consume rows: 10000000
INFO: Consumed total msgs: 3000, total rows: 30000000
```

- Lines 1 to 3 show the real-time consumption speed of each consumer; msgs/s is the number of messages consumed (each message contains multiple rows of data), and rows/s is the consumption speed calculated in rows.
- Lines 4 to 6 show the overall statistics of each consumer after the test completes, including the total number of messages and rows consumed.
- Line 7 shows the overall statistics of all consumers: `msgs` is the total number of messages consumed, `rows` is the total number of rows consumed.
@@ -55,14 +55,13 @@ join_clause:

window_clause: {
    SESSION(ts_col, tol_val)
  | STATE_WINDOW(col) [TRUE_FOR(true_for_duration)]
  | INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
  | EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition [TRUE_FOR(true_for_duration)]
  | COUNT_WINDOW(count_val[, sliding_val])
}

interp_clause:
    RANGE(ts_val [, ts_val] [, surrounding_time_val]) EVERY(every_val) FILL(fill_mod_and_val)

partition_by_clause:
    PARTITION BY partition_by_expr [, partition_by_expr] ...

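As a sketch of the new `TRUE_FOR` option shown in the grammar above, the following keeps only event windows that persist for at least three seconds; the table and column names are illustrative, and the query is run through the CLI for convenience.

```bash
# Sketch: event windows on voltage that must last at least 3 seconds.
taos -s "select _wstart, _wend, count(*) from test.d0 \
         event_window start with voltage > 220 end with voltage <= 220 true_for(3s);"
```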
@ -1967,7 +1967,7 @@ ignore_null_values: {
|
|||
- For queries on tables with composite primary keys, if there are data with the same timestamp, only the data with the smallest composite primary key participates in the calculation.
|
||||
- INTERP query supports NEAR FILL mode, i.e., when FILL is needed, it uses the data closest to the current time point for interpolation. When the timestamps before and after are equally close to the current time slice, FILL the previous row's value. This mode is not supported in stream computing and window queries. For example: SELECT INTERP(col) FROM tb RANGE('2023-01-01 00:00:00', '2023-01-01 00:10:00') FILL(NEAR).(Supported from version 3.3.4.9).
|
||||
- INTERP can only use the pseudocolumn `_irowts_origin` when using FILL PREV/NEXT/NEAR modes. `_irowts_origin` is supported from version 3.3.4.9.
|
||||
- INTERP `RANGE` clause supports the expansion of the time range (supported from version 3.3.4.9), such as `RANGE('2023-01-01 00:00:00', 10s)` means to find data 10s before and after the time point '2023-01-01 00:00:00' for interpolation, FILL PREV/NEXT/NEAR respectively means to look for data forward/backward/around the time point, if there is no data around the time point, then use the value specified by FILL for interpolation, therefore the FILL clause must specify a value at this time. For example: SELECT INTERP(col) FROM tb RANGE('2023-01-01 00:00:00', 10s) FILL(PREV, 1). Currently, only the combination of time point and time range is supported, not the combination of time interval and time range, i.e., RANGE('2023-01-01 00:00:00', '2023-02-01 00:00:00', 1h) is not supported. The specified time range rules are similar to EVERY, the unit cannot be year or month, the value cannot be 0, and cannot have quotes. When using this extension, other FILL modes except FILL PREV/NEXT/NEAR are not supported, and the EVERY clause cannot be specified.
|
||||
- The INTERP `RANGE` clause supports expanding the time range (supported from version 3.3.4.9). For example, `RANGE('2023-01-01 00:00:00', 10s)` means that only data within 10s around the time point '2023-01-01 00:00:00' can be used for interpolation. `FILL PREV/NEXT/NEAR` means to look for data before/after/around the time point, respectively. If there is no data around the time point, the default value specified by `FILL` is used for interpolation, so the `FILL` clause must specify a default value in this case. For example: SELECT INTERP(col) FROM tb RANGE('2023-01-01 00:00:00', 10s) FILL(PREV, 1). Starting from version 3.3.6.0, the combination of a time period and a time range is supported: when interpolating each point within the time period, the time range requirement must be met. Prior versions only supported a single time point and its time range. The allowed values for the time range are similar to `EVERY`: the unit cannot be year or month, the value must be greater than 0, and it cannot be quoted. When using this extension, `FILL` modes other than `PREV/NEXT/NEAR` are not supported.
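As an illustrative sketch (assuming version 3.3.6.0 or later and a table tb with column col), the combined form might look like this:

```sql
-- Interpolate every 1m across the hour; each point may only use data
-- within 10s around it, falling back to the value 1 when none exists.
SELECT INTERP(col) FROM tb
  RANGE('2023-01-01 00:00:00', '2023-01-01 01:00:00', 10s)
  EVERY(1m) FILL(PREV, 1);
```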
|
||||
|
||||
### LAST
|
||||
|
||||
|
@ -2125,6 +2125,28 @@ UNIQUE(expr)
|
|||
|
||||
**Applicable to**: Tables and supertables.
|
||||
|
||||
### COLS
|
||||
|
||||
```sql
|
||||
COLS(func(expr), output_expr1, [, output_expr2] ... )
|
||||
```
|
||||
|
||||
**Function Description**: Executes the expressions output_expr1 [, output_expr2] ... on the data row where the result of func(expr) is located and returns their results; the result of func(expr) itself is not output.
|
||||
|
||||
**Return Data Type**: Returns multiple columns of data, and the data type of each column is the type of the result returned by the corresponding expression.
|
||||
|
||||
**Applicable Data Types**: All type fields.
|
||||
|
||||
**Applicable to**: Tables and supertables.
|
||||
|
||||
**Usage Instructions**:
|
||||
- The func parameter must be a single-row selection function (a selection function whose output is a single row; for example, last is a single-row selection function, while top is a multi-row selection function).
|
||||
- Mainly used to obtain the associated columns of multiple selection function results in a single SQL query. For example, select cols(max(c0), ts), cols(max(c1), ts) from ... returns the (possibly different) ts values at which columns c0 and c1 reach their maximum values.
|
||||
- The result of the func parameter is not returned. If you need to output the result of func, add it as an extra output column, for example: select first(ts), cols(first(ts), c1) from ...
|
||||
- When there is only one output column, you can set an alias for the function, for example: "select cols(first(ts), c1) as c11 from ...".
|
||||
- When there are one or more output columns, you can set an alias for each output column, for example: "select cols(first(ts), c1 as c11, c2 as c22) from ...".
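A minimal sketch, assuming a supertable meters with columns ts, c0, and c1: the query below returns the timestamps at which c0 and c1 reach their respective maxima, alongside the maxima themselves.

```sql
-- cols() outputs the ts associated with each maximum; the max values
-- are listed as separate output columns since cols() does not return them.
SELECT COLS(MAX(c0), ts) AS ts_c0, MAX(c0),
       COLS(MAX(c1), ts) AS ts_c1, MAX(c1)
FROM meters;
```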
|
||||
|
||||
|
||||
## Time-Series Specific Functions
|
||||
|
||||
Time-Series specific functions are tailor-made by TDengine to meet the query scenarios of time-series data. In general databases, implementing similar functionalities usually requires complex query syntax and is inefficient. TDengine has built these functionalities into functions, greatly reducing the user's cost of use.
|
||||
|
|
|
@ -53,9 +53,9 @@ The syntax for the window clause is as follows:
|
|||
```sql
|
||||
window_clause: {
|
||||
SESSION(ts_col, tol_val)
|
||||
| STATE_WINDOW(col)
|
||||
| STATE_WINDOW(col) [TRUE_FOR(true_for_duration)]
|
||||
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [FILL(fill_mod_and_val)]
|
||||
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition
|
||||
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition [TRUE_FOR(true_for_duration)]
|
||||
| COUNT_WINDOW(count_val[, sliding_val])
|
||||
}
|
||||
```
|
||||
|
@ -177,6 +177,12 @@ TDengine also supports using CASE expressions in state quantities, which can exp
|
|||
SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END status FROM meters PARTITION BY tbname STATE_WINDOW(CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END);
|
||||
```
|
||||
|
||||
The state window supports using the TRUE_FOR parameter to set its minimum duration. If the window's duration is less than the specified value, it will be discarded automatically and no result will be returned. For example, setting the minimum duration to 3 seconds:
|
||||
|
||||
```sql
|
||||
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status) TRUE_FOR (3s);
|
||||
```
|
||||
|
||||
### Session Window
|
||||
|
||||
The session window is determined based on the timestamp primary key values of the records. As shown in the diagram below, if the continuous interval of the timestamps is set to be less than or equal to 12 seconds, the following 6 records form 2 session windows, which are: [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30]. This is because the interval between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, exceeding the continuous interval (12 seconds).
|
||||
|
@ -212,6 +218,12 @@ select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c
|
|||
<Image img={imgStep04} alt=""/>
|
||||
</figure>
|
||||
|
||||
The event window supports using the TRUE_FOR parameter to set its minimum duration. If the window's duration is less than the specified value, it will be discarded automatically and no result will be returned. For example, setting the minimum duration to 3 seconds:
|
||||
|
||||
```sql
|
||||
select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c2 < 10 true_for (3s);
|
||||
```
|
||||
|
||||
### Count Window
|
||||
|
||||
Count windows divide data into windows based on a fixed number of data rows. By default, data is sorted by timestamp, then divided into multiple windows based on the value of count_val, and aggregate calculations are performed. count_val represents the maximum number of data rows in each count window; if the total number of data rows is not divisible by count_val, the last window will have fewer rows than count_val. sliding_val is a constant that represents the number of rows the window slides, similar to the SLIDING in interval.
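For example (a sketch assuming a table t), the query below aggregates every 10 rows, with the window sliding forward 5 rows at a time:

```sql
SELECT _wstart, _wend, COUNT(*) FROM t COUNT_WINDOW(10, 5);
```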
|
||||
|
|
|
@ -9,9 +9,9 @@ import imgStream from './assets/stream-processing-01.png';
|
|||
## Creating Stream Computing
|
||||
|
||||
```sql
|
||||
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery
|
||||
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name[(field1_name, field2_name [PRIMARY KEY], ...)] [TAGS (create_definition [, create_definition] ...)] SUBTABLE(expression) AS subquery [notification_definition]
|
||||
stream_options: {
|
||||
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
||||
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time | FORCE_WINDOW_CLOSE]
|
||||
WATERMARK time
|
||||
IGNORE EXPIRED [0|1]
|
||||
DELETE_MARK time
|
||||
|
@ -58,6 +58,10 @@ window_clause: {
|
|||
|
||||
Where SESSION is a session window; tol_val is the maximum gap of the time interval. All data within the tol_val time interval belong to the same window. If the time between two consecutive data points exceeds tol_val, the next window starts automatically. The window's _wend equals the last data point's time plus tol_val.
|
||||
|
||||
STATE_WINDOW is a state window. The col is used to identify the state value; rows with the same state value belong to the same state window. When the value of col changes, the current window closes and the next window opens automatically.
|
||||
|
||||
INTERVAL is a time window, which can be further divided into sliding time windows and tumbling time windows. The INTERVAL clause is used to specify the equal time period of the window, and the SLIDING clause is used to specify the time by which the window slides forward. When interval_val equals sliding_val, the time window is a tumbling time window; otherwise, it is a sliding time window. Note: sliding_val must be less than or equal to interval_val.
|
||||
|
||||
EVENT_WINDOW is an event window, defined by start and end conditions. The window starts when the start_trigger_condition is met and closes when the end_trigger_condition is met. start_trigger_condition and end_trigger_condition can be any condition expression supported by TDengine and can include different columns.
|
||||
|
||||
COUNT_WINDOW is a count window, dividing data into windows by a fixed number of rows. count_val is a constant positive integer that must be at least 2 and less than 2147483648; it represents the maximum number of rows each COUNT_WINDOW contains. If the total number of rows is not divisible by count_val, the last window will have fewer rows than count_val. sliding_val is a constant representing the number of rows the window slides, similar to SLIDING in INTERVAL.
|
||||
|
@ -81,6 +85,8 @@ CREATE STREAM streams1 IGNORE EXPIRED 1 WATERMARK 100s INTO streamt1 AS
|
|||
SELECT _wstart, count(*), avg(voltage) from meters PARTITION BY tbname COUNT_WINDOW(10);
|
||||
```
|
||||
|
||||
The notification_definition clause specifies the addresses to which notifications should be sent when designated events occur during window computations, such as window opening or closing. For more details, see [Stream Computing Event Notifications](#stream-computing-event-notifications).
|
||||
|
||||
## Stream Computation Partitioning
|
||||
|
||||
You can use `PARTITION BY TBNAME`, tags, regular columns, or expressions to partition a stream for multi-partition computation. Each partition's timeline and window are independent, aggregating separately, and writing into different subtables of the target table.
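A hypothetical sketch: the stream below partitions meters by tbname so that each device's 1-minute averages are computed on an independent timeline and written to separate subtables of the (assumed) target table avg_out.

```sql
CREATE STREAM avg_by_device INTO avg_out AS
  SELECT _wstart, AVG(current) FROM meters
  PARTITION BY tbname INTERVAL(1m);
```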
|
||||
|
@ -301,3 +307,223 @@ CREATE SNODE ON DNODE [id]
|
|||
|
||||
The id is the serial number of the dnode in the cluster. Please be mindful of the selected dnode, as the intermediate state of stream computing will automatically be backed up on it.
|
||||
Starting from version 3.3.4.0, in a multi-replica environment, creating a stream performs an **existence check** of the snode: the snode must be created first, otherwise the stream cannot be created.
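For example, assuming dnode 1 is the node chosen to host the stream state, run the following before creating any stream in a multi-replica environment:

```sql
CREATE SNODE ON DNODE 1;
```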
|
||||
|
||||
## Stream Computing Event Notifications
|
||||
|
||||
### User Guide
|
||||
|
||||
Stream computing supports sending event notifications to external systems when windows open or close. Users can specify the events to be notified and the target addresses for receiving notification messages using the notification_definition clause.
|
||||
|
||||
```sql
|
||||
notification_definition:
|
||||
NOTIFY (url [, url] ...) ON (event_type [, event_type] ...) [notification_options]
|
||||
|
||||
event_type:
|
||||
'WINDOW_OPEN'
|
||||
| 'WINDOW_CLOSE'
|
||||
|
||||
notification_options: {
|
||||
NOTIFY_HISTORY [0|1]
|
||||
ON_FAILURE [DROP|PAUSE]
|
||||
}
|
||||
```
|
||||
|
||||
The rules for the syntax above are as follows:
|
||||
1. `url`: Specifies the target address for the notification. It must include the protocol, IP or domain name, port, and may include a path and parameters. Currently, only the websocket protocol is supported. For example: 'ws://localhost:8080', 'ws://localhost:8080/notify', 'wss://localhost:8080/notify?key=foo'.
|
||||
2. `event_type`: Defines the events that trigger notifications. Supported event types include:
|
||||
1. 'WINDOW_OPEN': Window open event; triggered when any type of window opens.
|
||||
2. 'WINDOW_CLOSE': Window close event; triggered when any type of window closes.
|
||||
3. `NOTIFY_HISTORY`: Controls whether to trigger notifications during the computation of historical data. The default value is 0, which means no notifications are sent.
|
||||
4. `ON_FAILURE`: Determines whether to allow dropping some events if sending notifications fails (e.g., in poor network conditions). The default value is `PAUSE`:
|
||||
1. PAUSE means that the stream computing task is paused if sending a notification fails. taosd will retry until the notification is successfully delivered and the task resumes.
|
||||
2. DROP means that if sending a notification fails, the event information is discarded, and the stream computing task continues running unaffected.
|
||||
|
||||
For example, the following creates a stream that computes the per-minute average current from electric meters and sends notifications to two target addresses when the window opens and closes. It does not send notifications for historical data and does not allow dropping notifications on failure:
|
||||
|
||||
```sql
|
||||
CREATE STREAM avg_current_stream FILL_HISTORY 1
|
||||
AS SELECT _wstart, _wend, AVG(current) FROM meters
|
||||
INTERVAL (1m)
|
||||
NOTIFY ('ws://localhost:8080/notify', 'wss://192.168.1.1:8080/notify?key=foo')
|
||||
ON ('WINDOW_OPEN', 'WINDOW_CLOSE')
|
||||
NOTIFY_HISTORY 0
|
||||
ON_FAILURE PAUSE;
|
||||
```
|
||||
|
||||
When the specified events are triggered, taosd pushes a JSON message to the given URL(s). A single message may contain events from several streams, and the event types may differ.
|
||||
|
||||
The details of the event information depend on the type of window:
|
||||
|
||||
1. Time Window: At the opening, the start time is sent; at the closing, the start time, end time, and computation result are sent.
|
||||
2. State Window: At the opening, the start time, previous window's state, and current window's state are sent; at closing, the start time, end time, computation result, current window state, and next window state are sent.
|
||||
3. Session Window: At the opening, the start time is sent; at the closing, the start time, end time, and computation result are sent.
|
||||
4. Event Window: At the opening, the start time along with the data values and corresponding condition index that triggered the window opening are sent; at the closing, the start time, end time, computation result, and the triggering data value and condition index for window closure are sent.
|
||||
5. Count Window: At the opening, the start time is sent; at the closing, the start time, end time, and computation result are sent.
|
||||
|
||||
An example structure for the notification message is shown below:
|
||||
|
||||
```json
|
||||
{
|
||||
"messageId": "unique-message-id-12345",
|
||||
"timestamp": 1733284887203,
|
||||
"streams": [
|
||||
{
|
||||
"streamName": "avg_current_stream",
|
||||
"events": [
|
||||
{
|
||||
"tableName": "t_a667a16127d3b5a18988e32f3e76cd30",
|
||||
"eventType": "WINDOW_OPEN",
|
||||
"eventTime": 1733284887097,
|
||||
"windowId": "window-id-67890",
|
||||
"windowType": "Time",
|
||||
"windowStart": 1733284800000
|
||||
},
|
||||
{
|
||||
"tableName": "t_a667a16127d3b5a18988e32f3e76cd30",
|
||||
"eventType": "WINDOW_CLOSE",
|
||||
"eventTime": 1733284887197,
|
||||
"windowId": "window-id-67890",
|
||||
"windowType": "Time",
|
||||
"windowStart": 1733284800000,
|
||||
"windowEnd": 1733284860000,
|
||||
"result": {
|
||||
"_wstart": 1733284800000,
|
||||
"avg(current)": 1.3
|
||||
}
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"streamName": "max_voltage_stream",
|
||||
"events": [
|
||||
{
|
||||
"tableName": "t_96f62b752f36e9b16dc969fe45363748",
|
||||
"eventType": "WINDOW_OPEN",
|
||||
"eventTime": 1733284887231,
|
||||
"windowId": "window-id-13579",
|
||||
"windowType": "Event",
|
||||
"windowStart": 1733284800000,
|
||||
"triggerCondition": {
|
||||
"conditionIndex": 0,
|
||||
"fieldValue": {
|
||||
"c1": 10,
|
||||
"c2": 15
|
||||
}
|
||||
}
|
||||
},
|
||||
{
|
||||
"tableName": "t_96f62b752f36e9b16dc969fe45363748",
|
||||
"eventType": "WINDOW_CLOSE",
|
||||
"eventTime": 1733284887231,
|
||||
"windowId": "window-id-13579",
|
||||
"windowType": "Event",
|
||||
"windowStart": 1733284800000,
|
||||
"windowEnd": 1733284810000,
|
||||
"triggerCondition": {
|
||||
"conditionIndex": 1,
|
||||
"fieldValue": {
|
||||
"c1": 20
|
||||
"c2": 3
|
||||
}
|
||||
},
|
||||
"result": {
|
||||
"_wstart": 1733284800000,
|
||||
"max(voltage)": 220
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
The following sections explain the fields in the notification message.
|
||||
|
||||
### Root-Level Field Descriptions
|
||||
|
||||
1. "messageId": A string that uniquely identifies the notification message. It ensures that the entire message can be tracked and de-duplicated.
|
||||
2. "timestamp": A long integer timestamp representing the time when the notification message was generated, accurate to the millisecond (i.e., the number of milliseconds since '00:00, Jan 1 1970 UTC').
|
||||
3. "streams": An array containing the event information for multiple stream tasks. (See the following sections for details.)
|
||||
|
||||
### "stream" Object Field Descriptions
|
||||
|
||||
1. "streamName": A string representing the name of the stream task, used to identify which stream the events belong to.
|
||||
2. "events": An array containing the list of event objects for the stream task. Each event object includes detailed information. (See the next sections for details.)
|
||||
|
||||
### "event" Object Field Descriptions
|
||||
|
||||
#### Common Fields
|
||||
|
||||
These fields are common to all event objects.
|
||||
1. "tableName": A string indicating the name of the target subtable.
|
||||
2. "eventType": A string representing the event type ("WINDOW_OPEN", "WINDOW_CLOSE", or "WINDOW_INVALIDATION").
|
||||
3. "eventTime": A long integer timestamp that indicates when the event was generated, accurate to the millisecond (i.e., the number of milliseconds since '00:00, Jan 1 1970 UTC').
|
||||
4. "windowId": A string representing the unique identifier for the window. This ID ensures that the open and close events for the same window can be correlated. In the case that taosd restarts due to a fault, some events may be sent repeatedly, but the windowId remains constant for the same window.
|
||||
5. "windowType": A string that indicates the window type ("Time", "State", "Session", "Event", or "Count").
|
||||
|
||||
#### Fields for Time Windows
|
||||
|
||||
These fields are present only when "windowType" is "Time".
|
||||
1. When "eventType" is "WINDOW_OPEN", the following field is included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window, matching the time precision of the result table.
|
||||
2. When "eventType" is "WINDOW_CLOSE", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "windowEnd": A long integer timestamp representing the end time of the window.
|
||||
1. "result": An object containing key-value pairs of the computed result columns and their corresponding values.
|
||||
|
||||
#### Fields for State Windows
|
||||
|
||||
These fields are present only when "windowType" is "State".
|
||||
1. When "eventType" is "WINDOW_OPEN", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "prevState": A value of the same type as the state column, representing the state of the previous window. If there is no previous window (i.e., this is the first window), it will be NULL.
|
||||
1. "curState": A value of the same type as the state column, representing the current window's state.
|
||||
2. When "eventType" is "WINDOW_CLOSE", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "windowEnd": A long integer timestamp representing the end time of the window.
|
||||
1. "curState": The current window's state.
|
||||
1. "nextState": The state for the next window.
|
||||
1. "result": An object containing key-value pairs of the computed result columns and their corresponding values.
|
||||
|
||||
#### Fields for Session Windows
|
||||
|
||||
These fields are present only when "windowType" is "Session".
|
||||
1. When "eventType" is "WINDOW_OPEN", the following field is included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
2. When "eventType" is "WINDOW_CLOSE", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "windowEnd": A long integer timestamp representing the end time of the window.
|
||||
1. "result": An object containing key-value pairs of the computed result columns and their corresponding values.
|
||||
|
||||
#### Fields for Event Windows
|
||||
|
||||
These fields are present only when "windowType" is "Event".
|
||||
1. When "eventType" is "WINDOW_OPEN", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "triggerCondition": An object that provides information about the condition that triggered the window to open. It includes:
|
||||
1. "conditionIndex": An integer representing the index of the condition that triggered the window, starting from 0.
|
||||
1. "fieldValue": An object containing key-value pairs of the column names related to the condition and their respective values.
|
||||
2. When "eventType" is "WINDOW_CLOSE", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "windowEnd": A long integer timestamp representing the end time of the window.
|
||||
1. "triggerCondition": An object that provides information about the condition that triggered the window to close. It includes:
|
||||
1. "conditionIndex": An integer representing the index of the condition that triggered the closure, starting from 0.
|
||||
1. "fieldValue": An object containing key-value pairs of the related column names and their respective values.
|
||||
1. "result": An object containing key-value pairs of the computed result columns and their corresponding values.
|
||||
|
||||
#### Fields for Count Windows
|
||||
|
||||
These fields are present only when "windowType" is "Count".
|
||||
1. When "eventType" is "WINDOW_OPEN", the following field is included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
2. When "eventType" is "WINDOW_CLOSE", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "windowEnd": A long integer timestamp representing the end time of the window.
|
||||
1. "result": An object containing key-value pairs of the computed result columns and their corresponding values.
|
||||
|
||||
#### Fields for Window Invalidation
|
||||
|
||||
Due to scenarios such as data disorder, updates, or deletions during stream computing, windows that have already been generated might be removed or their results need to be recalculated. In such cases, a notification with the eventType "WINDOW_INVALIDATION" is sent to inform which windows have been invalidated.
|
||||
For events with "eventType" as "WINDOW_INVALIDATION", the following fields are included:
|
||||
1. "windowStart": A long integer timestamp representing the start time of the window.
|
||||
1. "windowEnd": A long integer timestamp representing the end time of the window.
|
||||
|
|
|
@ -43,7 +43,8 @@ TDengine supports `UNION ALL` and `UNION` operators. UNION ALL combines the resu
|
|||
| 9 | LIKE | BINARY, NCHAR, and VARCHAR | Matches the specified pattern string with wildcard |
|
||||
| 10 | NOT LIKE | BINARY, NCHAR, and VARCHAR | Does not match the specified pattern string with wildcard |
|
||||
| 11 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
||||
| 12 | CONTAINS | JSON | Whether a key exists in JSON |
|
||||
| 12 | REGEXP, NOT REGEXP | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
||||
| 13 | CONTAINS | JSON | Whether a key exists in JSON |
|
||||
|
||||
LIKE conditions use wildcard strings for matching checks, with the following rules:
|
||||
|
||||
|
@ -51,7 +52,7 @@ LIKE conditions use wildcard strings for matching checks, with the following rul
|
|||
- If you want to match an underscore character that is originally in the string, you can write it as \_ in the wildcard string, i.e., add a backslash to escape it.
|
||||
- The wildcard string cannot exceed 100 bytes in length. Avoid overly long wildcard strings, as they may severely affect the performance of the LIKE operation.
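A short sketch, assuming a table meters with a VARCHAR column location: the first pattern treats `_` as a single-character wildcard, while the second escapes it to match a literal underscore.

```sql
SELECT * FROM meters WHERE location LIKE 'floor_1%';   -- '_' matches any single character
SELECT * FROM meters WHERE location LIKE 'floor\_1%';  -- matches a literal '_'
```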
|
||||
|
||||
MATCH and NMATCH conditions use regular expressions for matching, with the following rules:
|
||||
MATCH/REGEXP and NMATCH/NOT REGEXP conditions use regular expressions for matching, with the following rules:
|
||||
|
||||
- Supports regular expressions that comply with the POSIX standard; see Regular Expressions for the specific standard.
|
||||
- MATCH returns TRUE when the value matches the regular expression; NMATCH returns TRUE when the value does not match.
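For example (assuming the same meters table), the following uses a POSIX regular expression; MATCH keeps the matching rows and NMATCH keeps the rest:

```sql
SELECT * FROM meters WHERE location MATCH '^floor[0-9]+$';
SELECT * FROM meters WHERE location NMATCH '^floor[0-9]+$';
```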
|
||||
|
|
|
@ -33,6 +33,7 @@ The JDBC driver implementation for TDengine strives to be consistent with relati
|
|||
|
||||
| taos-jdbcdriver Version | Major Changes | TDengine Version |
|
||||
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
|
||||
| 3.5.3 | Support unsigned data types in WebSocket connections. | - |
|
||||
| 3.5.2 | Fixed WebSocket result set free bug. | - |
|
||||
| 3.5.1 | Fixed the getObject issue in data subscription. | - |
|
||||
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
|
||||
|
@ -128,24 +129,27 @@ Please refer to the specific error codes:
|
|||
|
||||
TDengine currently supports timestamp, numeric, character, and boolean types; the corresponding Java type conversions are as follows:
|
||||
|
||||
| TDengine DataType | JDBCType |
|
||||
| ----------------- | ------------------ |
|
||||
| TIMESTAMP | java.sql.Timestamp |
|
||||
| INT | java.lang.Integer |
|
||||
| BIGINT | java.lang.Long |
|
||||
| FLOAT | java.lang.Float |
|
||||
| DOUBLE | java.lang.Double |
|
||||
| SMALLINT | java.lang.Short |
|
||||
| TINYINT | java.lang.Byte |
|
||||
| BOOL | java.lang.Boolean |
|
||||
| BINARY | byte array |
|
||||
| NCHAR | java.lang.String |
|
||||
| JSON | java.lang.String |
|
||||
| VARBINARY | byte[] |
|
||||
| GEOMETRY | byte[] |
|
||||
| TDengine DataType | JDBCType | Remark |
|
||||
| ----------------- | -------------------- | --------------------------------------- |
|
||||
| TIMESTAMP | java.sql.Timestamp | |
|
||||
| BOOL | java.lang.Boolean | |
|
||||
| TINYINT | java.lang.Byte | |
|
||||
| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections |
|
||||
| SMALLINT | java.lang.Short | |
|
||||
| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections |
|
||||
| INT | java.lang.Integer | |
|
||||
| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections |
|
||||
| BIGINT | java.lang.Long | |
|
||||
| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections |
|
||||
| FLOAT | java.lang.Float | |
|
||||
| DOUBLE | java.lang.Double | |
|
||||
| BINARY | byte array | |
|
||||
| NCHAR | java.lang.String | |
|
||||
| JSON | java.lang.String | only supported in tags |
|
||||
| VARBINARY | byte[] | |
|
||||
| GEOMETRY | byte[] | |
|
||||
|
||||
**Note**: JSON type is only supported in tags.
|
||||
Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
|
||||
**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
|
||||
GEOMETRY type is binary data in little endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
|
||||
For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
|
||||
For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
|
||||
|
|
|
@ -25,6 +25,8 @@ Support all platforms that can run Node.js.
|
|||
|
||||
| Node.js Connector Version | Major Changes | TDengine Version |
|
||||
| ------------------------- | ------------------------------------------------------------------------ | --------------------------- |
|
||||
| 3.1.4 | Modified the readme. | - |
|
||||
| 3.1.3 | Upgraded the es5-ext version to address vulnerabilities in the lower version. | - |
|
||||
| 3.1.2 | Optimized data protocol and parsing, significantly improved performance. | - |
|
||||
| 3.1.1 | Optimized data transmission performance. | 3.3.2.0 and higher versions |
|
||||
| 3.1.0 | New release, supports WebSocket connection. | 3.2.0.0 and higher versions |
|
||||
|
@ -132,16 +134,20 @@ Node.js connector (`@tdengine/websocket`), which connects to a TDengine instance
|
|||
In addition to obtaining a connection through a specified URL, you can also use WSConfig to specify parameters when establishing a connection.
|
||||
|
||||
```js
|
||||
try {
|
||||
let url = 'ws://127.0.0.1:6041'
|
||||
let conf = WsSql.NewConfig(url)
|
||||
conf.setUser('root')
|
||||
conf.setPwd('taosdata')
|
||||
conf.setDb('db')
|
||||
conf.setTimeOut(500)
|
||||
let wsSql = await WsSql.open(conf);
|
||||
} catch (e) {
|
||||
console.error(e);
|
||||
const taos = require("@tdengine/websocket");
|
||||
|
||||
async function createConnect() {
|
||||
try {
|
||||
let url = 'ws://127.0.0.1:6041'
|
||||
let conf = new taos.WSConfig(url)
|
||||
conf.setUser('root')
|
||||
conf.setPwd('taosdata')
|
||||
conf.setDb('db')
|
||||
conf.setTimeOut(500)
|
||||
let wsSql = await taos.sqlConnect(conf)
|
||||
} catch (e) {
|
||||
console.error(e);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
|
|
|
@ -25,6 +25,7 @@ import RequestId from "../../assets/resources/_request_id.mdx";
|
|||
|
||||
| Connector Version | Major Changes | TDengine Version |
|
||||
|-------------------|------------------------------------------------------------|--------------------|
|
||||
| 3.1.6 | Optimize WebSocket connection message handling. | - |
|
||||
| 3.1.5 | Fix WebSocket encoding error for Chinese character length. | - |
|
||||
| 3.1.4 | Improved WebSocket query and insert performance. | 3.3.2.0 and higher |
|
||||
| 3.1.3 | Supported WebSocket auto-reconnect. | - |
|
||||
|
@ -39,25 +40,25 @@ For error reporting in other TDengine modules, please refer to [Error Codes](../
|
|||
|
||||
## Data Type Mapping
|
||||
|
||||
| TDengine DataType | C# Type |
|
||||
|-------------------|------------------|
|
||||
| TIMESTAMP | DateTime |
|
||||
| TINYINT | sbyte |
|
||||
| SMALLINT | short |
|
||||
| INT | int |
|
||||
| BIGINT | long |
|
||||
| TINYINT UNSIGNED | byte |
|
||||
| SMALLINT UNSIGNED | ushort |
|
||||
| INT UNSIGNED | uint |
|
||||
| BIGINT UNSIGNED | ulong |
|
||||
| FLOAT | float |
|
||||
| DOUBLE | double |
|
||||
| BOOL | bool |
|
||||
| BINARY | byte[] |
|
||||
| NCHAR | string (utf-8 encoded) |
|
||||
| JSON | byte[] |
|
||||
| VARBINARY | byte[] |
|
||||
| GEOMETRY | byte[] |
|
||||
| TDengine DataType | C# Type |
|
||||
|-------------------|----------|
|
||||
| TIMESTAMP | DateTime |
|
||||
| TINYINT | sbyte |
|
||||
| SMALLINT | short |
|
||||
| INT | int |
|
||||
| BIGINT | long |
|
||||
| TINYINT UNSIGNED | byte |
|
||||
| SMALLINT UNSIGNED | ushort |
|
||||
| INT UNSIGNED | uint |
|
||||
| BIGINT UNSIGNED | ulong |
|
||||
| FLOAT | float |
|
||||
| DOUBLE | double |
|
||||
| BOOL | bool |
|
||||
| BINARY | byte[] |
|
||||
| NCHAR | string |
|
||||
| JSON | byte[] |
|
||||
| VARBINARY | byte[] |
|
||||
| GEOMETRY | byte[] |
|
||||
|
||||
**Note**: JSON type is only supported in tags.
|
||||
The GEOMETRY type is binary data in little endian byte order, conforming to the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
|
||||
|
|
|
@ -138,7 +138,7 @@ The table below explains how the ODBC connector maps server data types to defaul
|
|||
| BIGINT | SQL_BIGINT | SQL_C_SBIGINT |
|
||||
| BIGINT UNSIGNED | SQL_BIGINT | SQL_C_UBIGINT |
|
||||
| FLOAT | SQL_REAL | SQL_C_FLOAT |
|
||||
| DOUBLE | SQL_REAL | SQL_C_DOUBLE |
|
||||
| DOUBLE | SQL_DOUBLE | SQL_C_DOUBLE |
|
||||
| BINARY | SQL_BINARY | SQL_C_BINARY |
|
||||
| SMALLINT | SQL_SMALLINT | SQL_C_SSHORT |
|
||||
| SMALLINT UNSIGNED | SQL_SMALLINT | SQL_C_USHORT |
|
||||
|
@ -146,33 +146,145 @@ The table below explains how the ODBC connector maps server data types to defaul
|
|||
| TINYINT UNSIGNED | SQL_TINYINT | SQL_C_UTINYINT |
|
||||
| BOOL | SQL_BIT | SQL_C_BIT |
|
||||
| NCHAR | SQL_VARCHAR | SQL_C_CHAR |
|
||||
| JSON | SQL_VARCHAR | SQL_C_CHAR |
|
||||
| VARCHAR | SQL_VARCHAR | SQL_C_CHAR |
|
||||
| JSON | SQL_WVARCHAR | SQL_C_WCHAR |
|
||||
| GEOMETRY | SQL_VARBINARY | SQL_C_BINARY |
|
||||
| VARBINARY | SQL_VARBINARY | SQL_C_BINARY |
|
||||
|
||||
## API Reference
|
||||
|
||||
This section summarizes the ODBC API by functionality. For a complete ODBC API reference, please visit the [ODBC Programmer's Reference page](http://msdn.microsoft.com/en-us/library/ms714177.aspx).
|
||||
### API List
|
||||
|
||||
### Data Source and Driver Management
|
||||
- **Currently exported ODBC functions are**:
|
||||
|
||||
| ODBC/Setup API | Linux | macOS | Windows | Note |
|
||||
| :----- | :---- | :---- | :---- | :---- |
|
||||
| ConfigDSN | ❌ | ❌ | ✅ | |
|
||||
| ConfigDriver | ❌ | ❌ | ✅ | |
|
||||
| ConfigTranslator | ❌ | ❌ | ❌ | |
|
||||
| SQLAllocHandle | ✅ | ✅ | ✅ | |
|
||||
| SQLBindCol | ✅ | ✅ | ✅ | Column-Wise Binding only |
|
||||
| SQLBindParameter | ✅ | ✅ | ✅ | Column-Wise Binding only |
|
||||
| SQLBrowseConnect | ❌ | ❌ | ❌ | |
|
||||
| SQLBulkOperations | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLCloseCursor | ✅ | ✅ | ✅ | |
|
||||
| SQLColAttribute | ✅ | ✅ | ✅ | |
|
||||
| SQLColumnPrivileges | ❌ | ❌ | ❌ | TDengine has no strict counterpart |
|
||||
| SQLColumns | ✅ | ✅ | ✅ | |
|
||||
| SQLCompleteAsync | ❌ | ❌ | ❌ | |
|
||||
| SQLConnect | ✅ | ✅ | ✅ | |
|
||||
| SQLCopyDesc | ❌ | ❌ | ❌ | |
|
||||
| SQLDescribeCol | ✅ | ✅ | ✅ | |
|
||||
| SQLDescribeParam | ✅ | ✅ | ✅ | |
|
||||
| SQLDisconnect | ✅ | ✅ | ✅ | |
|
||||
| SQLDriverConnect | ✅ | ✅ | ✅ | |
|
||||
| SQLEndTran | ✅ | ✅ | ✅ | TDengine is non-transactional, so this is at most a simulation |
|
||||
| SQLExecDirect | ✅ | ✅ | ✅ | |
|
||||
| SQLExecute | ✅ | ✅ | ✅ | |
|
||||
| SQLExtendedFetch | ❌ | ❌ | ❌ | |
|
||||
| SQLFetch | ✅ | ✅ | ✅ | |
|
||||
| SQLFetchScroll | ✅ | ✅ | ✅ | TDengine has no counterpart; only SQL_FETCH_NEXT is implemented |
|
||||
| SQLForeignKeys | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLFreeHandle | ✅ | ✅ | ✅ | |
|
||||
| SQLFreeStmt | ✅ | ✅ | ✅ | |
|
||||
| SQLGetConnectAttr | ✅ | ✅ | ✅ | Supports partial attributes; unsupported attributes are listed below. |
|
||||
| SQLGetCursorName | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLGetData | ✅ | ✅ | ✅ | |
|
||||
| SQLGetDescField | ❌ | ❌ | ❌ | |
|
||||
| SQLGetDescRec | ❌ | ❌ | ❌ | |
|
||||
| SQLGetDiagField | ✅ | ✅ | ✅ | |
|
||||
| SQLGetDiagRec | ✅ | ✅ | ✅ | |
|
||||
| SQLGetEnvAttr | ✅ | ✅ | ✅ | |
|
||||
| SQLGetInfo | ✅ | ✅ | ✅ | |
|
||||
| SQLGetStmtAttr | ✅ | ✅ | ✅ | Supports partial attributes; unsupported attributes are listed below. |
|
||||
| SQLGetTypeInfo | ✅ | ✅ | ✅ | |
|
||||
| SQLMoreResults | ✅ | ✅ | ✅ | |
|
||||
| SQLNativeSql | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLNumParams | ✅ | ✅ | ✅ | |
|
||||
| SQLNumResultCols | ✅ | ✅ | ✅ | |
|
||||
| SQLParamData | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLPrepare | ✅ | ✅ | ✅ | |
|
||||
| SQLPrimaryKeys | ✅ | ✅ | ✅ | |
|
||||
| SQLProcedureColumns | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLProcedures | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLPutData | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLRowCount | ✅ | ✅ | ✅ | |
|
||||
| SQLSetConnectAttr | ✅ | ✅ | ✅ | Supports partial attributes; unsupported attributes are listed below. |
|
||||
| SQLSetCursorName | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLSetDescField | ❌ | ❌ | ❌ | |
|
||||
| SQLSetDescRec | ❌ | ❌ | ❌ | |
|
||||
| SQLSetEnvAttr | ✅ | ✅ | ✅ | |
|
||||
| SQLSetPos | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLSetStmtAttr | ✅ | ✅ | ✅ | Supports partial attributes; unsupported attributes are listed below. |
|
||||
| SQLSpecialColumns | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLStatistics | ❌ | ❌ | ❌ | TDengine has no counterpart |
|
||||
| SQLTablePrivileges | ❌ | ❌ | ❌ | TDengine has no strict counterpart |
|
||||
| SQLTables | ✅ | ✅ | ✅ | |
|
||||
|
||||
- **Non-supported-statement-attributes (SQLSetStmtAttr)**
|
||||
|
||||
| Attribute | Note |
|
||||
| :----- | :---- |
|
||||
| SQL_ATTR_CONCURRENCY | TDengine has no updatable-CURSOR mechanism |
|
||||
| SQL_ATTR_FETCH_BOOKMARK_PTR | TDengine has no BOOKMARK mechanism |
|
||||
| SQL_ATTR_IMP_PARAM_DESC | |
|
||||
| SQL_ATTR_IMP_ROW_DESC | |
|
||||
| SQL_ATTR_KEYSET_SIZE | |
|
||||
| SQL_ATTR_PARAM_BIND_OFFSET_PTR | |
|
||||
| SQL_ATTR_PARAM_OPERATION_PTR | |
|
||||
| SQL_ATTR_ROW_NUMBER | Readonly attribute |
|
||||
| SQL_ATTR_ROW_OPERATION_PTR | |
|
||||
| SQL_ATTR_SIMULATE_CURSOR | |
|
||||
|
||||
- **Non-supported-connection-attributes (SQLSetConnectAttr)**
|
||||
|
||||
| Attribute | Note |
|
||||
| :----- | :---- |
|
||||
| SQL_ATTR_AUTO_IPD | Readonly attribute |
|
||||
| SQL_ATTR_CONNECTION_DEAD | Readonly attribute |
|
||||
| SQL_ATTR_ENLIST_IN_DTC | |
|
||||
| SQL_ATTR_PACKET_SIZE | |
|
||||
| SQL_ATTR_TRACE | |
|
||||
| SQL_ATTR_TRACEFILE | |
|
||||
| SQL_ATTR_TRANSLATE_LIB | |
|
||||
| SQL_ATTR_TRANSLATE_OPTION | |
|
||||
|
||||
- **Enable any programming language with ODBC bindings/plugins to communicate with TDengine:**
|
||||
|
||||
| programming language | ODBC-API or bindings/plugins |
|
||||
| :----- | :---- |
|
||||
| C/C++ | ODBC-API |
|
||||
| CSharp | System.Data.Odbc |
|
||||
| Erlang | odbc module |
|
||||
| Go | [odbc](https://github.com/alexbrainman/odbc), database/sql |
|
||||
| Haskell | HDBC, HDBC-odbc |
|
||||
| Common Lisp | plain-odbc |
|
||||
| Nodejs | odbc |
|
||||
| Python3 | pyodbc |
|
||||
| Rust | odbc |
|
||||
|
||||
### API Functional Categories
|
||||
|
||||
This section summarizes the ODBC API by functionality. For a complete ODBC API reference, please visit the [Microsoft Open Database Connectivity (ODBC)](https://learn.microsoft.com/en-us/sql/odbc/microsoft-open-database-connectivity-odbc).
|
||||
|
||||
#### Data Source and Driver Management
|
||||
|
||||
- API: ConfigDSN
|
||||
- **Supported**: Yes
|
||||
- **Supported**: Yes (Windows only)
|
||||
- **Standard**: ODBC
|
||||
- **Function**: Configures data sources
|
||||
|
||||
- API: ConfigDriver
|
||||
- **Supported**: Yes
|
||||
- **Supported**: Yes (Windows only)
|
||||
- **Standard**: ODBC
|
||||
- **Function**: Used to perform installation and configuration tasks related to a specific driver
|
||||
|
||||
- API: ConfigTranslator
|
||||
- **Supported**: Yes
|
||||
- **Supported**: No
|
||||
- **Standard**: ODBC
|
||||
- **Function**: Used to parse the DSN configuration, translating or converting between DSN configuration and actual database driver configuration
|
||||
|
||||
### Connecting to Data Sources
|
||||
#### Connecting to Data Sources
|
||||
|
||||
- API: SQLAllocHandle
|
||||
- **Supported**: Yes
|
||||
|
@ -204,7 +316,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: Deprecated
|
||||
- **Function**: In ODBC 3.x, the ODBC 2.x function SQLAllocConnect has been replaced by SQLAllocHandle
|
||||
|
||||
### Retrieving Information about Drivers and Data Sources
|
||||
#### Retrieving Information about Drivers and Data Sources
|
||||
|
||||
- API: SQLDataSources
|
||||
- **Supported**: No
|
||||
|
@ -231,7 +343,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: ISO 92
|
||||
- **Function**: Returns information about supported data types
|
||||
|
||||
### Setting and Retrieving Driver Properties
|
||||
#### Setting and Retrieving Driver Properties
|
||||
|
||||
- API: SQLSetConnectAttr
|
||||
- **Supported**: Yes
|
||||
|
@ -283,7 +395,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: Deprecated
|
||||
- **Purpose**: In ODBC 3.x, the ODBC 2.0 function SQLSetStmtOption has been replaced by SQLGetStmtAttr
|
||||
|
||||
### Preparing SQL Requests
|
||||
#### Preparing SQL Requests
|
||||
|
||||
- API: SQLAllocStmt
|
||||
- **Supported**: Not supported
|
||||
|
@ -320,7 +432,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: ODBC
|
||||
- **Purpose**: Sets options that control cursor behavior
|
||||
|
||||
### Submitting Requests
|
||||
#### Submitting Requests
|
||||
|
||||
- API: SQLExecute
|
||||
- **Supported**: Supported
|
||||
|
@ -357,7 +469,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: ISO 92
|
||||
- **Function**: When using stream input mode, it can be used to send data blocks to output parameters
|
||||
|
||||
### Retrieving Results and Information About Results
|
||||
#### Retrieving Results and Information About Results
|
||||
|
||||
- API: SQLRowCount
|
||||
- **Support**: Supported
|
||||
|
@ -419,7 +531,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: ODBC
|
||||
- **Function**: Performs bulk insert and bulk bookmark operations, including updates, deletions, and fetching by bookmark
|
||||
|
||||
### Retrieving Error or Diagnostic Information
|
||||
#### Retrieving Error or Diagnostic Information
|
||||
|
||||
- API: SQLError
|
||||
- **Support**: Not supported
|
||||
|
@ -436,7 +548,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: ISO 92
|
||||
- **Function**: Returns additional diagnostic information (multiple diagnostic results)
|
||||
|
||||
### Retrieving Information About System Table Entries Related to the Data Source
|
||||
#### Retrieving Information About System Table Entries Related to the Data Source
|
||||
|
||||
- API: SQLColumnPrivileges
|
||||
- **Support**: Not supported
|
||||
|
@ -488,7 +600,7 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: ODBC
|
||||
- **Function**: Returns column information for stored procedures, including details of input and output parameters
|
||||
|
||||
### Transaction Execution
|
||||
#### Transaction Execution
|
||||
|
||||
- API: SQLTransact
|
||||
- **Support**: Not supported
|
||||
|
@ -498,9 +610,9 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- API: SQLEndTran
|
||||
- **Support**: Supported
|
||||
- **Standard**: ISO 92
|
||||
- **Function**: Used to commit or rollback transactions, TDengine does not support transactions, therefore rollback operation is not supported
|
||||
- **Function**: Used to commit or rollback transactions. TDengine is non-transactional, so this function can at most simulate commit or rollback operations. If there are any outstanding connections or statements, neither commit nor rollback will succeed
|
||||
|
||||
### Connection Termination
|
||||
#### Connection Termination
|
||||
|
||||
- API: SQLDisconnect
|
||||
- **Support**: Supported
|
||||
|
@ -532,6 +644,3 @@ This section summarizes the ODBC API by functionality. For a complete ODBC API r
|
|||
- **Standard**: ODBC
|
||||
- **Function**: Closes the cursor associated with the current statement handle and releases all resources used by the cursor
|
||||
|
||||
## Integration with Third Parties
|
||||
|
||||
As an example of using the TDengine ODBC driver, you can use Power BI to analyze time-series data with TDengine. For more details, please refer to [Power BI](../../../third-party-tools/analytics/power-bi/)
|
||||
|
|
|
@ -458,6 +458,12 @@ This document details the server error codes that may be encountered when using
|
|||
| 0x80002665 | The _TAGS pseudocolumn can only be used for subtable and supertable queries | Illegal tag column query | Check and correct the SQL statement |
|
||||
| 0x80002666 | Subquery does not output primary timestamp column | | Check and correct the SQL statement |
|
||||
| 0x80002667 | Invalid usage of expr: %s | Illegal expression | Check and correct the SQL statement |
|
||||
| 0x80002687 | True_for duration cannot be negative | A negative value was used as the true_for duration | Check and correct the SQL statement |
|
||||
| 0x80002688 | Cannot use 'year' or 'month' as true_for duration | Year or month was used as the true_for duration | Check and correct the SQL statement |
|
||||
| 0x80002689 | Invalid using cols function | Illegal use of the cols function | Check and correct the SQL statement |
|
||||
| 0x8000268A | Cols function's first param must be a select function that output a single row | The first parameter of the cols function should be a selection function | Check and correct the SQL statement |
|
||||
| 0x8000268B | Invalid using cols function with multiple output columns | Illegal use of the cols function with multiple output columns | Check and correct the SQL statement |
|
||||
| 0x8000268C | Invalid using alias for cols function | Illegal cols function alias | Check and correct the SQL statement |
|
||||
| 0x800026FF | Parser internal error | Internal error in parser | Preserve the scene and logs, report issue on GitHub |
|
||||
| 0x80002700 | Planner internal error | Internal error in planner | Preserve the scene and logs, report issue on GitHub |
|
||||
| 0x80002701 | Expect ts equal | JOIN condition validation failed | Preserve the scene and logs, report issue on GitHub |
|
||||
|
|
|
@ -73,7 +73,7 @@ If the client encounters a connection failure, please follow the steps below to
|
|||
|
||||
### 5. What to do if you encounter the error "Unable to resolve FQDN"?
|
||||
|
||||
This error occurs because the client or data node cannot resolve the FQDN (Fully Qualified Domain Name). For the TAOS Shell or client applications, please check the following:
|
||||
This error occurs because the client or data node cannot resolve the FQDN (Fully Qualified Domain Name). For the TDengine CLI or client applications, please check the following:
|
||||
|
||||
1. Check if the FQDN of the server you are connecting to is correct.
|
||||
2. If there is a DNS server in the network configuration, check if it is working properly
|
||||
|
@ -244,15 +244,15 @@ launchctl limit maxfiles
|
|||
This prompt indicates that the number of vnodes required to create the database exceeds the number available in the dnode. By default, a dnode provides twice as many vnodes as CPU cores; this can also be controlled by the supportVnodes parameter in the configuration file.
|
||||
The usual fix is to increase the supportVnodes parameter in taos.cfg.
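For example, a sketch of the relevant taos.cfg line (the value 128 is an arbitrary illustration; restart taosd after changing it):

```
supportVnodes 128
```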
|
||||
|
||||
### 21 Why can data from a specified time period be queried using taos-CLI on the server, but not on the client machine?
|
||||
### 21 Why can data from a specified time period be queried using TDengine CLI on the server, but not on the client machine?
|
||||
|
||||
This issue is due to the client and server having different time zone settings. Adjusting the client's time zone to match the server's will resolve the issue.
|
||||
|
||||
### 22 The table name is confirmed to exist, but returns "table name does not exist" when writing or querying, why?
|
||||
|
||||
In TDengine, all names, including database names and table names, are case-sensitive. If these names are not enclosed in backticks (\`) in the program or taos-CLI, even if you input them in uppercase, the engine will convert them to lowercase for use. If the names are enclosed in backticks, the engine will not convert them to lowercase and will use them as is.
|
||||
In TDengine, all names, including database names and table names, are case-sensitive. If these names are not enclosed in backticks (\`) in the program or TDengine CLI, even if you input them in uppercase, the engine will convert them to lowercase for use. If the names are enclosed in backticks, the engine will not convert them to lowercase and will use them as is.
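A quick sketch illustrating the rule (the table name is hypothetical):

```sql
CREATE TABLE `MyTable` (ts TIMESTAMP, v INT);
SELECT * FROM `MyTable`;  -- works: the case is preserved
SELECT * FROM MyTable;    -- fails: resolved as the lowercase name "mytable"
```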
|
||||
|
||||
### 23 How to fully display field content in taos-CLI queries?
|
||||
### 23 How to fully display field content in TDengine CLI queries?
|
||||
|
||||
You can use the \G parameter for vertical display, such as `show databases\G\;` (for ease of input, press TAB after "\" to automatically complete the content).
|
||||
|
||||
|
@ -315,4 +315,4 @@ Problem solving: You should configure the automatic mount of the dataDir directo
|
|||
Querying a child table directly is fast. Querying a supertable with a TAG filter is designed for query convenience: it can filter data from multiple child tables at once. If the goal is performance and the target child table is already known, querying the child table directly achieves higher performance.
|
||||
|
||||
### 35 How to view data compression ratio indicators?
|
||||
Currently, TDengine only provides compression ratios based on tables, not databases or the entire system. To view the compression ratios, execute the `SHOW TABLE DISTRIBUTED table_name;` command in the client taos-CLI. The table_name can be a super table, regular table, or subtable. For details [Click Here](https://docs.tdengine.com/tdengine-reference/sql-manual/show-commands/#show-table-distributed)
|
||||
Currently, TDengine only provides compression ratios based on tables, not databases or the entire system. To view the compression ratios, execute the `SHOW TABLE DISTRIBUTED table_name;` command in the client TDengine CLI. The table_name can be a super table, regular table, or subtable. For details [Click Here](https://docs.tdengine.com/tdengine-reference/sql-manual/show-commands/#show-table-distributed)
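For example, assuming a supertable meters exists:

```sql
SHOW TABLE DISTRIBUTED meters;
```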
|
|
@ -4,7 +4,7 @@ title: taosTools Release History and Download Links
|
|||
slug: /release-history/taostools
|
||||
---
|
||||
|
||||
Download links for various versions of taosTools are as follows:
|
||||
Starting from version 3.0.6.0, taosTools has been integrated into the TDengine installation package and is no longer provided separately. Download links for various versions of taosTools (corresponding to TDengine 3.0.5.2 and earlier) are as follows:
|
||||
|
||||
For other historical versions, please visit [here](https://tdengine.com/downloads/historical)
|
||||
|
||||
|
|
|
@ -56,4 +56,4 @@ slug: /release-history/release-notes/3-3-2-0
|
|||
13. Upgrading to 3.3.0.0 and enabling `cachemodel` causes incorrect row count returns for `last + group by`
|
||||
14. `taos-explorer` navigation bar does not display all supertable names (Enterprise Edition only)
|
||||
15. Querying causes `taosd` to crash when compound primary key VARCHAR length exceeds 125
|
||||
16. High CPU usage by `taos CLI` and `taosAdapter`
|
||||
16. High CPU usage by `TDengine CLI` and `taosAdapter`
|
||||
|
|
|
@ -53,7 +53,7 @@ slug: /release-history/release-notes/3-3-3-0
|
|||
22. Cursor error in data filling during cache update causes taosd to exit abnormally
|
||||
23. Random incorrect results of STDDEV function calculation
|
||||
24. Unable to add offline nodes in multi-tier storage and encryption scenarios
|
||||
25. taos CLI cannot input passwords longer than 20 bytes
|
||||
25. TDengine CLI cannot input passwords longer than 20 bytes
|
||||
26. SQL write error: int data overflow
|
||||
27. Metadata consistency in scenarios of high query concurrency
|
||||
28. Attempt to solve the issue where manually clicking the stop button does not stop the task
|
||||
|
@ -90,4 +90,4 @@ slug: /release-history/release-notes/3-3-3-0
|
|||
59. Client memory leak
|
||||
60. Open-source users unable to modify other database options after upgrading stt_trigger value
|
||||
61. Incorrect results for NOT IN (NULL) queries
|
||||
62. taos shell and taosBenchmark unable to successfully connect to cloud service instances
|
||||
62. TDengine CLI and taosBenchmark unable to successfully connect to cloud service instances
|
||||
|
|
|
@ -52,13 +52,13 @@ slug: /release-history/release-notes/3-3-4-3
|
|||
1. fix: memory leak caused by repeated addition and deletion of tables on the Windows platform.
|
||||
1. fix(stream): check the right return code for concurrent checkpoint trans.
|
||||
1. fix: the "too many session" problem while perform large concurrent queries.
|
||||
1. fix: the problem of taos shell crashing in slow query scenarios on the Windows platform.
|
||||
1. fix: the encrypted database cannot be recovered when opening the dnode log.
|
||||
1. fix: the problem that taosd cannot be started due to mnode synchronization timeout.
|
||||
1. fix: slow sorting of file group data during snapshot synchronization prevents the vnode from recovering.
|
||||
1. fix: when writing data with escape characters to a varchar field through line protocol, taosd will crash.
|
||||
1. fix: metadata file damage caused by incorrect error-code handling logic
|
||||
1. fix: when a query statement contains multiple nested "not" conditional statements, not setting the scalar mode will lead to query errors.
|
||||
1. fix: the problem of dnode going offline due to timeout of vnode stat report.
|
||||
1. fix: taosd failed to start on servers that do not support AVX instructions.
|
||||
1. fix(taosX): handle 0x09xx error codes in migration
|
||||
2. fix: the problem of TDengine CLI crashing in slow query scenarios on the Windows platform.
|
||||
3. fix: the encrypted database cannot be recovered when opening the dnode log.
|
||||
4. fix: the problem that taosd cannot be started due to mnode synchronization timeout.
|
||||
5. fix: slow sorting of file group data during snapshot synchronization prevents the vnode from recovering.
|
||||
6. fix: when writing data with escape characters to a varchar field through line protocol, taosd will crash.
|
||||
7. fix: metadata file damage caused by incorrect error-code handling logic
|
||||
8. fix: when a query statement contains multiple nested "not" conditional statements, not setting the scalar mode will lead to query errors.
|
||||
9. fix: the problem of dnode going offline due to timeout of vnode stat report.
|
||||
10. fix: taosd failed to start on servers that do not support AVX instructions.
|
||||
11. fix(taosX): handle 0x09xx error codes in migration
|
||||
|
|
|
@ -10,7 +10,7 @@ slug: /release-history/release-notes/3-3-5-0
|
|||
2. feat: refactor taosX incremental backup-restore
|
||||
3. feat: add stmt2 apis in JDBC via websocket connection
|
||||
4. feat: add stmt2 api in Rust connector
|
||||
5. feat: add error codes in error prompts in taos-CLI
|
||||
5. feat: add error codes in error prompts in TDengine CLI
|
||||
6. feat: Superset can connect to TDengine with the Python connector
|
||||
7. feat: configurable grafana dashboards in explorer management
|
||||
8. feat: add taosX-agent in-memory cache queue capacity option
|
||||
|
|
|
@ -38,6 +38,6 @@ slug: /release-history/release-notes/3.3.5.2
|
|||
20. fix: column names were not correctly copied when using SELECT * FROM subqueries
|
||||
21. fix: when performing max/min function on string type data, the results are inaccurate and taosd will crash
|
||||
22. fix: stream computing does not support the use of the HAVING clause, but no error is reported during creation
|
||||
23. fix: the version information displayed by taos shell for the server is inaccurate, such as being unable to correctly distinguish between the community edition and the enterprise edition
|
||||
23. fix: the version information displayed by TDengine CLI for the server is inaccurate, such as being unable to correctly distinguish between the community edition and the enterprise edition
|
||||
24. fix: in certain specific query scenarios, when JOIN and CAST are used together, taosd may crash
|
||||
|
||||
|
|
|
@ -3,13 +3,9 @@ title: Release Notes
|
|||
slug: /release-history/release-notes
|
||||
---
|
||||
|
||||
[3.3.5.0](./3-3-5-0/)
|
||||
```mdx-code-block
|
||||
import DocCardList from '@theme/DocCardList';
|
||||
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
|
||||
|
||||
[3.3.5.2](./3.3.5.2)
|
||||
[3.3.4.8](./3-3-4-8/)
|
||||
|
||||
[3.3.4.3](./3-3-4-3/)
|
||||
|
||||
[3.3.3.0](./3-3-3-0/)
|
||||
|
||||
[3.3.2.0](./3-3-2-0/)
|
||||
<DocCardList items={useCurrentSidebarCategory().items}/>
|
||||
```
|
After Width: | Height: | Size: 56 KiB |
|
@ -1,84 +1,84 @@
|
|||
### Configuring taosAdapter
|
||||
#### Configuring taosAdapter
|
||||
|
||||
Method to configure taosAdapter to receive collectd data:
|
||||
|
||||
- Enable the configuration item in the taosAdapter configuration file (default location is /etc/taos/taosadapter.toml)
|
||||
|
||||
```
|
||||
...
|
||||
[opentsdb_telnet]
|
||||
enable = true
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
|
||||
ports = [6046, 6047, 6048, 6049]
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
...
|
||||
```
|
||||
```toml
|
||||
...
|
||||
[opentsdb_telnet]
|
||||
enable = true
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
|
||||
ports = [6046, 6047, 6048, 6049]
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
...
|
||||
```
|
||||
|
||||
The default database name written by taosAdapter is `collectd`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
|
||||
The default database name written by taosAdapter is `collectd`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
|
||||
|
||||
- You can also use taosAdapter command line parameters or set environment variables to start, to enable taosAdapter to receive collectd data, for more details please refer to the taosAdapter reference manual.
|
||||
|
||||
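Not part of the original diff: a minimal sketch of the restart step mentioned above, assuming taosAdapter was installed from the TDengine server package and is managed by systemd:

```shell
# Pick up the edited /etc/taos/taosadapter.toml by restarting the service
sudo systemctl restart taosadapter
# Confirm the service is up before pointing collectd at port 6045
sudo systemctl status taosadapter
```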
### Configuring collectd
|
||||
#### Configuring collectd
|
||||
|
||||
collectd uses a plugin mechanism that can write the collected monitoring data to different data storage software in various forms. TDengine supports direct collection plugins and write_tsdb plugins.
|
||||
|
||||
#### Configuring to receive direct collection plugin data
|
||||
1. **Configuring to receive direct collection plugin data**
|
||||
|
||||
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
|
||||
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
|
||||
|
||||
```text
|
||||
LoadPlugin network
|
||||
<Plugin network>
|
||||
Server "<taosAdapter's host>" "<port for collectd direct>"
|
||||
</Plugin>
|
||||
```
|
||||
```xml
|
||||
LoadPlugin network
|
||||
<Plugin network>
|
||||
Server "<taosAdapter's host>" "<port for collectd direct>"
|
||||
</Plugin>
|
||||
```
|
||||
|
||||
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd direct> should be filled with the port used by taosAdapter to receive collectd data (default is 6045).
|
||||
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd direct> should be filled with the port used by taosAdapter to receive collectd data (default is 6045).
|
||||
|
||||
Example as follows:
|
||||
Example as follows:
|
||||
|
||||
```text
|
||||
LoadPlugin network
|
||||
<Plugin network>
|
||||
Server "127.0.0.1" "6045"
|
||||
</Plugin>
|
||||
```
|
||||
```xml
|
||||
LoadPlugin network
|
||||
<Plugin network>
|
||||
Server "127.0.0.1" "6045"
|
||||
</Plugin>
|
||||
```
|
||||
|
||||
#### Configuring write_tsdb plugin data
|
||||
2. **Configuring write_tsdb plugin data**
|
||||
|
||||
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
|
||||
Modify the related configuration items in the collectd configuration file (default location /etc/collectd/collectd.conf).
|
||||
|
||||
```text
|
||||
LoadPlugin write_tsdb
|
||||
<Plugin write_tsdb>
|
||||
<Node>
|
||||
Host "<taosAdapter's host>"
|
||||
Port "<port for collectd write_tsdb plugin>"
|
||||
...
|
||||
</Node>
|
||||
</Plugin>
|
||||
```
|
||||
```xml
|
||||
LoadPlugin write_tsdb
|
||||
<Plugin write_tsdb>
|
||||
<Node>
|
||||
Host "<taosAdapter's host>"
|
||||
Port "<port for collectd write_tsdb plugin>"
|
||||
...
|
||||
</Node>
|
||||
</Plugin>
|
||||
```
|
||||
|
||||
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd write_tsdb plugin> should be filled with the port used by taosAdapter to receive collectd write_tsdb plugin data (default is 6047).
|
||||
Where \<taosAdapter's host> should be filled with the domain name or IP address of the server running taosAdapter. \<port for collectd write_tsdb plugin> should be filled with the port used by taosAdapter to receive collectd write_tsdb plugin data (default is 6047).
|
||||
|
||||
```text
|
||||
LoadPlugin write_tsdb
|
||||
<Plugin write_tsdb>
|
||||
<Node>
|
||||
Host "127.0.0.1"
|
||||
Port "6047"
|
||||
HostTags "status=production"
|
||||
StoreRates false
|
||||
AlwaysAppendDS false
|
||||
</Node>
|
||||
</Plugin>
|
||||
```
|
||||
```xml
|
||||
LoadPlugin write_tsdb
|
||||
<Plugin write_tsdb>
|
||||
<Node>
|
||||
Host "127.0.0.1"
|
||||
Port "6047"
|
||||
HostTags "status=production"
|
||||
StoreRates false
|
||||
AlwaysAppendDS false
|
||||
</Node>
|
||||
</Plugin>
|
||||
```
|
||||
|
||||
Then restart collectd:
|
||||
Then restart collectd:
|
||||
|
||||
```
|
||||
systemctl restart collectd
|
||||
```
|
||||
```shell
|
||||
systemctl restart collectd
|
||||
```
|
||||
|
|
|
@ -1,43 +1,43 @@
|
|||
### Configuring taosAdapter
|
||||
#### Configuring taosAdapter
|
||||
|
||||
Method to configure taosAdapter to receive icinga2 data:
|
||||
|
||||
- Enable the configuration item in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)
|
||||
|
||||
```
|
||||
...
|
||||
[opentsdb_telnet]
|
||||
enable = true
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
|
||||
ports = [6046, 6047, 6048, 6049]
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
...
|
||||
```
|
||||
```toml
|
||||
...
|
||||
[opentsdb_telnet]
|
||||
enable = true
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
|
||||
ports = [6046, 6047, 6048, 6049]
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
...
|
||||
```
|
||||
|
||||
The default database name written by taosAdapter is `icinga2`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. taosAdapter needs to be restarted after modifications.
|
||||
The default database name written by taosAdapter is `icinga2`, but you can also modify the dbs item in the taosAdapter configuration file to specify a different name. Fill in user and password with the actual TDengine configuration values. taosAdapter needs to be restarted after modifications.
|
||||
|
||||
- You can also use taosAdapter command line parameters or set environment variables to enable taosAdapter to receive icinga2 data, for more details please refer to the taosAdapter reference manual
|
||||
|
||||
### Configuring icinga2
|
||||
#### Configuring icinga2
|
||||
|
||||
- Enable icinga2's opentsdb-writer (reference link https://icinga.com/docs/icinga-2/latest/doc/14-features/#opentsdb-writer)
|
||||
- Modify the configuration file `/etc/icinga2/features-enabled/opentsdb.conf`, filling in \<taosAdapter's host> with the domain name or IP address of the server running taosAdapter, and \<port for icinga2> with the corresponding port supported by taosAdapter for receiving icinga2 data (default is 6048)
|
||||
|
||||
```
|
||||
object OpenTsdbWriter "opentsdb" {
|
||||
host = "<taosAdapter's host>"
|
||||
port = <port for icinga2>
|
||||
}
|
||||
```
|
||||
```c
|
||||
object OpenTsdbWriter "opentsdb" {
|
||||
host = "<taosAdapter's host>"
|
||||
port = <port for icinga2>
|
||||
}
|
||||
```
|
||||
|
||||
Example file:
|
||||
Example file:
|
||||
|
||||
```
|
||||
object OpenTsdbWriter "opentsdb" {
|
||||
host = "127.0.0.1"
|
||||
port = 6048
|
||||
}
|
||||
```
|
||||
```c
|
||||
object OpenTsdbWriter "opentsdb" {
|
||||
host = "127.0.0.1"
|
||||
port = 6048
|
||||
}
|
||||
```
|
||||
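Not part of the original diff: after editing the feature file shown above, the feature normally also needs to be activated and icinga2 restarted. A minimal sketch, assuming a systemd-managed icinga2:

```shell
# Activate the opentsdb feature (creates the features-enabled link), then restart
icinga2 feature enable opentsdb
sudo systemctl restart icinga2
```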
|
|
|
@ -1,18 +1,18 @@
|
|||
Configuring Prometheus is done by editing the Prometheus configuration file `prometheus.yml` (default location `/etc/prometheus/prometheus.yml`).
|
||||
|
||||
### Configure Third-Party Database Address
|
||||
#### Configure Third-Party Database Address
|
||||
|
||||
Set the `remote_read url` and `remote_write url` to point to the domain name or IP address of the server running the taosAdapter service, the REST service port (taosAdapter defaults to 6041), and the name of the database you want to write to in TDengine, ensuring the URLs are formatted as follows:
|
||||
|
||||
- remote_read url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_read/<database name>`
|
||||
- remote_write url: `http://<taosAdapter's host>:<REST service port>/prometheus/v1/remote_write/<database name>`
|
||||
|
||||
### Configure Basic Authentication
|
||||
#### Configure Basic Authentication
|
||||
|
||||
- username: \<TDengine's username>
|
||||
- password: \<TDengine's password>
|
||||
|
||||
### Example configuration of remote_write and remote_read in the prometheus.yml file
|
||||
#### Example configuration of remote_write and remote_read in the prometheus.yml file
|
||||
|
||||
```yaml
|
||||
remote_write:
|
||||
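The `yaml` hunk above is cut off by the diff. For orientation only, a minimal complete sketch under assumptions (a hypothetical database named `prometheus_data`, taosAdapter on localhost with the default REST port 6041, and default credentials), following the URL formats and basic-auth fields described above:

```yaml
remote_write:
  - url: "http://localhost:6041/prometheus/v1/remote_write/prometheus_data"   # assumed database name
    basic_auth:
      username: root        # TDengine username
      password: taosdata    # TDengine password

remote_read:
  - url: "http://localhost:6041/prometheus/v1/remote_read/prometheus_data"
    basic_auth:
      username: root
      password: taosdata
    read_recent: true       # also serve recent samples from the remote store
```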
|
|
|
@ -1,46 +1,46 @@
|
|||
### Configure taosAdapter
|
||||
#### Configure taosAdapter
|
||||
|
||||
Method to configure taosAdapter to receive StatsD data:
|
||||
|
||||
- Enable the configuration item in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)
|
||||
|
||||
```
|
||||
...
|
||||
[statsd]
|
||||
enable = true
|
||||
port = 6044
|
||||
db = "statsd"
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
worker = 10
|
||||
gatherInterval = "5s"
|
||||
protocol = "udp"
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
allowPendingMessages = 50000
|
||||
deleteCounters = true
|
||||
deleteGauges = true
|
||||
deleteSets = true
|
||||
deleteTimings = true
|
||||
...
|
||||
```
|
||||
```toml
|
||||
...
|
||||
[statsd]
|
||||
enable = true
|
||||
port = 6044
|
||||
db = "statsd"
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
worker = 10
|
||||
gatherInterval = "5s"
|
||||
protocol = "udp"
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
allowPendingMessages = 50000
|
||||
deleteCounters = true
|
||||
deleteGauges = true
|
||||
deleteSets = true
|
||||
deleteTimings = true
|
||||
...
|
||||
```
|
||||
|
||||
The default database name written by taosAdapter is `statsd`, but you can also modify the db item in the taosAdapter configuration file to specify a different name. Fill in the user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
|
||||
The default database name written by taosAdapter is `statsd`, but you can also modify the db item in the taosAdapter configuration file to specify a different name. Fill in the user and password with the actual TDengine configuration values. After modifying the configuration file, taosAdapter needs to be restarted.
|
||||
|
||||
- You can also use taosAdapter command line arguments or set environment variables to enable the taosAdapter to receive StatsD data. For more details, please refer to the taosAdapter reference manual.
|
||||
|
||||
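Not part of the original doc: a hedged sketch of the command-line alternative mentioned above. The flag names below are an assumption, mirrored from the `[statsd]` keys in taosadapter.toml; verify them against `taosadapter --help` before relying on them:

```shell
# Assumed flags (mirroring the [statsd] config keys); check taosadapter --help
taosadapter --statsd.enable=true --statsd.port=6044 --statsd.db=statsd
```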
### Configure StatsD
|
||||
#### Configure StatsD
|
||||
|
||||
To use StatsD, download its [source code](https://github.com/statsd/statsd). Modify its configuration file according to the example file `exampleConfig.js` found in the root directory of the local source code download. Replace \<taosAdapter's host> with the domain name or IP address of the server running taosAdapter, and \<port for StatsD> with the port that taosAdapter uses to receive StatsD data (default is 6044).
|
||||
|
||||
```
|
||||
```text
|
||||
Add to the backends section "./backends/repeater"
|
||||
Add to the repeater section { host:'<taosAdapter's host>', port: <port for StatsD>}
|
||||
```
|
||||
|
||||
Example configuration file:
|
||||
|
||||
```
|
||||
```js
|
||||
{
|
||||
port: 8125
|
||||
, backends: ["./backends/repeater"]
|
||||
|
@ -50,7 +50,7 @@ port: 8125
|
|||
|
||||
After adding the content above, start StatsD (assuming the modified configuration file is saved as config.js).
|
||||
|
||||
```
|
||||
```shell
|
||||
npm install
|
||||
node stats.js config.js &
|
||||
```
|
||||
|
|
|
@ -1,27 +1,27 @@
|
|||
### Configuring taosAdapter
|
||||
#### Configuring taosAdapter
|
||||
|
||||
To configure taosAdapter to receive data from TCollector:
|
||||
|
||||
- Enable the configuration in the taosAdapter configuration file (default location /etc/taos/taosadapter.toml)
|
||||
|
||||
```
|
||||
...
|
||||
[opentsdb_telnet]
|
||||
enable = true
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
|
||||
ports = [6046, 6047, 6048, 6049]
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
...
|
||||
```
|
||||
```toml
|
||||
...
|
||||
[opentsdb_telnet]
|
||||
enable = true
|
||||
maxTCPConnections = 250
|
||||
tcpKeepAlive = false
|
||||
dbs = ["opentsdb_telnet", "collectd", "icinga2", "tcollector"]
|
||||
ports = [6046, 6047, 6048, 6049]
|
||||
user = "root"
|
||||
password = "taosdata"
|
||||
...
|
||||
```
|
||||
|
||||
The default database name that taosAdapter writes to is `tcollector`, but you can specify a different name by modifying the dbs option in the taosAdapter configuration file. Fill in the user and password with the actual values configured in TDengine. After modifying the configuration file, taosAdapter needs to be restarted.
|
||||
The default database name that taosAdapter writes to is `tcollector`, but you can specify a different name by modifying the dbs option in the taosAdapter configuration file. Fill in the user and password with the actual values configured in TDengine. After modifying the configuration file, taosAdapter needs to be restarted.
|
||||
|
||||
- You can also use taosAdapter command line arguments or set environment variables to enable the taosAdapter to receive tcollector data. For more details, please refer to the taosAdapter reference manual.
|
||||
|
||||
### Configuring TCollector
|
||||
#### Configuring TCollector
|
||||
|
||||
To use TCollector, download its [source code](https://github.com/OpenTSDB/tcollector). Its configuration options are in its source code. Note: There are significant differences between different versions of TCollector; this only refers to the latest code in the current master branch (git commit: 37ae920).
|
||||
|
||||
|
@ -29,7 +29,7 @@ Modify the contents of `collectors/etc/config.py` and `tcollector.py`. Change th
|
|||
|
||||
Example of git diff output for source code modifications:
|
||||
|
||||
```
|
||||
```diff
|
||||
index e7e7a1c..ec3e23c 100644
|
||||
--- a/collectors/etc/config.py
|
||||
+++ b/collectors/etc/config.py
|
||||
|
|
|
@ -19,7 +19,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>org.locationtech.jts</groupId>
|
||||
|
|
|
@ -47,7 +47,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
</dependency>
|
||||
|
||||
</dependencies>
|
||||
|
|
|
@ -18,7 +18,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
</dependency>
|
||||
<!-- druid -->
|
||||
<dependency>
|
||||
|
|
|
@ -17,7 +17,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.google.guava</groupId>
|
||||
|
|
|
@ -5,7 +5,7 @@
|
|||
<parent>
|
||||
<groupId>org.springframework.boot</groupId>
|
||||
<artifactId>spring-boot-starter-parent</artifactId>
|
||||
<version>2.4.0</version>
|
||||
<version>2.7.18</version>
|
||||
<relativePath/> <!-- lookup parent from repository -->
|
||||
</parent>
|
||||
<groupId>com.taosdata.example</groupId>
|
||||
|
@ -18,6 +18,18 @@
|
|||
<java.version>1.8</java.version>
|
||||
</properties>
|
||||
|
||||
<dependencyManagement>
|
||||
<dependencies>
|
||||
<dependency>
|
||||
<groupId>com.baomidou</groupId>
|
||||
<artifactId>mybatis-plus-bom</artifactId>
|
||||
<version>3.5.10.1</version>
|
||||
<type>pom</type>
|
||||
<scope>import</scope>
|
||||
</dependency>
|
||||
</dependencies>
|
||||
</dependencyManagement>
|
||||
|
||||
<dependencies>
|
||||
<dependency>
|
||||
<groupId>org.springframework.boot</groupId>
|
||||
|
@ -28,14 +40,21 @@
|
|||
<artifactId>lombok</artifactId>
|
||||
<optional>true</optional>
|
||||
</dependency>
|
||||
<!-- optional module for Spring Boot 2 -->
|
||||
<dependency>
|
||||
<groupId>com.baomidou</groupId>
|
||||
<artifactId>mybatis-plus-boot-starter</artifactId>
|
||||
<version>3.1.2</version>
|
||||
</dependency>
|
||||
|
||||
<!-- optional module for JDK 8+ -->
|
||||
<dependency>
|
||||
<groupId>com.baomidou</groupId>
|
||||
<artifactId>mybatis-plus-jsqlparser-4.9</artifactId>
|
||||
</dependency>
|
||||
<dependency>
|
||||
<groupId>com.h2database</groupId>
|
||||
<artifactId>h2</artifactId>
|
||||
<version>2.3.232</version>
|
||||
<scope>runtime</scope>
|
||||
</dependency>
|
||||
<dependency>
|
||||
|
@ -47,7 +66,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
|
|
|
@ -1,34 +1,26 @@
|
|||
package com.taosdata.example.mybatisplusdemo.config;
|
||||
|
||||
import com.baomidou.mybatisplus.extension.plugins.PaginationInterceptor;
|
||||
|
||||
import com.baomidou.mybatisplus.annotation.DbType;
|
||||
import com.baomidou.mybatisplus.extension.plugins.MybatisPlusInterceptor;
|
||||
import com.baomidou.mybatisplus.extension.plugins.inner.PaginationInnerInterceptor;
|
||||
import org.mybatis.spring.annotation.MapperScan;
|
||||
import org.springframework.context.annotation.Bean;
|
||||
import org.springframework.context.annotation.Configuration;
|
||||
import org.springframework.transaction.annotation.EnableTransactionManagement;
|
||||
|
||||
@EnableTransactionManagement
|
||||
@Configuration
|
||||
@MapperScan("com.taosdata.example.mybatisplusdemo.mapper")
|
||||
public class MybatisPlusConfig {
|
||||
|
||||
|
||||
/** mybatis 3.4.1 pagination config start ***/
|
||||
// @Bean
|
||||
// public MybatisPlusInterceptor mybatisPlusInterceptor() {
|
||||
// MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
|
||||
// interceptor.addInnerInterceptor(new PaginationInnerInterceptor());
|
||||
// return interceptor;
|
||||
// }
|
||||
|
||||
// @Bean
|
||||
// public ConfigurationCustomizer configurationCustomizer() {
|
||||
// return configuration -> configuration.setUseDeprecatedExecutor(false);
|
||||
// }
|
||||
|
||||
/**
|
||||
* Register the pagination plugin
|
||||
*/
|
||||
@Bean
|
||||
public PaginationInterceptor paginationInterceptor() {
|
||||
// return new PaginationInterceptor();
|
||||
PaginationInterceptor paginationInterceptor = new PaginationInterceptor();
|
||||
//TODO: mybatis-plus do not support TDengine, use postgresql Dialect
|
||||
paginationInterceptor.setDialectType("postgresql");
|
||||
|
||||
return paginationInterceptor;
|
||||
public MybatisPlusInterceptor mybatisPlusInterceptor() {
|
||||
MybatisPlusInterceptor interceptor = new MybatisPlusInterceptor();
|
||||
interceptor.addInnerInterceptor(new PaginationInnerInterceptor(DbType.MYSQL));
|
||||
return interceptor;
|
||||
}
|
||||
|
||||
}
|
||||
|
|
|
@ -5,6 +5,7 @@ import com.taosdata.example.mybatisplusdemo.domain.Meters;
|
|||
import org.apache.ibatis.annotations.Insert;
|
||||
import org.apache.ibatis.annotations.Param;
|
||||
import org.apache.ibatis.annotations.Update;
|
||||
import org.apache.ibatis.executor.BatchResult;
|
||||
|
||||
import java.util.List;
|
||||
|
||||
|
@ -15,17 +16,6 @@ public interface MetersMapper extends BaseMapper<Meters> {
|
|||
|
||||
@Insert("insert into meters (tbname, ts, groupid, location, current, voltage, phase) values(#{tbname}, #{ts}, #{groupid}, #{location}, #{current}, #{voltage}, #{phase})")
|
||||
int insertOne(Meters one);
|
||||
|
||||
@Insert({
|
||||
"<script>",
|
||||
"insert into meters (tbname, ts, groupid, location, current, voltage, phase) values ",
|
||||
"<foreach collection='list' item='item' index='index' separator=','>",
|
||||
"(#{item.tbname}, #{item.ts}, #{item.groupid}, #{item.location}, #{item.current}, #{item.voltage}, #{item.phase})",
|
||||
"</foreach>",
|
||||
"</script>"
|
||||
})
|
||||
int insertBatch(@Param("list") List<Meters> metersList);
|
||||
|
||||
@Update("drop stable if exists meters")
|
||||
void dropTable();
|
||||
}
|
||||
|
|
|
@ -11,9 +11,6 @@ public interface TemperatureMapper extends BaseMapper<Temperature> {
|
|||
@Update("CREATE TABLE if not exists temperature(ts timestamp, temperature float) tags(location nchar(64), tbIndex int)")
|
||||
int createSuperTable();
|
||||
|
||||
@Update("create table #{tbName} using temperature tags( #{location}, #{tbindex})")
|
||||
int createTable(@Param("tbName") String tbName, @Param("location") String location, @Param("tbindex") int tbindex);
|
||||
|
||||
@Update("drop table if exists temperature")
|
||||
void dropSuperTable();
|
||||
|
||||
|
|
|
@ -10,7 +10,7 @@ public interface WeatherMapper extends BaseMapper<Weather> {
|
|||
@Update("CREATE TABLE if not exists weather(ts timestamp, temperature float, humidity int, location nchar(100))")
|
||||
int createTable();
|
||||
|
||||
@Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location})")
|
||||
@Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location, jdbcType=NCHAR})")
|
||||
int insertOne(Weather one);
|
||||
|
||||
@Update("drop table if exists weather")
|
||||
|
|
|
@ -0,0 +1,19 @@
|
|||
package com.taosdata.example.mybatisplusdemo.service;
|
||||
|
||||
import org.springframework.beans.factory.annotation.Autowired;
|
||||
import org.springframework.stereotype.Service;
|
||||
|
||||
import javax.sql.DataSource;
|
||||
import java.sql.Connection;
|
||||
import java.sql.SQLException;
|
||||
|
||||
@Service
|
||||
public class DatabaseConnectionService {
|
||||
|
||||
@Autowired
|
||||
private DataSource dataSource;
|
||||
|
||||
public Connection getConnection() throws SQLException {
|
||||
return dataSource.getConnection();
|
||||
}
|
||||
}
|
|
@ -0,0 +1,23 @@
|
|||
package com.taosdata.example.mybatisplusdemo.service;
|
||||
|
||||
import org.springframework.beans.factory.annotation.Autowired;
|
||||
import org.springframework.stereotype.Service;
|
||||
|
||||
import java.sql.Connection;
|
||||
import java.sql.SQLException;
|
||||
import java.sql.Statement;
|
||||
|
||||
@Service
|
||||
public class TemperatureService {
|
||||
@Autowired
|
||||
private DatabaseConnectionService databaseConnectionService;
|
||||
|
||||
public void createTable(String tableName, String location, int tbIndex) throws SQLException {
|
||||
|
||||
|
||||
try (Connection connection = databaseConnectionService.getConnection();
|
||||
Statement statement = connection.createStatement()) {
|
||||
statement.executeUpdate("create table " + tableName + " using temperature tags( '" + location +"', " + tbIndex + ")");
|
||||
}
|
||||
}
|
||||
}
|
|
@ -5,6 +5,7 @@ import com.baomidou.mybatisplus.core.metadata.IPage;
|
|||
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
|
||||
import com.taosdata.example.mybatisplusdemo.domain.Meters;
|
||||
import com.taosdata.example.mybatisplusdemo.domain.Weather;
|
||||
import org.apache.ibatis.executor.BatchResult;
|
||||
import org.junit.Assert;
|
||||
import org.junit.Before;
|
||||
import org.junit.Test;
|
||||
|
@ -18,6 +19,8 @@ import java.util.LinkedList;
|
|||
import java.util.List;
|
||||
import java.util.Random;
|
||||
|
||||
import static java.sql.Statement.SUCCESS_NO_INFO;
|
||||
|
||||
@RunWith(SpringJUnit4ClassRunner.class)
|
||||
@SpringBootTest
|
||||
public class MetersMapperTest {
|
||||
|
@ -63,8 +66,19 @@ public class MetersMapperTest {
|
|||
metersList.add(one);
|
||||
|
||||
}
|
||||
int affectRows = mapper.insertBatch(metersList);
|
||||
Assert.assertEquals(100, affectRows);
|
||||
List<BatchResult> affectRowsList = mapper.insert(metersList, 10000);
|
||||
|
||||
long totalAffectedRows = 0;
|
||||
for (BatchResult batchResult : affectRowsList) {
|
||||
int[] updateCounts = batchResult.getUpdateCounts();
|
||||
for (int status : updateCounts) {
|
||||
if (status == SUCCESS_NO_INFO) {
|
||||
totalAffectedRows++;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Assert.assertEquals(100, totalAffectedRows);
|
||||
}
|
||||
|
||||
@Test
|
||||
|
@ -93,7 +107,7 @@ public class MetersMapperTest {
|
|||
|
||||
@Test
|
||||
public void testSelectCount() {
|
||||
int count = mapper.selectCount(null);
|
||||
long count = mapper.selectCount(null);
|
||||
// Assert.assertEquals(5, count);
|
||||
System.out.println(count);
|
||||
}
|
||||
|
|
|
@ -4,6 +4,7 @@ import com.baomidou.mybatisplus.core.conditions.query.QueryWrapper;
|
|||
import com.baomidou.mybatisplus.core.metadata.IPage;
|
||||
import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
|
||||
import com.taosdata.example.mybatisplusdemo.domain.Temperature;
|
||||
import com.taosdata.example.mybatisplusdemo.service.TemperatureService;
|
||||
import org.junit.After;
|
||||
import org.junit.Assert;
|
||||
import org.junit.Before;
|
||||
|
@ -13,6 +14,8 @@ import org.springframework.beans.factory.annotation.Autowired;
|
|||
import org.springframework.boot.test.context.SpringBootTest;
|
||||
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
|
||||
|
||||
import java.sql.ResultSet;
|
||||
import java.sql.SQLException;
|
||||
import java.sql.Timestamp;
|
||||
import java.util.HashMap;
|
||||
import java.util.List;
|
||||
|
@ -22,18 +25,20 @@ import java.util.Random;
|
|||
@RunWith(SpringJUnit4ClassRunner.class)
|
||||
@SpringBootTest
|
||||
public class TemperatureMapperTest {
|
||||
@Autowired
|
||||
private TemperatureService temperatureService;
|
||||
|
||||
private static Random random = new Random(System.currentTimeMillis());
|
||||
private static String[] locations = {"北京", "上海", "深圳", "广州", "杭州"};
|
||||
|
||||
@Before
|
||||
public void before() {
|
||||
public void before() throws SQLException {
|
||||
mapper.dropSuperTable();
|
||||
// create table temperature
|
||||
mapper.createSuperTable();
|
||||
// create table t_X using temperature
|
||||
for (int i = 0; i < 10; i++) {
|
||||
mapper.createTable("t" + i, locations[random.nextInt(locations.length)], i);
|
||||
temperatureService.createTable("t" + i, locations[i % locations.length], i);
|
||||
}
|
||||
// insert into table
|
||||
int affectRows = 0;
|
||||
|
@ -107,7 +112,7 @@ public class TemperatureMapperTest {
|
|||
* **/
|
||||
@Test
|
||||
public void testSelectCount() {
|
||||
int count = mapper.selectCount(null);
|
||||
long count = mapper.selectCount(null);
|
||||
Assert.assertEquals(10, count);
|
||||
}
|
||||
|
||||
|
|
|
@ -52,7 +52,7 @@ public class WeatherMapperTest {
|
|||
one.setTemperature(random.nextFloat() * 50);
|
||||
one.setHumidity(random.nextInt(100));
|
||||
one.setLocation("望京");
|
||||
int affectRows = mapper.insert(one);
|
||||
int affectRows = mapper.insertOne(one);
|
||||
Assert.assertEquals(1, affectRows);
|
||||
}
|
||||
|
||||
|
@ -82,7 +82,7 @@ public class WeatherMapperTest {
|
|||
|
||||
@Test
|
||||
public void testSelectCount() {
|
||||
int count = mapper.selectCount(null);
|
||||
long count = mapper.selectCount(null);
|
||||
// Assert.assertEquals(5, count);
|
||||
System.out.println(count);
|
||||
}
|
||||
|
|
|
@ -5,7 +5,7 @@
|
|||
<parent>
|
||||
<groupId>org.springframework.boot</groupId>
|
||||
<artifactId>spring-boot-starter-parent</artifactId>
|
||||
<version>2.6.15</version>
|
||||
<version>2.7.18</version>
|
||||
<relativePath/> <!-- lookup parent from repository -->
|
||||
</parent>
|
||||
<groupId>com.taosdata.example</groupId>
|
||||
|
@ -34,7 +34,7 @@
|
|||
<dependency>
|
||||
<groupId>org.mybatis.spring.boot</groupId>
|
||||
<artifactId>mybatis-spring-boot-starter</artifactId>
|
||||
<version>2.1.1</version>
|
||||
<version>2.3.2</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
|
@ -70,7 +70,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
</dependency>
|
||||
|
||||
<dependency>
|
||||
|
|
|
@ -50,13 +50,6 @@
|
|||
), groupId int)
|
||||
</update>
|
||||
|
||||
<update id="createTable" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
|
||||
create table if not exists test.t#{groupId} using test.weather tags
|
||||
(
|
||||
#{location},
|
||||
#{groupId}
|
||||
)
|
||||
</update>
|
||||
|
||||
<select id="select" resultMap="BaseResultMap">
|
||||
select * from test.weather order by ts desc
|
||||
|
@ -69,8 +62,8 @@
|
|||
</select>
|
||||
|
||||
<insert id="insert" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
|
||||
insert into test.t#{groupId} (ts, temperature, humidity, note, bytes)
|
||||
values (#{ts}, ${temperature}, ${humidity}, #{note}, #{bytes})
|
||||
insert into test.t${groupId} (ts, temperature, humidity, note, bytes)
|
||||
values (#{ts}, #{temperature}, #{humidity}, #{note}, #{bytes})
|
||||
</insert>
|
||||
|
||||
<select id="getSubTables" resultType="String">
|
||||
|
|
|
@ -0,0 +1,19 @@
|
|||
package com.taosdata.example.springbootdemo.service;
|
||||
|
||||
import org.springframework.beans.factory.annotation.Autowired;
|
||||
import org.springframework.stereotype.Service;
|
||||
|
||||
import javax.sql.DataSource;
|
||||
import java.sql.Connection;
|
||||
import java.sql.SQLException;
|
||||
|
||||
@Service
|
||||
public class DatabaseConnectionService {
|
||||
|
||||
@Autowired
|
||||
private DataSource dataSource;
|
||||
|
||||
public Connection getConnection() throws SQLException {
|
||||
return dataSource.getConnection();
|
||||
}
|
||||
}
|
|
@ -6,6 +6,9 @@ import org.springframework.beans.factory.annotation.Autowired;
|
|||
import org.springframework.stereotype.Service;
|
||||
|
||||
import java.nio.charset.StandardCharsets;
|
||||
import java.sql.Connection;
|
||||
import java.sql.SQLException;
|
||||
import java.sql.Statement;
|
||||
import java.sql.Timestamp;
|
||||
import java.util.List;
|
||||
import java.util.Map;
|
||||
|
@ -16,6 +19,9 @@ public class WeatherService {
|
|||
|
||||
@Autowired
|
||||
private WeatherMapper weatherMapper;
|
||||
|
||||
@Autowired
|
||||
private DatabaseConnectionService databaseConnectionService;
|
||||
private Random random = new Random(System.currentTimeMillis());
|
||||
private String[] locations = {"北京", "上海", "广州", "深圳", "天津"};
|
||||
|
||||
|
@ -32,7 +38,7 @@ public class WeatherService {
|
|||
weather.setGroupId(i % locations.length);
|
||||
weather.setNote("note-" + i);
|
||||
weather.setBytes(locations[random.nextInt(locations.length)].getBytes(StandardCharsets.UTF_8));
|
||||
weatherMapper.createTable(weather);
|
||||
createTable(weather);
|
||||
count += weatherMapper.insert(weather);
|
||||
}
|
||||
return count;
|
||||
|
@ -78,4 +84,14 @@ public class WeatherService {
|
|||
weather.setLocation(location);
|
||||
return weather;
|
||||
}
|
||||
|
||||
public void createTable(Weather weather) {
|
||||
try (Connection connection = databaseConnectionService.getConnection();
|
||||
Statement statement = connection.createStatement()) {
|
||||
String tableName = "t" + weather.getGroupId();
|
||||
statement.executeUpdate("create table if not exists " + tableName + " using test.weather tags( '" + weather.getLocation() +"', " + weather.getGroupId() + ")");
|
||||
} catch (SQLException e) {
|
||||
throw new RuntimeException(e);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@ -4,8 +4,8 @@
|
|||
#spring.datasource.username=root
|
||||
#spring.datasource.password=taosdata
|
||||
# datasource config - JDBC-RESTful
|
||||
spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
|
||||
spring.datasource.url=jdbc:TAOS-RS://localhost:6041/test
|
||||
spring.datasource.driver-class-name=com.taosdata.jdbc.ws.WebSocketDriver
|
||||
spring.datasource.url=jdbc:TAOS-WS://localhost:6041/test
|
||||
spring.datasource.username=root
|
||||
spring.datasource.password=taosdata
|
||||
spring.datasource.druid.initial-size=5
|
||||
|
|
|
@ -67,7 +67,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
<!-- <scope>system</scope>-->
|
||||
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
|
||||
</dependency>
|
||||
|
@ -77,13 +77,6 @@
|
|||
<artifactId>fastjson</artifactId>
|
||||
<version>1.2.83</version>
|
||||
</dependency>
|
||||
<!-- mysql: just for test -->
|
||||
<dependency>
|
||||
<groupId>mysql</groupId>
|
||||
<artifactId>mysql-connector-java</artifactId>
|
||||
<version>8.0.28</version>
|
||||
<scope>test</scope>
|
||||
</dependency>
|
||||
<!-- log4j -->
|
||||
<dependency>
|
||||
<groupId>org.apache.logging.log4j</groupId>
|
||||
|
|
|
@ -198,7 +198,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
", current: " + rowData.getFloat(1) +
|
||||
", voltage: " + rowData.getInt(2) +
|
||||
", phase: " + rowData.getFloat(3) +
|
||||
", location: " + new String(rowData.getBinary(4)));
|
||||
", location: " + rowData.getString(4));
|
||||
sb.append("\n");
|
||||
return sb.toString();
|
||||
});
|
||||
|
@ -273,7 +273,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
", current: " + row.getFloat(1) +
|
||||
", voltage: " + row.getInt(2) +
|
||||
", phase: " + row.getFloat(3) +
|
||||
", location: " + new String(row.getBinary(4)));
|
||||
", location: " + row.getString(4));
|
||||
sb.append("\n");
|
||||
totalVoltage.addAndGet(row.getInt(2));
|
||||
}
|
||||
|
@ -311,7 +311,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
", current: " + rowData.getFloat(1) +
|
||||
", voltage: " + rowData.getInt(2) +
|
||||
", phase: " + rowData.getFloat(3) +
|
||||
", location: " + new String(rowData.getBinary(4)));
|
||||
", location: " + rowData.getString(4));
|
||||
sb.append("\n");
|
||||
totalVoltage.addAndGet(rowData.getInt(2));
|
||||
return sb.toString();
|
||||
|
@ -353,7 +353,7 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
", current: " + row.getFloat(1) +
|
||||
", voltage: " + row.getInt(2) +
|
||||
", phase: " + row.getFloat(3) +
|
||||
", location: " + new String(row.getBinary(4)));
|
||||
", location: " + row.getString(4));
|
||||
sb.append("\n");
|
||||
totalVoltage.addAndGet(row.getInt(2));
|
||||
}
|
||||
|
@ -489,9 +489,9 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
" `current` FLOAT," +
|
||||
" voltage INT," +
|
||||
" phase FLOAT," +
|
||||
" location VARBINARY," +
|
||||
" location VARCHAR(255)," +
|
||||
" groupid INT," +
|
||||
" tbname VARBINARY" +
|
||||
" tbname VARCHAR(255)" +
|
||||
") WITH (" +
|
||||
" 'connector' = 'tdengine-connector'," +
|
||||
" 'td.jdbc.url' = 'jdbc:TAOS-WS://localhost:6041/power?user=root&password=taosdata'," +
|
||||
|
@ -506,9 +506,9 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
" `current` FLOAT," +
|
||||
" voltage INT," +
|
||||
" phase FLOAT," +
|
||||
" location VARBINARY," +
|
||||
" location VARCHAR(255)," +
|
||||
" groupid INT," +
|
||||
" tbname VARBINARY" +
|
||||
" tbname VARCHAR(255)" +
|
||||
") WITH (" +
|
||||
" 'connector' = 'tdengine-connector'," +
|
||||
" 'td.jdbc.mode' = 'sink'," +
|
||||
|
@ -535,9 +535,9 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
" `current` FLOAT," +
|
||||
" voltage INT," +
|
||||
" phase FLOAT," +
|
||||
" location VARBINARY," +
|
||||
" location VARCHAR(255)," +
|
||||
" groupid INT," +
|
||||
" tbname VARBINARY" +
|
||||
" tbname VARCHAR(255)" +
|
||||
") WITH (" +
|
||||
" 'connector' = 'tdengine-connector'," +
|
||||
" 'bootstrap.servers' = 'localhost:6041'," +
|
||||
|
@ -554,12 +554,12 @@ splitSql.setSelect("ts, current, voltage, phase, groupid, location")
|
|||
" `current` FLOAT," +
|
||||
" voltage INT," +
|
||||
" phase FLOAT," +
|
||||
" location VARBINARY," +
|
||||
" location VARCHAR(255)," +
|
||||
" groupid INT," +
|
||||
" tbname VARBINARY" +
|
||||
" tbname VARCHAR(255)" +
|
||||
") WITH (" +
|
||||
" 'connector' = 'tdengine-connector'," +
|
||||
" 'td.jdbc.mode' = 'cdc'," +
|
||||
" 'td.jdbc.mode' = 'sink'," +
|
||||
" 'td.jdbc.url' = 'jdbc:TAOS-WS://localhost:6041/power_sink?user=root&password=taosdata'," +
|
||||
" 'sink.db.name' = 'power_sink'," +
|
||||
" 'sink.supertable.name' = 'sink_meters'" +
|
||||
|
|
|
@ -22,7 +22,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.5.2</version>
|
||||
<version>3.5.3</version>
|
||||
</dependency>
|
||||
<!-- ANCHOR_END: dep-->
|
||||
|
||||
|
|
|
@ -2,6 +2,7 @@ package com.taos.example;
|
|||
|
||||
import com.taosdata.jdbc.ws.TSWSPreparedStatement;
|
||||
|
||||
import java.math.BigInteger;
|
||||
import java.sql.*;
|
||||
import java.util.Random;
|
||||
|
||||
|
@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo {
|
|||
"binary_col BINARY(100), " +
|
||||
"nchar_col NCHAR(100), " +
|
||||
"varbinary_col VARBINARY(100), " +
|
||||
"geometry_col GEOMETRY(100)) " +
|
||||
"geometry_col GEOMETRY(100)," +
|
||||
"utinyint_col tinyint unsigned," +
|
||||
"usmallint_col smallint unsigned," +
|
||||
"uint_col int unsigned," +
|
||||
"ubigint_col bigint unsigned" +
|
||||
") " +
|
||||
"tags (" +
|
||||
"int_tag INT, " +
|
||||
"double_tag DOUBLE, " +
|
||||
|
@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo {
|
|||
"binary_tag BINARY(100), " +
|
||||
"nchar_tag NCHAR(100), " +
|
||||
"varbinary_tag VARBINARY(100), " +
|
||||
"geometry_tag GEOMETRY(100))"
|
||||
"geometry_tag GEOMETRY(100)," +
|
||||
"utinyint_tag tinyint unsigned," +
|
||||
"usmallint_tag smallint unsigned," +
|
||||
"uint_tag int unsigned," +
|
||||
"ubigint_tag bigint unsigned" +
|
||||
")"
|
||||
};
|
||||
private static final int numOfSubTable = 10, numOfRow = 10;
|
||||
|
||||
|
@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo {
|
|||
// set table name
|
||||
pstmt.setTableName("ntb_json_" + i);
|
||||
// set tags
|
||||
pstmt.setTagJson(1, "{\"device\":\"device_" + i + "\"}");
|
||||
pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}");
|
||||
// set columns
|
||||
long current = System.currentTimeMillis();
|
||||
for (int j = 0; j < numOfRow; j++) {
|
||||
|
@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo {
|
|||
}
|
||||
|
||||
private static void stmtAll(Connection conn) throws SQLException {
|
||||
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?)";
|
||||
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)";
|
||||
|
||||
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
|
||||
|
||||
// set table name
|
||||
pstmt.setTableName("ntb");
|
||||
// set tags
|
||||
pstmt.setTagInt(1, 1);
|
||||
pstmt.setTagDouble(2, 1.1);
|
||||
pstmt.setTagBoolean(3, true);
|
||||
pstmt.setTagString(4, "binary_value");
|
||||
pstmt.setTagNString(5, "nchar_value");
|
||||
pstmt.setTagVarbinary(6, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
|
||||
pstmt.setTagGeometry(7, new byte[] {
|
||||
pstmt.setTagInt(0, 1);
|
||||
pstmt.setTagDouble(1, 1.1);
|
||||
pstmt.setTagBoolean(2, true);
|
||||
pstmt.setTagString(3, "binary_value");
|
||||
pstmt.setTagNString(4, "nchar_value");
|
||||
pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
|
||||
pstmt.setTagGeometry(6, new byte[] {
|
||||
0x01, 0x01, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x59,
|
||||
0x40, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x59, 0x40 });
|
||||
pstmt.setTagShort(7, (short)255);
|
||||
pstmt.setTagInt(8, 65535);
|
||||
pstmt.setTagLong(9, 4294967295L);
|
||||
pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615"));
|
||||
|
||||
long current = System.currentTimeMillis();
|
||||
|
||||
|
@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo {
|
|||
0x00, 0x00, 0x00, 0x59,
|
||||
0x40, 0x00, 0x00, 0x00,
|
||||
0x00, 0x00, 0x00, 0x59, 0x40 });
|
||||
pstmt.setShort(9, (short)255);
|
||||
pstmt.setInt(10, 65535);
|
||||
pstmt.setLong(11, 4294967295L);
|
||||
pstmt.setObject(12, new BigInteger("18446744073709551615"));
|
||||
pstmt.addBatch();
|
||||
pstmt.executeBatch();
|
||||
System.out.println("Successfully inserted rows to example_all_type_stmt.ntb");
|
||||
|
|
|
@ -8,7 +8,7 @@ TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-se
|
|||
|
||||
TDengine takes full advantage of the characteristics of time-series data, introducing the concepts of "one table per data collection point" and "supertable" and designing an innovative storage engine, which greatly improves the efficiency of writing, querying, and storing data. To understand and use TDengine correctly, no matter what your role is, please read the [Data Model](./basic/model) chapter carefully.
|
||||
|
||||
If you are a development engineer, be sure to read the [Developer's Guide](./develop) chapter carefully. It covers database connections, data modeling, inserting data, querying, stream processing, caching, data subscription, user-defined functions, and more, with sample code in various programming languages. In most cases, you can simply copy and paste the sample code, adapt it slightly for your application, and it will run. For more details on the REST API and the connectors for various programming languages, see the [Connectors](./reference/connector) chapter.
|
||||
If you are a development engineer, be sure to read the [Developer's Guide](./develop) chapter carefully. It covers database connections, data modeling, writing data, querying, stream processing, caching, data subscription, user-defined functions, and more, with sample code in various programming languages. In most cases, simply copy and paste the sample code, adapt it slightly for your application, and it will run. For more details on the REST API and the connectors for various programming languages, see the [Connectors](./reference/connector) chapter.
|
||||
|
||||
We already live in the era of big data. Vertical scaling can no longer meet ever-growing business needs; every system must be able to scale horizontally, making clustering an indispensable feature of big data and database systems. The TDengine team has not only implemented clustering but also open-sourced this important core capability. For how to deploy, manage, and maintain a TDengine cluster, refer carefully to the [Operations](./operation) chapter.
|
||||
|
||||
|
@ -16,7 +16,7 @@ TDengine 采用 SQL 作为查询语言,大大降低学习成本、降低迁移
|
|||
|
||||
If you are a system administrator who cares about installation, upgrades, fault tolerance and disaster recovery, data import and export, configuration parameters, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer carefully to the [Operations Guide](./operation) chapter.
|
||||
|
||||
If you are interested in database kernel design, or are an open-source enthusiast, we recommend reading the [Inside TDengine](./tdinternal) chapter carefully. It elaborates on everything from the distributed architecture to the storage engine, query engine, and data subscription, through to the stream processing engine. We suggest reading it alongside the TDengine source code on GitHub to gain a deep understanding of TDengine's design and implementation, and you are welcome to join the open-source community and contribute code.
|
||||
If you are interested in database kernel design, or are an open-source enthusiast, we recommend reading the [Inside TDengine](./tdinternal) chapter carefully. It elaborates on everything from the distributed architecture to the storage engine, query engine, and data subscription, through to the stream processing engine. We suggest reading it alongside the TDengine source code on GitHub to gain a deep understanding of TDengine's design and implementation, and you are welcome to join the open-source community and contribute code.
|
||||
|
||||
Finally, as open-source software, TDengine welcomes everyone's participation. If you find any errors or unclear descriptions in the documentation, click "Edit this document" at the bottom of each page to fix them directly.
|
||||
|
||||
|
|
|
@ -9,7 +9,7 @@ toc_max_heading_level: 4
|
|||
|
||||
Time-series data is a sequence of data points arranged in the order in which they occur over time. In daily life, the data collected by devices and sensors is time-series data, as are securities trading records. Processing time-series data is therefore nothing new; especially in industrial automation and the securities and finance industries, specialized time-series processing software has long existed, such as PI System in the industrial sector and KDB in finance.
|
||||
|
||||
This time-series data is generated periodically, quasi-periodically, or by event triggers, with sampling frequencies that can be high or low. It is generally sent to a server for aggregation and real-time analysis and processing, providing real-time monitoring or alerting on system operation and forecasting of stock market movements. The data can also be retained long-term for offline analysis: for example, analyzing equipment operating rhythm and output over a time window to further optimize configurations and improve production efficiency; analyzing the cost distribution of the production process over a period to reduce production costs; or collecting equipment anomalies over a period and, combined with business analysis, identifying potential safety hazards to reduce downtime.
|
||||
This time-series data is generated periodically, quasi-periodically, or by event triggers, with sampling frequencies that can be high or low. It is generally sent to a server for aggregation and real-time analysis and processing, providing real-time monitoring or alerting on system operation and forecasting of stock market movements. The data can also be retained long-term for offline analysis: for example, analyzing equipment operating rhythm and output over a time window to further optimize configurations and improve production efficiency; analyzing the cost distribution of the production process over a period to reduce production costs; or collecting equipment anomalies over a period and, combined with business analysis, identifying potential safety hazards to reduce downtime.
|
||||
|
||||
Over the past two decades, as data communication costs have fallen sharply and various sensing technologies and smart devices have emerged, especially driven by IoT and Industry 4.0, industrial and IoT enterprises have equipped every key point with sensors to monitor equipment, environments, production lines, and entire systems. From wristbands, ride sharing, smart meters, and environmental monitoring devices to elevators, CNC machine tools, excavators, and industrial production lines, massive volumes of real-time data are produced continuously, and the volume of time-series data is growing exponentially. Take smart meters as an example: a smart meter samples every 15 minutes, automatically generating 96 records per day. China now has more than 1 billion smart meters, producing 96 billion time-series records every day. A connected car typically samples data every 10 to 15 seconds and sends it to the cloud, easily generating 1,000 records a day. Assuming China has 200 million connected vehicles, they would generate a total of 200 billion or more time-series records per day.
|
||||
|
||||
|
@ -33,7 +33,7 @@ toc_max_heading_level: 4
|
|||
|
||||
7. Users care about trends over a period of time: for a bank transaction record, or a Weibo or WeChat post, every single entry matters to its user. But for IoT and industrial time-series data, each data point changes little from the next; what people care about is the trend over a period, say the past five minutes or the past hour, rather than any single point in time.
|
||||
|
||||
8. Data has a retention period: collected data usually has a duration-based retention policy, for example keeping only one day, one week, one month, one year, or even longer. The value of such data is often determined by its time window, so data outside the important window can be treated as expired and deleted in whole blocks.
|
||||
8. Data has a retention period: collected data usually has a duration-based retention policy, for example keeping only one day, one week, one month, one year, or even longer. The value of such data is often determined by its time window, so data outside the important window can be treated as expired and deleted in whole blocks.
|
||||
|
||||
9. Real-time analysis and computation are required: most internet big data applications rely on offline analysis, and even where real-time analysis exists, the requirements are not strict. For example, user profiling can be done after some behavioral data has accumulated; a day earlier or later barely changes the result. But for industrial and IoT platform applications and trading systems, real-time computation requirements are often very high, because alerts and monitoring must be driven by the computed results to avoid accidents and missed decision windows.
|
||||
|
||||
|
@ -47,7 +47,7 @@ toc_max_heading_level: 4
|
|||
|
||||
1. Power and energy: this field is broad, and whether in generation, transmission, distribution, consumption, or other stages, all kinds of electrical equipment generate large amounts of time-series data. Take wind power: a wind turbine, as a large piece of equipment, may have hundreds of data collection points, so the daily volume of time-series data is enormous, and monitoring and analyzing it is essential to keep the generation stage accurate. On the consumption side, current, voltage, and other data collected in real time from smart meters are computed quickly to track the latest total consumption and peak/shoulder/off-peak usage, and to judge whether devices are operating normally. Sometimes the power system may need to pull a full year of historical data and use machine learning and other techniques to analyze consumption habits, forecast load, design energy-saving schemes, and help utilities plan supply sensibly; or pull last month's peak/shoulder/off-peak usage for periodic billing at different rates. These are all typical applications of time-series data in the power and energy sector.
|
||||
|
||||
2. Connected vehicles/rail transit: vehicle GPS, speed, fuel consumption, fault information, and so on are all typical time-series data, and sound, scientific analysis of them provides strong support for vehicle management and optimization. However, the number of collection points varies from hundreds to thousands across vehicle models, and as more and more transportation devices come online, securely uploading, storing, querying, and analyzing this massive time-series data has become a pressing industry problem. For the vehicles themselves, proper time-series processing enables trajectory tracking, autonomous driving, fault warning, and more. It also supports the surrounding services well: for example, in next-generation smart subway management systems, time-series data collected from sensors throughout a station can display the crowding, temperature, and comfort of each car in real time, letting passengers choose the most comfortable journey, while operators can better schedule passenger flow.
|
||||
2. Connected vehicles/rail transit: vehicle GPS, speed, fuel consumption, fault information, and so on are all typical time-series data, and sound, scientific analysis provides strong support for vehicle management and optimization. However, the number of collection points varies from hundreds to thousands across vehicle models, and as more and more transportation devices come online, securely uploading, storing, querying, and analyzing this massive time-series data has become a pressing industry problem. For the vehicles themselves, proper time-series processing enables trajectory tracking, autonomous driving, fault warning, and more. It also supports the surrounding services well: for example, in next-generation smart subway management systems, time-series data collected from sensors throughout a station can display the crowding, temperature, and comfort of each car in real time, letting passengers choose the most comfortable journey, while operators can better schedule passenger flow.
|
||||
|
||||
3. Smart manufacturing: over the past decade or so, digitalization in many traditional industrial enterprises has advanced considerably, with a single factory growing from a few thousand collection points to hundreds of thousands or even millions today; some remote operations scenarios involve tens of thousands of devices and tens of millions of points to collect and store. All of this is typical time-series data. For an industrial big data system as a whole, time-series processing is quite complex. Taking data collection in the tobacco industry as an example, industrial data protocols vary widely and collection units differ by device type. Real-time processing capacity struggles to keep pace as collection points keep increasing, while high performance, high availability, scalability, and many other properties must be maintained at the same time. From another angle, if a big data platform can overcome these difficulties and meet enterprises' needs for time-series storage and analysis, it can help them achieve smarter, more automated production and make a qualitative leap.
|
||||
|
||||
|
@ -55,7 +55,7 @@ toc_max_heading_level: 4
|
|||
|
||||
5. IT operations: in IT, infrastructure (such as servers, network devices, and storage devices) and running applications generate large volumes of time-series data. Monitoring this data quickly reveals the operating state and service availability of the infrastructure and applications, including whether systems are online and services responding normally; it also exposes performance metrics down to specific points, such as CPU utilization, memory utilization, disk space utilization, and network bandwidth utilization; and it can monitor error logs and abnormal events generated by the system, including intrusion detection, security event logs, and access control. Finally, with alert rules, administrators or operators are notified of the specifics in time, enabling early detection of problems, fault prevention, and performance optimization to keep systems running stably and reliably.
|
||||
|
||||
6. Finance: the financial sector is undergoing a revolution in data management. Its market data is typical time-series data; since market data often must be retained for 5 to 10 years, or even more than 30, and trading data from the major financial markets of countries/regions worldwide may need to be kept in full, the total volume is huge, easily reaching the TB level and creating bottlenecks in storage, querying, and elsewhere. In finance, quantitative trading platforms are among the revolutionary applications that best highlight the importance of time-series processing: by reading and analyzing large amounts of time-series market data, they respond to market changes in time, helping traders seize investment opportunities while avoiding unnecessary risks for steady asset growth. Capabilities include, but are not limited to, asset management, sentiment monitoring, stock backtesting, trading signal simulation, and automated report generation.
|
||||
6. Finance: the financial sector is undergoing a revolution in data management. Market data is typical time-series data; since market data often must be retained for 5 to 10 years, or even more than 30, and trading data from the major financial markets of countries/regions worldwide may need to be kept in full, the total volume is huge, easily reaching the TB level and creating bottlenecks in storage, querying, and elsewhere. In finance, quantitative trading platforms are among the revolutionary applications that best highlight the importance of time-series processing: by reading and analyzing large amounts of time-series market data, they respond to market changes in time, helping traders seize investment opportunities while avoiding unnecessary risks for steady asset growth. Capabilities include, but are not limited to, asset management, sentiment monitoring, stock backtesting, trading signal simulation, and automated report generation.
|
||||
|
||||
## Tools Required for Processing Time-Series Data
|
||||
|
||||
|
@ -71,11 +71,11 @@ toc_max_heading_level: 4
|
|||
|
||||
5. Cache: IoT, industrial, and financial applications need to display the latest state of certain devices or stocks in real time, so the platform needs caching technology for fast data access. The reason: time-series data volumes are so large that without caching, routine reads and filtering make computations such as monitoring devices' latest state very difficult and introduce large delays, defeating the purpose of "real time". Caching is therefore an indispensable part of a time-series processing platform, and Redis is one such commonly used caching tool.
|
||||
|
||||
Processing time-series data requires a series of modules working together, from data collection to storage, computation, analysis, and visualization, through to specialized time-series algorithm libraries; each stage has corresponding tool support. The choice of tools depends on specific business needs and data characteristics, and only by selecting and combining them sensibly can all kinds of time-series data be processed efficiently and the value behind the data be mined.
|
||||
Processing time-series data requires a series of modules working together, from data collection to storage, computation, analysis, and visualization, through to specialized time-series algorithm libraries; each stage has corresponding tool support. The choice of tools depends on specific business needs and data characteristics, and only by selecting and combining sensibly can all kinds of time-series data be processed efficiently and the value behind the data be mined.
|
||||
|
||||
## The Necessity of Specialized Time-Series Data Processing Tools
|
||||
|
||||
As noted in the section on the ten characteristics of time-series data, an excellent time-series big data platform must be able to handle all ten. The section on the tools required for processing time-series data introduced the main modules/components such a platform needs. Combining those two sections with real-world conditions, it becomes clear that processing massive time-series data is in fact a large and complex system.
|
||||
As noted in the section on the ten characteristics of time-series data, an excellent time-series big data platform must be able to handle all ten. The section on the tools required for processing time-series data introduced the main modules/components such a platform needs. Combining those two sections with real-world conditions, it becomes clear that processing massive time-series data is in fact a large and complex system.
|
||||
|
||||
In earlier years, many tools emerged to handle ever-growing internet data, the most popular being the Hadoop ecosystem. Besides the familiar Hadoop components such as HDFS, MapReduce, HBase, and Hive, general-purpose big data platforms often also use Kafka or other message queues, Redis or other caching software, and Flink or other real-time stream processors. For storage, some choose MongoDB, Cassandra, or other NoSQL databases. Such a typical big data platform can handle internet-industry applications quite well, such as the typical user profiling and sentiment analysis.
|
||||
|
||||
|
|
|
@ -14,7 +14,7 @@ TDengine 是一个高性能、分布式的时序数据库。通过集成的缓
|
|||
|
||||
TDengine OSS is an open-source, high-performance time-series database. Compared with other time-series databases, its core strengths are its open-source clustering, high performance, and cloud-native architecture. Beyond basic write, query, and storage capabilities, TDengine OSS also integrates advanced features such as caching, stream processing, and data subscription, which significantly simplify system design and reduce enterprises' R&D and operating costs.
|
||||
|
||||
Building on TDengine OSS, the enterprise edition TDengine Enterprise provides enhanced capabilities, including backup and restore, remote disaster recovery, multi-tier storage, views, access control, security encryption, IP allowlists, and support for data sources such as MQTT, OPC-UA, OPC-DA, PI, Wonderware, and Kafka. These features give enterprises a more comprehensive, secure, reliable, and efficient time-series data management solution. For more details, see [TDengine Enterprise](https://www.taosdata.com/tdengine-pro)
|
||||
Building on TDengine OSS, TDengine Enterprise provides enhanced capabilities, including backup and restore, remote disaster recovery, multi-tier storage, views, access control, security encryption, IP allowlists, and support for data sources such as MQTT, OPC-UA, OPC-DA, PI, Wonderware, and Kafka. These features give enterprises a more comprehensive, secure, reliable, and efficient time-series data management solution. For more details, see [TDengine Enterprise](https://www.taosdata.com/tdengine-pro).
|
||||
|
||||
In addition, TDengine Cloud, as a fully managed cloud service with storage and compute separated and billed independently, provides enterprise-grade tools and services, completely solving the operations challenge and making it especially suitable for small and medium-scale users. For more details, see [TDengine Cloud](https://cloud.taosdata.com/?utm_source=menu&utm_medium=webcn)
|
||||
|
||||
|
@ -30,19 +30,19 @@ TDengine 经过特别优化,以适应时间序列数据的独特需求,引
|
|||
|
||||
4. Stream processing: TDengine's stream processing engine offers the ability to process data streams in real time as they are written, supporting not only continuous queries but also event-driven stream computation. It provides a lightweight alternative to complex stream processing systems and can deliver millisecond-level computation latency even under high-throughput writes.
|
||||
|
||||
5. Data subscription: TDengine provides Kafka-like data subscription, but users can flexibly control the subscribed content through SQL and use the Kafka-equivalent API to subscribe to a table, a group of tables, all or some columns, or even an entire database. TDengine can replace deployments that would otherwise require integrating a message queue product, simplifying system design and reducing operating and maintenance costs.
|
||||
5. Data subscription: TDengine provides Kafka-like data subscription, but users can flexibly control the subscribed content through SQL and use the same API as Kafka to subscribe to a table, a group of tables, all or some columns, or even an entire database (see the SQL sketch after this list). TDengine can replace deployments that would otherwise require integrating a message queue product, simplifying system design and reducing operating and maintenance costs.
|
||||
|
||||
6. Visualization/BI: TDengine itself does not provide visualization or BI features. But through its RESTful API and standard JDBC and ODBC interfaces, TDengine can integrate seamlessly with Grafana, Google Data Studio, Power BI, Tableau, and domestic BI tools.
|
||||
6. Visualization/BI: TDengine itself does not provide visualization or BI features. But through its RESTful API and standard JDBC and ODBC interfaces, TDengine can integrate seamlessly with Grafana, Google Data Studio, Power BI, Tableau, and domestic BI tools.
|
||||
|
||||
7. Clustering: TDengine supports cluster deployment and can scale horizontally, linearly increasing processing capacity by adding nodes as business data grows. It also provides high availability through multi-replica technology and supports Kubernetes deployment. It additionally offers various operations tools to help administrators manage and maintain a robustly running cluster.
|
||||
7. Clustering: TDengine supports cluster deployment and can scale horizontally, linearly increasing processing capacity by adding nodes as business data grows. It also provides high availability through multi-replica technology, supports Kubernetes deployment, and offers various operations tools to help administrators manage and maintain a robustly running cluster.
|
||||
|
||||
8. Data migration: TDengine provides various convenient import and export capabilities, including script file import/export, data file import/export, and the taosdump tool.
|
||||
|
||||
9. Programming connectors: TDengine provides connectors for different languages, including C/C++, Java, Go, Node.js, Rust, Python, C#, R, and PHP. Most of these connectors support both native connections and WebSocket connections. TDengine also provides a RESTful interface, so applications in any language can access the database directly via HTTP requests.
|
||||
9. Programming connectors: TDengine provides connectors for multiple languages, including C/C++, Java, Go, Node.js, Rust, Python, C#, R, and PHP. Most of these connectors support both native connections and WebSocket connections. TDengine also provides a RESTful interface, so applications in any language can access the database directly via HTTP requests.
|
||||
|
||||
10. Data security: TDengine provides rich user and permission management to control different users' access to databases and tables, and an IP allowlist to restrict which servers each account can connect from. TDengine lets system administrators encrypt individual databases on demand; encryption is fully transparent to reads and writes with minimal performance impact. It also provides audit logging to record sensitive operations in the system.
|
||||
|
||||
11. Common tools: TDengine also provides an interactive command-line program (CLI) for managing clusters, checking system status, and running ad hoc queries, plus the stress-testing tool taosBenchmark for measuring TDengine's performance. TDengine additionally provides a graphical management interface that simplifies operation and management.
|
||||
11. Common tools: TDengine provides an interactive command-line program (CLI) for managing clusters, checking system status, and running ad hoc queries, plus the stress-testing tool taosBenchmark for measuring TDengine's performance. TDengine additionally provides a graphical management interface that simplifies operation and management.
|
||||
|
||||
12. Zero-code data ingestion: TDengine Enterprise offers rich data ingestion capabilities. Backed by a powerful data ingestion platform, multiple data sources can be connected with simple configuration and no code at all. Currently supported sources include OPC-UA, OPC-DA, PI, MQTT, Kafka, InfluxDB, OpenTSDB, MySQL, SQL Server, Oracle, Wonderware Historian, and MongoDB.
|
||||
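Editor's addition, not in the original: as referenced in item 5 above, a minimal sketch of an SQL-defined subscription topic, assuming a database `power` with a supertable `meters` (the schema used elsewhere in this diff):

```sql
-- Consumers then read power_topic through the Kafka-like subscription API
CREATE TOPIC power_topic AS SELECT ts, current, voltage FROM power.meters;
```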
|
||||
|
@ -63,8 +63,11 @@ TDengine 经过特别优化,以适应时间序列数据的独特需求,引
|
|||
6. Open-source core: TDengine's core code, including the clustering capability, is published under an open-source license. It has repeatedly topped GitHub's global trending chart, showing its popularity, and TDengine has an active developer community that strongly supports continued development and innovation.
|
||||
|
||||
By adopting TDengine, enterprises can significantly reduce the total cost of ownership of their big data platforms in typical scenarios such as IoT, connected vehicles, and the Industrial Internet, mainly in the following ways:
|
||||
|
||||
1. Savings from high performance: TDengine's outstanding write, query, and storage performance means the system needs far fewer compute and storage resources. This not only lowers hardware costs but also reduces energy consumption and maintenance expenses.
|
||||
|
||||
2. Cost benefits from standardization and compatibility: because TDengine supports standard SQL and integrates seamlessly with a great deal of third-party software, users can migrate existing systems to TDengine easily without rewriting large amounts of code. This standardization and compatibility greatly reduce learning and migration costs and shorten project timelines.
|
||||
|
||||
3. Lower costs from a simplified architecture: as a streamlined time-series data platform, TDengine integrates the necessary message queue, caching, and stream processing functions, avoiding the need to integrate many additional components. The simplified architecture significantly reduces system complexity, cutting R&D and operating costs and improving overall operational efficiency.
|
||||
|
||||
## Technology Ecosystem
|
||||
|
@ -78,7 +81,7 @@ TDengine 经过特别优化,以适应时间序列数据的独特需求,引
|
|||
<center><figcaption>Figure 1. TDengine technology ecosystem</figcaption></center>
|
||||
</figure>
|
||||
|
||||
In the figure above, on the left are various data collection tools and message queues, including OPC-UA, MQTT, Telegraf, and Kafka, whose data is continuously written into TDengine. On the right are visualization and BI tools, SCADA software, and applications. At the bottom are TDengine's own command-line program (CLI) and graphical management tool.
|
||||
In the figure above, on the left are various data collection tools and message queues, including OPC-UA, MQTT, Telegraf, and Kafka, whose data is continuously written into TDengine. On the right are visualization and BI tools, SCADA software, and applications. At the bottom are TDengine's own command-line program (CLI) and graphical management tool.
|
||||
|
||||
## Typical Use Cases
|
||||
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
---
|
||||
sidebar_label: Quick Experience with Docker
|
||||
sidebar_label: Quick Experience with Docker
|
||||
title: Quick Experience with TDengine Using Docker
|
||||
description: Use Docker to quickly experience TDengine's efficient writing and querying
|
||||
---
|
||||
|
@ -91,7 +91,7 @@ taosBenchmark 提供了丰富的选项,允许用户自定义测试参数,如
|
|||
taosBenchmark --help
|
||||
```
|
||||
|
||||
For detailed usage of taosBenchmark, see the [taosBenchmark Reference Manual](../../reference/tools/taosbenchmark)
|
||||
For detailed usage of taosBenchmark, see the [taosBenchmark Reference Manual](../../reference/tools/taosbenchmark)
|
||||
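Editor's sketch, not in the original: a hedged example of customizing the test parameters mentioned above; `-t` (number of subtables), `-n` (rows per subtable), and `-y` (answer yes to prompts) are flags documented in the taosBenchmark reference linked above:

```shell
# Write 100 subtables x 1000 rows without prompting for confirmation
taosBenchmark -t 100 -n 1000 -y
```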
|
||||
### Experience Queries
|
||||
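Editor's sketch, not in the original: a couple of example queries against the `test.meters` supertable that taosBenchmark creates by default (`location` is a tag, so grouping by it aggregates across subtables):

```sql
-- Total row count across all subtables
SELECT COUNT(*) FROM test.meters;
-- Aggregations grouped by the location tag
SELECT AVG(current), MAX(voltage), MIN(phase) FROM test.meters GROUP BY location;
```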
|
||||
|
|
|
@ -8,7 +8,7 @@ import Tabs from "@theme/Tabs";
|
|||
import TabItem from "@theme/TabItem";
|
||||
import PkgListV3 from "/components/PkgListV3";
|
||||
|
||||
The complete TDengine software package includes the server (taosd), the application driver (taosc), taosAdapter for connecting to third-party systems and providing a RESTful interface, the command-line program (CLI, taos), and some tooling. Currently, TDinsight installs and runs on Linux only, with Windows, macOS, and other systems to follow. Besides connectors for multiple languages, TDengine also provides a [RESTful interface](../../reference/connector/rest-api/) via [taosAdapter](../../reference/components/taosadapter/).
|
||||
The complete TDengine software package includes the server (taosd), the application driver (taosc), taosAdapter for connecting to third-party systems and providing a RESTful interface, the command-line program (TDengine CLI), and some tooling. Currently, TDinsight installs and runs on Linux only, with Windows, macOS, and other systems to follow. Besides connectors for multiple languages, TDengine also provides a [RESTful interface](../../reference/connector/rest-api/) via [taosAdapter](../../reference/components/taosadapter/).
|
||||
|
||||
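Editor's sketch, not in the original: the RESTful interface mentioned above can be exercised with a single request, assuming a local taosAdapter on the default REST port 6041 and default credentials:

```shell
# Execute SQL over HTTP via taosAdapter's REST endpoint
curl -u root:taosdata -d "select server_version()" http://localhost:6041/rest/sql
```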
For convenience, the standard server package includes taosd, taosAdapter, taosc, taos, taosdump, taosBenchmark, the TDinsight installation script, and sample code. If you only need the server program and C/C++ language support for client connections, you can also download just the Lite package.
|
||||
|
||||
|
@ -17,30 +17,27 @@ TDengine 完整的软件包包括服务端(taosd)、应用驱动(taosc)
|
|||
In addition, TDengine also provides pkg installers for the macOS x64/m1 platforms.
|
||||
|
||||
## Runtime Environment Requirements
|
||||
|
||||
On Linux systems, the minimum runtime requirements are as follows:
|
||||
1. Linux kernel version: 3.10.0-1160.83.1.el7.x86_64 or later
|
||||
2. glibc version: 2.17 or later
|
||||
|
||||
linux 内核版本 - 3.10.0-1160.83.1.el7.x86_64;
|
||||
|
||||
glibc 版本 - 2.17;
|
||||
|
||||
如果通过clone源码进行编译安装,还需要满足:
|
||||
|
||||
cmake版本 - 3.26.4或以上;
|
||||
|
||||
gcc 版本 - 9.3.1或以上;
|
||||
|
||||
如果通过 Clone 源码进行编译安装,还需要满足:
|
||||
1. cmake 版本:3.26.4 或以上
|
||||
2. gcc 版本:9.3.1 或以上
|
||||
|
||||
## 安装
|
||||
|
||||
**注意**
|
||||
|
||||
从TDengine 3.0.6.0 开始,不再提供单独的 taosTools 安装包,原 taosTools 安装包中包含的工具都在 TDengine-server 安装包中,如果需要请直接下载 TDengine -server 安装包。
|
||||
从 TDengine 3.0.6.0 开始,不再提供单独的 taosTools 安装包,原 taosTools 安装包中包含的工具都在 TDengine-server 安装包中,如果需要请直接下载 TDengine-server 安装包。
|
||||
|
||||
<Tabs>
|
||||
<TabItem label="Deb 安装" value="debinst">
|
||||
|
||||
1. 从列表中下载获得 Deb 安装包;
|
||||
1. 从列表中下载获得 Deb 安装包:
|
||||
<PkgListV3 type={6}/>
|
||||
|
||||
2. 进入到安装包所在目录,执行如下的安装命令:
|
||||
|
||||
> 请将 `<version>` 替换为下载的安装包版本
|
||||
|
@ -53,8 +50,9 @@ sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
|
|||
|
||||
<TabItem label="RPM 安装" value="rpminst">
|
||||
|
||||
1. 从列表中下载获得 RPM 安装包;
|
||||
1. 从列表中下载获得 RPM 安装包:
|
||||
<PkgListV3 type={5}/>
|
||||
|
||||
2. 进入到安装包所在目录,执行如下的安装命令:
|
||||
|
||||
> 请将 `<version>` 替换为下载的安装包版本
|
||||
|
@ -67,7 +65,7 @@ sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
|
|||
|
||||
<TabItem label="tar.gz 安装" value="tarinst">
|
||||
|
||||
1. 从列表中下载获得 tar.gz 安装包;
|
||||
1. 从列表中下载获得 tar.gz 安装包:
|
||||
<PkgListV3 type={0}/>
|
||||
2. 进入到安装包所在目录,使用 `tar` 解压安装包;
|
||||
3. 解压完成后,进入子目录,执行其中的 install.sh 安装脚本。
|
||||
|
@ -126,14 +124,14 @@ apt-get 方式只适用于 Debian 或 Ubuntu 系统。
|
|||
**注意**
|
||||
- 目前 TDengine 在 Windows 平台上只支持 Windows Server 2016/2019 和 Windows 10/11。
|
||||
- 从 TDengine 3.1.0.0 开始,只提供 Windows 客户端安装包。如果需要 Windows 服务端安装包,请联系 TDengine 销售团队升级为企业版。
|
||||
- Windows 上需要安装 VC 运行时库,可在此下载安装 [VC运行时库](https://learn.microsoft.com/zh-cn/cpp/windows/latest-supported-vc-redist?view=msvc-170), 如果已经安装此运行库可忽略。
|
||||
- Windows 上需要安装 VC 运行时库,可在此下载安装 [VC 运行时库](https://learn.microsoft.com/zh-cn/cpp/windows/latest-supported-vc-redist?view=msvc-170),如果已经安装此运行库可忽略。
|
||||
|
||||
按照以下步骤安装:
|
||||
|
||||
1. 从列表中下载获得 exe 安装程序;
|
||||
1. 从列表中下载获得 exe 安装程序:
|
||||
<PkgListV3 type={3}/>
|
||||
2. 运行可执行程序来安装 TDengine。
|
||||
Note: 从 3.0.1.7 开始,只提供 TDengine 客户端的 Windows 客户端的下载。想要使用TDengine 服务端的 Windows 版本,请联系销售升级为企业版本。
|
||||
Note: 从 3.0.1.7 版本开始,只提供 TDengine 客户端的 Windows 客户端的下载。想要使用 TDengine 服务端的 Windows 版本,请联系 TDengine 销售团队升级为企业版。
|
||||
|
||||
</TabItem>
|
||||
<TabItem label="macOS 安装" value="macos">
|
||||
|
@ -210,12 +208,12 @@ sudo launchctl start com.tdengine.taoskeeper
|
|||
sudo launchctl start com.tdengine.taos-explorer
|
||||
```
|
||||
|
||||
你也可以直接运行 start-all.sh 脚本来启动上面的所有服务
|
||||
你也可以直接运行 `start-all.sh` 脚本来启动上面的所有服务
|
||||
```bash
|
||||
start-all.sh
|
||||
```
|
||||
|
||||
可以使用 `launchctl` 命令管理上面提到的每个 TDengine 服务,以下示例使用 `taosd` :
|
||||
可以使用 `launchctl` 命令管理上面提到的每个 TDengine 服务,以下示例使用 `taosd`:
|
||||
|
||||
```bash
|
||||
sudo launchctl start com.tdengine.taosd
|
||||
|
|
|
@ -4,7 +4,7 @@ title: 通过云服务 快速体验 TDengine
|
|||
toc_max_heading_level: 4
|
||||
---
|
||||
|
||||
TDengine Cloud 作为一个全托管的时序大数据云服务平台,致力于让用户迅速领略TDengine 的强大功能。该平台不仅继承了 TDengine Enterprise 的核心功能特性,还充分发挥了 TDengine 的云原生优势。TDengine Cloud 以其极致的资源弹性伸缩、高可用性、容器化部署以及按需付费等特点,灵活满足各类用户需求,为用户打造高效、可靠且经济的时序大数据处理解决方案。
|
||||
TDengine Cloud 作为一个全托管的时序大数据云服务平台,致力于让用户迅速领略 TDengine 的强大功能。该平台不仅继承了 TDengine Enterprise 的核心功能特性,还充分发挥了 TDengine 的云原生优势。TDengine Cloud 以其极致的资源弹性伸缩、高可用性、容器化部署以及按需付费等特点,灵活满足各类用户需求,为用户打造高效、可靠且经济的时序大数据处理解决方案。
|
||||
|
||||
TDengine Cloud 大幅减轻了用户在部署、运维等方面的人力负担,同时提供了全方位的企业级服务。这些服务涵盖多角色、多层次的用户管理、数据共享功能,以适应各种异构网络环境。此外,TDengine Cloud 还提供私有链接服务和极简的数据备份与恢复功能,确保数据安全无忧。
|
||||
|
||||
|
@ -25,11 +25,10 @@ TDengine Cloud 大幅减轻了用户在部署、运维等方面的人力负担
|
|||
|
||||
要在 TDengine Cloud 中创建 TDengine 实例,只须遵循以下 3 个简单步骤。
|
||||
|
||||
1. 第 1 步,选择公共数据库。在此步骤中,TDengine Cloud 提供了可供公共访问的智能电表等数据库。通过浏览和查询这些数据库,你可以立即体验 TDengine 的各种功能和高性能。你可以根据需求在此步骤启动数据库访问,或在后续使用过程中再进行启动。若不需要此步骤,可直接点击“下一步”按钮跳过。
|
||||
1. 选择公共数据库。在此步骤中,TDengine Cloud 提供了可供公共访问的智能电表等数据库。通过浏览和查询这些数据库,你可以立即体验 TDengine 的各种功能和高性能。你可以根据需求在此步骤启动数据库访问,或在后续使用过程中再进行启动。若不需要此步骤,可直接点击“下一步”按钮跳过。
|
||||
|
||||
2. 第 2 步,创建组织。在此步骤中,请输入一个具有意义的名称,代表你的公司或组织,这将有助于你和平台更好地管理云上资源。
|
||||
|
||||
3. 第 3 步,创建实例。在此步骤中,你需要填写实例的区域、名称、是否选择高可用选项以及计费方案等必填信息。确认无误后,点击“创建”按钮。大约等待 1min,新的TDengine 实例便会创建完成。随后,你可以在控制台中对该实例进行各种操作,如查询数据、创建订阅、创建流等。
|
||||
2. 创建组织。在此步骤中,请输入一个具有意义的名称,代表你的公司或组织,这将有助于你和平台更好地管理云上资源。
|
||||
|
||||
3. 创建实例。在此步骤中,你需要填写实例的区域、名称、是否选择高可用选项以及计费方案等必填信息。确认无误后,点击“创建”按钮。大约等待 1min,新的 TDengine 实例便会创建完成。随后,你可以在控制台中对该实例进行各种操作,如查询数据、创建订阅、创建流等。
|
||||
|
||||
TDengine Cloud 提供多种级别的计费方案,包括入门版、基础版、标准版、专业版和旗舰版,以满足不同客户的需求。如果你觉得现有计费方案无法满足自己的特定需求,请联系 TDengine Cloud 的客户支持团队,他们将为你量身定制计费方案。注册后,你将获得一定的免费额度,以便体验服务。
|
||||
|
|
|
@ -8,7 +8,7 @@ import xiaot_new from './xiaot-20231007.png'
|
|||
import channel from './channel.webp'
|
||||
import official_account from './official-account.webp'
|
||||
|
||||
TDengine 完整的软件包包括服务端(taosd)、用于与第三方系统对接并提供 RESTful 接口的 taosAdapter、应用驱动(taosc)、命令行程序 (CLI,taos) 和一些工具软件。TDengine 除了提供多种语言的连接器之外,还通过 [taosAdapter](../reference/components/taosadapter) 提供 [RESTful 接口](../reference/connector/rest-api)。
|
||||
TDengine 完整的软件包包括服务端(taosd)、用于与第三方系统对接并提供 RESTful 接口的 taosAdapter、应用驱动(taosc)、命令行程序 (TDengine CLI) 和一些工具软件。TDengine 除了提供多种语言的连接器之外,还通过 [taosAdapter](../reference/components/taosadapter) 提供 [RESTful 接口](../reference/connector/rest-api)。
|
||||
|
||||
本章主要介绍如何快速设置 TDengine 环境并体验其高效写入和查询。
|
||||
|
||||
|
|
|
@ -25,11 +25,11 @@ toc_max_heading_level: 4
|
|||
### 采集量
|
||||
|
||||
采集量是指通过各种传感器、设备或其他类型的采集点所获取的物理量,如电流、电压、温度、压力、GPS 等。由于这些物理量随时间不断变化,因此采集的数据类型多
|
||||
样,包括整型、浮点型、布尔型以及字符串等。随着时间的积累,存储的数据将持续增长。以智能电表为例,其中的 current(电流)、voltage(电压)和 phase(相位)便是典型的采集量。
|
||||
样,包括整型、浮点型、布尔型以及字符串等。随着时间的积累,存储的数据将持续增长。以智能电表为例,其中的 current、voltage 和 phase 便是典型的采集量。
|
||||
|
||||
### 标签
|
||||
|
||||
标签是指附着在传感器、设备或其他类型采集点上的静态属性,这些属性不会随时间发生变化,例如设备型号、颜色、设备所在地等。标签的数据类型可以是任意类型。尽管标签本身是静态的,但在实际应用中,用户可能需要对标签进行修改、删除或添加。与采集量不同,随着时间的推移,存储的标签数据量保持相对稳定,不会呈现明显的增长趋势。在智能电表的示例中,location(位置)和 Group ID(分组 ID)就是典型的标签。
|
||||
标签是指附着在传感器、设备或其他类型采集点上的静态属性,这些属性不会随时间发生变化,例如设备型号、颜色、设备所在地等。标签的数据类型可以是任意类型。尽管标签本身是静态的,但在实际应用中,用户可能需要对标签进行修改、删除或添加。与采集量不同,随着时间的推移,存储的标签数据量保持相对稳定,不会呈现明显的增长趋势。在智能电表的示例中,location 和 Group ID 就是典型的标签。
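对于标签的修改,下面给出一个示意性的 SQL 片段(假设已存在本文后续示例中的子表 d1001,具体语法以所用版本的 SQL 手册为准):

```sql
-- 修改子表 d1001 的 location 标签值(示意)
ALTER TABLE d1001 SET TAG location = 'California.SanJose';
```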
|
||||
|
||||
### 数据采集点
|
||||
|
||||
|
@ -49,9 +49,9 @@ toc_max_heading_level: 4
|
|||
|
||||
4. 一个数据块内部,采用列式存储,对于不同的数据类型,可以采用不同压缩算法来提高压缩率。并且,由于采集量的变化通常是缓慢的,压缩率会更高。
|
||||
|
||||
如果采用传统的方式,将多个数据采集点的数据写入一张表,由于网络延时不可控,不同数据采集点的数据到达服务器的时序是无法保证的,写入操作是要有锁保护的,而且一个数据采集点的数据是难以保证连续存储在一起的。采用一个数据采集点一张表的方式,能最大程度的保证单个数据采集点的插入和查询的性能是最优的,,而且数据压缩率最高。
|
||||
如果采用传统的方式,将多个数据采集点的数据写入一张表,由于网络延时不可控,不同数据采集点的数据到达服务器的时序是无法保证的,写入操作是要有锁保护的,而且一个数据采集点的数据是难以保证连续存储在一起的。采用一个数据采集点一张表的方式,能最大程度的保证单个数据采集点的插入和查询的性能是最优的,而且数据压缩率最高。
|
||||
|
||||
在 TDengine 中,通常使用数据采集点的名称(如:d1001)来做表名,每个数据采集点可以有多个采集量(如:current、voltage、phase 等),每个采集量对应一张表的一列。采集量的数据类型可以是整型、浮点型、字符串等。
|
||||
在 TDengine 中,通常使用数据采集点的名称(如 d1001)来做表名,每个数据采集点可以有多个采集量(如 current、voltage、phase 等),每个采集量对应一张表的一列。采集量的数据类型可以是整型、浮点型、字符串等。
|
||||
|
||||
此外,表的第一列必须是时间戳,即数据类型为 Timestamp。对于每个采集量,TDengine 将使用第一列时间戳建立索引,采用列式存储。对于复杂的设备,比如汽车,它有多个数据采集点,则需要为一辆汽车建立多张表。
|
||||
|
||||
|
@ -86,12 +86,12 @@ toc_max_heading_level: 4
|
|||
### 时间戳
|
||||
|
||||
时间戳在时序数据处理中扮演着至关重要的角色,特别是在应用程序需要从多个不同时区访问数据库时,这一问题变得更加复杂。在深入了解 TDengine 如何处理时间戳与时区之前,我们先介绍以下几个基本概念。
|
||||
- 本地日期时间:指特定地区的当地时间,通常表示为 yyyy-MM-dd hh:mm:ss.SSS 格 式 的 字 符 串。 这 种 时 间 表 示 不 包 含 任 何 时 区 信 息, 如“2021-07-21 12:00:00.000”。
|
||||
- 时区:地球上不同地理位置的标准时间。协调世界时(Universal Time Coordinated,UTC)或格林尼治时间是国际时间标准,其他时区通常表示为相对于 UTC 的偏移量,如“UTC+8”代表东八区时间。 UTC 时间戳:表示自 UNIX 纪 元(即 UTC 时 间 1970 年 1 月 1 日 0 点) 起 经 过的毫秒数。例如,“1700000000000”对应的日期时间是“2023-11-14 22:13:20(UTC+0)”。 在 TDengine 中保存时序数据时,实际上保存的是 UTC 时间戳。TDengine 在写入数据时,时间戳的处理分为如下两种情况。
|
||||
- RFC-3339 格式:当使用这种格式时,TDengine 能够正确解析带有时区信息的时间字符串为 UTC 时间戳。例如,“2018-10-03T14:38:05.000+08:00”会被转换为UTC 时间戳。
|
||||
- 本地日期时间:指特定地区的当地时间,通常表示为 yyyy-MM-dd hh:mm:ss.SSS 格式的字符串。这种时间表示不包含任何时区信息,如 “2021-07-21 12:00:00.000”。
|
||||
- 时区:地球上不同地理位置的标准时间。协调世界时(Universal Time Coordinated,UTC)或格林尼治时间是国际时间标准,其他时区通常表示为相对于 UTC 的偏移量,如 “UTC+8” 代表东八区时间。
- UTC 时间戳:表示自 UNIX 纪元(即 UTC 时间 1970 年 1 月 1 日 0 点)起经过的毫秒数。例如,“1700000000000” 对应的日期时间是 “2023-11-14 22:13:20(UTC+0)”。

在 TDengine 中保存时序数据时,实际上保存的是 UTC 时间戳。TDengine 在写入数据时,时间戳的处理分为如下两种情况。
|
||||
- RFC-3339 格式:当使用这种格式时,TDengine 能够正确解析带有时区信息的时间字符串为 UTC 时间戳。例如,“2018-10-03T14:38:05.000+08:00” 会被转换为 UTC 时间戳。
|
||||
- 非 RFC-3339 格式:如果时间字符串不包含时区信息,TDengine 将使用应用程序所在的时区设置自动将时间转换为 UTC 时间戳。
|
||||
|
||||
在查询数据时,TDengine 客户端会根据应用程序当前的时区设置,自动将保存的UTC 时间戳转换成本地时间进行显示,确保用户在不同时区下都能看到正确的时间信息。
|
||||
在查询数据时,TDengine 客户端会根据应用程序当前的时区设置,自动将保存的 UTC 时间戳转换成本地时间进行显示,确保用户在不同时区下都能看到正确的时间信息。
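下面给出一个示意性的写入片段(假设子表 d1001 已按后文的方式创建),对比带时区信息的 RFC-3339 字符串与不带时区信息的本地时间字符串这两种写法:

```sql
-- RFC-3339 格式:带有时区偏移,直接解析为对应的 UTC 时间戳
INSERT INTO d1001 (ts, current) VALUES ('2018-10-03T14:38:05.000+08:00', 10.3);
-- 非 RFC-3339 格式:不含时区信息,按应用程序所在时区转换为 UTC 时间戳
INSERT INTO d1001 (ts, current) VALUES ('2018-10-03 14:38:05.000', 10.3);
```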
|
||||
|
||||
## 数据建模
|
||||
|
||||
|
@ -110,7 +110,7 @@ CREATE DATABASE power PRECISION 'ms' KEEP 3650 DURATION 10 BUFFER 16;
|
|||
- `DURATION 10` :每 10 天的数据放在一个数据文件中。
|
||||
- `BUFFER 16` :写入使用大小为 16MB 的内存池。
|
||||
|
||||
在创建power数据库后,可以执行 USE 语句来使用切换数据库。
|
||||
在创建 power 数据库后,可以执行 USE 语句切换到该数据库。
|
||||
|
||||
```sql
|
||||
use power;
|
||||
|
@ -134,10 +134,10 @@ CREATE STABLE meters (
|
|||
|
||||
在 TDengine 中,创建超级表的 SQL 语句与关系型数据库类似。例如,上面的 SQL 中,`CREATE STABLE` 为关键字,表示创建超级表;接着,`meters` 是超级表的名称;在表名后面的括号中,定义超级表的列(列名、数据类型等),规则如下:
|
||||
|
||||
1. 第 1 列必须为时间戳列。例如:`ts timestamp` 表示,时间戳列名是 `t`s,数据类型为 `timestamp`;
|
||||
2. 从第 2 列开始是采集量列。采集量的数据类型可以为整型、浮点型、字符串等。例如:`current float` 表示,采集量电流 `current`,数据类型为 `float`;
|
||||
1. 第 1 列必须为时间戳列。例如:`ts timestamp` 表示,时间戳列名是 `ts`,数据类型为 `timestamp`;
|
||||
2. 第 2 列开始是采集量列。采集量的数据类型可以为整型、浮点型、字符串等。例如:`current float` 表示,采集量电流 `current`,数据类型为 `float`。
|
||||
|
||||
最后,TAGS是关键字,表示标签,在 TAGS 后面的括号中,定义超级表的标签(标签名、数据类型等)。
|
||||
最后,TAGS 是关键字,表示标签,在 TAGS 后面的括号中,定义超级表的标签(标签名、数据类型等)。
|
||||
1. 标签的数据类型可以为整型、浮点型、字符串等。例如:`location varchar(64)` 表示,标签地区 `location`,数据类型为 `varchar(64)`;
|
||||
2. 标签的名称不能与采集量列的名称相同。
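综合以上规则,一个完整的建表语句大致如下(示意,列与标签沿用本文智能电表的例子,具体以上文的原始语句为准):

```sql
CREATE STABLE meters (
    ts timestamp,          -- 第 1 列必须为时间戳
    current float,         -- 采集量:电流
    voltage int,           -- 采集量:电压
    phase float            -- 采集量:相位
) TAGS (
    location varchar(64),  -- 标签:地区
    group_id int           -- 标签:分组 ID
);
```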
|
||||
|
||||
|
@ -155,7 +155,7 @@ USING meters (
|
|||
);
|
||||
```
|
||||
|
||||
上面的 SQL 中,`CREATE TABLE` 为关键字,表示创建表;`d1001` 是子表的名称;`USING` 是关键字,表示要使用超级表作为模版;`meters` 是超级表的名称;在超级表名后的括号中,`location`, `group_id` 表示,是超级表的标签列名列表;`TAGS` 是关键字,在后面的括号中指定子表的标签列的值。`"California.SanFrancisco"` 和 `2` 表示子表 `d1001` 的位置为 `California.SanFrancisco`,分组 ID 为 `2` 。
|
||||
上面的 SQL 中,`CREATE TABLE` 为关键字,表示创建表;`d1001` 是子表的名称;`USING` 是关键字,表示要使用超级表作为模版;`meters` 是超级表的名称;在超级表名后的括号中,`location`、`group_id` 表示,是超级表的标签列名列表;`TAGS` 是关键字,在后面的括号中指定子表的标签列的值。`"California.SanFrancisco"` 和 `2` 表示子表 `d1001` 的位置为 `California.SanFrancisco`,分组 ID 为 `2`。
|
||||
|
||||
当对超级表进行写入或查询操作时,用户可以使用伪列 tbname 来指定或输出对应操作的子表名。
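例如,下面的查询借助伪列 tbname 按子表聚合,并输出对应的子表名(示意,假设超级表 meters 中已有数据):

```sql
-- 按子表统计平均电压,并输出子表名(示意)
SELECT tbname, AVG(voltage) FROM meters GROUP BY tbname;
```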
|
||||
|
||||
|
@ -178,7 +178,7 @@ TAGS (
|
|||
);
|
||||
```
|
||||
|
||||
上面的 SQL 中,`INSERT INTO d1002` 表示,向子表 `d1002` 中写入数据;`USING meters` 表示,使用超级表 `meters` 作为模版;`TAGS ("California.SanFrancisco", 2)` 表示,子表 `d1002` 的标签值分别为 `California.SanFrancisco` 和 `2`;`VALUES (NOW, 10.2, 219, 0.32)` 表示,向子表 `d1002` 插入一行记录,值分别为NOW(当前时间戳)、10.2(电流)、219(电压)、0.32(相位)。在 TDengine 执行这条 SQL 时,如果子表 `d1002` 已经存在,则直接写入数据;当子表 `d1002` 不存在,会先自动创建子表,再写入数据。
|
||||
上面的 SQL 中,`INSERT INTO d1002` 表示,向子表 `d1002` 中写入数据;`USING meters` 表示,使用超级表 `meters` 作为模版;`TAGS ("California.SanFrancisco", 2)` 表示,子表 `d1002` 的标签值分别为 `California.SanFrancisco` 和 `2`;`VALUES (NOW, 10.2, 219, 0.32)` 表示,向子表 `d1002` 插入一行记录,值分别为 NOW(当前时间戳)、10.2(电流)、219(电压)、0.32(相位)。在 TDengine 执行这条 SQL 时,如果子表 `d1002` 已经存在,则直接写入数据;当子表 `d1002` 不存在,会先自动创建子表,再写入数据。
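子表自动创建之后,后续向同一子表写入时即可省略 USING 与 TAGS 子句,示意如下:

```sql
-- d1002 已存在,直接写入即可(示意)
INSERT INTO d1002 VALUES (NOW, 10.3, 218, 0.25);
```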
|
||||
|
||||
### 创建普通表
|
||||
|
||||
|
@ -204,7 +204,7 @@ CREATE TABLE d1003(
|
|||
);
|
||||
```
|
||||
|
||||
上面的 SQL 表示,创建普通表 `d1003` ,表结构包括 `ts`、`current`、`voltage`、`phase`、`location`、`group_id`,共 6 个列。这样的数据模型,与关系型数据库完全一致。
|
||||
上面的 SQL 表示,创建普通表 `d1003`,表结构包括 `ts`、`current`、`voltage`、`phase`、`location`、`group_id`,共 6 个列。这样的数据模型,与关系型数据库完全一致。
|
||||
|
||||
采用普通表作为数据模型意味着静态标签数据(如 location 和 group_id)会重复存储在表的每一行中。这种做法不仅增加了存储空间的消耗,而且在进行查询时,由于无法直接利用标签数据进行过滤,查询性能会显著低于使用超级表的数据模型。
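结合上文列出的 6 个列,上面被省略的建表语句大致如下(示意,仅用于说明普通表模型,列定义以原文为准):

```sql
CREATE TABLE d1003 (
    ts timestamp,
    current float,
    voltage int,
    phase float,
    location varchar(64),  -- 静态标签作为普通列,会在每一行中重复存储
    group_id int
);
```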
|
||||
|
||||
|
|
|
@ -12,9 +12,9 @@ toc_max_heading_level: 4
|
|||
|
||||
### 一次写入一条
|
||||
|
||||
假设设备 ID 为 d1001 的智能电表在 2018 年 10 月 3 日 14:38:05 采集到数据:电流10.3A,电压 219V,相位 0.31。在第 3 章中,我们已经在 TDengine 的 power 数据库中创建了属于超级表 meters 的子表 d1001。接下来可以通过下面的 insert 语句在子表 d1001 中写入时序数据。
|
||||
假设设备 ID 为 d1001 的智能电表在 2018 年 10 月 3 日 14:38:05 采集到数据:电流 10.3A,电压 219V,相位 0.31。在第 3 章中,我们已经在 TDengine 的 power 数据库中创建了属于超级表 meters 的子表 d1001。接下来可以通过下面的 insert 语句在子表 d1001 中写入时序数据。
|
||||
|
||||
1. 可以通过下面的 INSERT 语句向子表d1001中写入时序数据。
|
||||
1. 可以通过下面的 INSERT 语句向子表 d1001 中写入时序数据。
|
||||
|
||||
```sql
|
||||
insert into d1001 (ts, current, voltage, phase) values ( "2018-10-03 14:38:05", 10.3, 219, 0.31)
|
||||
|
@ -120,7 +120,7 @@ values( "d1001, "2018-10-03 14:38:05", 10.2, 220, 0.23, "California.SanFrancisco
|
|||
|
||||
## 更新
|
||||
|
||||
可以通过写入重复时间戳的一条数据来更新时序数据,新写入的数据会替换旧值。 下面的 SQL,通过指定列的方式,向子表 `d1001` 中写入 1 行数据;当子表 `d1001` 中已经存在日期时间为 `2018-10-03 14:38:05` 的数据时,`current`(电流)的新值22,会替换旧值。
|
||||
可以通过写入重复时间戳的一条数据来更新时序数据,新写入的数据会替换旧值。下面的 SQL,通过指定列的方式,向子表 `d1001` 中写入 1 行数据;当子表 `d1001` 中已经存在日期时间为 `2018-10-03 14:38:05` 的数据时,`current`(电流)的新值 22,会替换旧值。
|
||||
|
||||
```sql
|
||||
INSERT INTO d1001 (ts, current) VALUES ("2018-10-03 14:38:05", 22);
|
||||
|
@ -128,7 +128,7 @@ INSERT INTO d1001 (ts, current) VALUES ("2018-10-03 14:38:05", 22);
|
|||
|
||||
## 删除
|
||||
|
||||
为方便用户清理由于设备故障等原因产生的异常数据,TDengine 支持根据时间戳删除时序数据。 下面的 SQL,将超级表 `meters` 中所有时间戳早于 `2021-10-01 10:40:00.100` 的数据删除。数据删除后不可恢复,请慎重使用。为了确保删除的数据确实是自己要删除的,建议可以先使用 select 语句加 where 后的删除条件查看要删除的数据内容,确认无误后再执行 delete 。
|
||||
为方便用户清理由于设备故障等原因产生的异常数据,TDengine 支持根据时间戳删除时序数据。下面的 SQL,将超级表 `meters` 中所有时间戳早于 `2021-10-01 10:40:00.100` 的数据删除。数据删除后不可恢复,请慎重使用。为了确保删除的数据确实是自己要删除的,建议先使用 select 语句加上与删除相同的 where 条件查看要删除的数据内容,确认无误后再执行 delete。
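按照上文的建议,删除前可以先用相同的 where 条件确认将被删除的数据范围,示意如下:

```sql
-- 先确认待删除数据的行数,再执行 delete(示意)
select count(*) from meters where ts < '2021-10-01 10:40:00.100';
```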
|
||||
|
||||
```sql
|
||||
delete from meters where ts < '2021-10-01 10:40:00.100' ;
|
||||
|
|