test: merge with main
@@ -16,7 +16,6 @@ debug/
release/
target/
debs/
deps/
rpms/
mac/
*.pyc

@@ -101,6 +100,7 @@ tests/examples/JDBC/JDBCDemo/.settings/
source/libs/parser/inc/sql.*
tests/script/tmqResult.txt
tests/tmqResult.txt
tests/script/jenkins/basic.txt

# Emacs
# -*- mode: gitignore; -*-

@@ -129,3 +129,5 @@ tools/COPYING
tools/BUGS
tools/taos-tools
tools/taosws-rs
tags
.clangd
@@ -0,0 +1,28 @@
repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.3.0
    hooks:
    -   id: check-yaml
    -   id: check-json
    -   id: end-of-file-fixer
    -   id: trailing-whitespace

repos:
-   repo: https://github.com/psf/black
    rev: stable
    hooks:
    -   id: black

repos:
-   repo: https://github.com/pocc/pre-commit-hooks
    rev: master
    hooks:
    -   id: cppcheck
        args: ["--error-exitcode=0"]

repos:
-   repo: https://github.com/crate-ci/typos
    rev: v1.15.7
    hooks:
    -   id: typos
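The configuration above only declares the hooks; nothing runs until a contributor installs pre-commit into their clone. A minimal sketch of enabling and exercising the hooks locally, assuming Python and pip are available (these are standard pre-commit CLI commands, not part of this diff):

```bash
# install the pre-commit tool and register it as the git pre-commit hook for this clone
pip install pre-commit
pre-commit install
# run every configured hook once against the whole working tree
pre-commit run --all-files
```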
@@ -10,14 +10,20 @@ if (NOT DEFINED TD_SOURCE_DIR)
  set( TD_SOURCE_DIR ${PROJECT_SOURCE_DIR} )
endif()

SET(TD_COMMUNITY_DIR ${PROJECT_SOURCE_DIR})

set(TD_SUPPORT_DIR "${TD_SOURCE_DIR}/cmake")
set(TD_CONTRIB_DIR "${TD_SOURCE_DIR}/contrib")

include(${TD_SUPPORT_DIR}/cmake.platform)
include(${TD_SUPPORT_DIR}/cmake.define)
include(${TD_SUPPORT_DIR}/cmake.options)
include(${TD_SUPPORT_DIR}/cmake.version)

# contrib
add_subdirectory(contrib)
@@ -0,0 +1,47 @@
# Contributor Covenant Code of Conduct

[](code_of_conduct.md)

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make participation in our project and community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned with this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies within all project spaces, and it also applies when an individual is representing the project or its community in public spaces. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at support@taosdata.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

For answers to common questions about this code of conduct, see https://www.contributor-covenant.org/faq
@@ -7,25 +7,18 @@
- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. Please give **a detailed description** of the problem you encountered, ideally including the exact steps to reproduce it.
- Attaching the log files produced by the bug is welcome.

## Key rules for submitting code
## Rules for submitting code

- Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept it, please stop the submission.
- Please work on an issue or feature that is registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).
- If no corresponding issue or feature is found in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.
- When submitting code to our repository, please create a **PR that includes the issue number**.
1. Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept it, please stop the submission.
2. Please work on an issue or feature that is registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).
   If no corresponding issue or feature is found in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.
   When submitting code to our repository, please create a **PR that includes the issue number**.
3. Fork the TDengine repository to your own account and create a branch.
   Note: the default branch `main` cannot accept PRs directly; please create your own branch from the development branch `3.0`.
   Note: branches that only change documentation should start with `docs/` to avoid running unnecessary tests.
4. Create a pull request to merge your branch into the development branch `3.0`; our development team will review it as soon as possible.

## Contribution guidelines

1. Please write in a friendly tone.

2. **Active voice** is generally better than passive voice. A sentence in the active voice highlights who performs the action, whereas the passive voice highlights the recipient of the action.

3. Documentation writing suggestions

- Spell the product name "TDengine" correctly: "TD" in capital letters, with no space between "TD" and "engine" **(correct spelling: TDengine)**.
- Leave only one space after a period or other punctuation mark.

4. Prefer **simple sentences** over complex ones.
   If you run into any problem, add our official WeChat account tdengine1 and our team will help you.

## Gifts for contributors

@@ -55,4 +48,4 @@ The TDengine community is committed to helping more developers understand and use TDengine.

## Contact us

If you have any problem that needs to be solved or any question that needs to be answered, you can add our WeChat account: TDengineECO
If you have any problem that needs to be solved or any question that needs to be answered, you can add our WeChat account: tdengine1.
@@ -1,40 +1,36 @@
# Contributing
# Contributing to TDengine

We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and even submit your code on GitHub. However, we would like developers to follow the guidelines in this document to ensure effective cooperation.
TDengine Community Edition is free, open-source software. Its development is led by the TDengine Team, but we welcome contributions from all community members and open-source developers. This document describes how you can contribute, no matter whether you're a user or a developer yourself.

## Reporting a bug
## Bug reports

- Any users can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. We would appreciate if you could provide **a detailed description** of the problem you encountered, including steps to reproduce it.
All users can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. To ensure that the development team can locate and resolve the issue that you experienced, please include the following in your bug report:

- Attaching log files caused by the bug is really appreciated.
- A detailed description of the issue, including the steps to reproduce it.
- Any log files that may be relevant to the issue.

## Guidelines for committing code
## Code contributions

- You must agree to the **Contributor License Agreement(CLA) before submitting your code patch**. Follow the **[TAOSData CLA](https://cla-assistant.io/taosdata/TDengine)** link to read through and sign the agreement. If you do not accept the agreement, your contributions cannot be accepted.
Developers are encouraged to submit patches to the project, and all contributions, from minor documentation changes to bug fixes, are appreciated by our team. To ensure that your code can be merged successfully and improve the experience for other community members, we ask that you go through the following procedure before submitting a pull request:

- Please solve an issue or add a feature registered in the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**.
- If no corresponding issue or feature is found in the issue tracker, please **create one**.
- When submitting your code to our repository, please create a pull request with the **issue number** included.
1. Read and accept the terms of the TAOS Data Contributor License Agreement (CLA) located at [https://cla-assistant.io/taosdata/TDengine](https://cla-assistant.io/taosdata/TDengine).

## Guidelines for communicating
2. For bug fixes, search the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues) to check whether the bug has already been filed.
   - If the bug that you want to fix already exists in the issue tracker, review the previous discussion before submitting your patch.
   - If the bug that you want to fix does not exist in the issue tracker, click **New issue** and file a report.
   - Ensure that you note the issue number in your pull request when you submit your patch.

3. Fork our repository to your GitHub account and create a branch for your patch.
   **Important:** The `main` branch is for stable versions and cannot accept patches directly. For all code and documentation changes, create your own branch from the development branch `3.0` and not from `main`.
   Note: For a documentation change, ensure that the branch name starts with `docs/` so that the change can be merged without running tests.

4. Create a pull request to merge your changes into the development branch `3.0`, and our team members will review the request as soon as possible.
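A minimal command-line sketch of steps 3 and 4 above, assuming the fork already exists on GitHub; the account name, branch name, and issue number below are placeholders, not values taken from this repository:

```bash
# clone your fork and add the upstream repository (illustrative names)
git clone https://github.com/<your-account>/TDengine.git
cd TDengine
git remote add upstream https://github.com/taosdata/TDengine.git
git fetch upstream 3.0

# branch from the development branch 3.0; use a docs/ prefix for documentation-only changes
git checkout -b fix/issue-12345 upstream/3.0

# commit your patch, push it to your fork, then open a pull request against branch 3.0
git push -u origin fix/issue-12345
```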
1. Please be **nice and polite** in the description.
2. **Active voice is better than passive voice in general**. Sentences in the active voice will highlight who is performing the action rather than the recipient of the action highlighted by the passive voice.
3. Documentation writing advice
If you encounter any difficulties or problems in contributing your code, you can join our [Discord server](https://discord.com/invite/VZdSuUg4pS) and receive assistance from members of the TDengine Team.

- Spell the product name "TDengine" correctly. "TD" is written in capital letters, and there is no space between "TD" and "engine" (**Correct spelling: TDengine**).
- Please **capitalize the first letter** of every sentence.
- Leave **only one space** after periods or other punctuation marks.
- Use **American spelling**.
- When possible, **use second person** rather than first person (e.g. “You are recommended to use a reverse proxy such as Nginx.” rather than “We recommend to use a reverse proxy such as Nginx.”).

## Expressing our thanks

5. Use **simple sentences**, rather than complex sentences.

## Gifts for the contributors

Developers, as long as you contribute to TDengine, whether it's code contributions to fix bugs or feature requests, or documentation changes, **you are eligible for a very special Contributor Souvenir Gift!**

**You can choose one of the following gifts:**
To thank community members for your support, we are offering a free gift to any developer who submits at least one contribution. You can choose one of the following items:

<p align="left">
<img

@@ -53,12 +49,10 @@ Developers, as long as you contribute to TDengine, whether it's code contributio
width="200"
/>

The TDengine community is committed to making TDengine accepted and used by more developers.
If you would like to claim your gift, send an email to [developer@tdengine.com](mailto:developer@tdengine.com?subject=Claiming&20my%20developer%20gift) including the following information:

Just fill out the **Contributor Submission Form** to choose your desired gift.
- Your GitHub account name
- Your name and mailing address
- Your preferred gift

- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)

## Contact us

If you have any problems or questions that need help from us, please feel free to add our WeChat account: TDengineECO.
Note: Limit one per person.
Jenkinsfile2

@@ -40,7 +40,7 @@ def check_docs() {
sh '''
cd ${WKC}
git reset --hard
git clean -fxd
git clean -f
rm -rf examples/rust/
git remote prune origin
git fetch

@@ -86,7 +86,7 @@ def pre_test(){
git fetch
cd ${WKC}
git reset --hard
git clean -fxd
git clean -f
rm -rf examples/rust/
git remote prune origin
git fetch

@@ -173,7 +173,7 @@ def pre_test_build_mac() {
'''
sh '''
cd ${WK}/debug
cmake .. -DBUILD_TEST=true -DBUILD_HTTPS=false
cmake .. -DBUILD_TEST=true -DBUILD_HTTPS=false -DCMAKE_BUILD_TYPE=Release
make -j10
ctest -j10 || exit 7
'''

@@ -201,6 +201,7 @@ def pre_test_win(){
'''
bat '''
cd %WIN_COMMUNITY_ROOT%
git clean -f
git reset --hard
git remote prune origin
git fetch

@@ -303,7 +304,7 @@ def pre_test_build_win() {
set CL=/MP8
echo ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cmake"
time /t
cmake .. -G "NMake Makefiles JOM" -DBUILD_TEST=true || exit 7
cmake .. -G "NMake Makefiles JOM" -DBUILD_TEST=true -DBUILD_TOOLS=true || exit 7
echo ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> jom -j 6"
time /t
jom -j 6 || exit 8

@@ -312,7 +313,8 @@ def pre_test_build_win() {
bat '''
cd %WIN_CONNECTOR_ROOT%
python.exe -m pip install --upgrade pip
python -m pip install .
python -m pip uninstall taospy -y
python -m pip install taospy==2.7.10
xcopy /e/y/i/f %WIN_INTERNAL_ROOT%\\debug\\build\\lib\\taos.dll C:\\Windows\\System32
'''
return 1

@@ -330,8 +332,6 @@ def run_win_test() {
bat '''
echo "windows test ..."
cd %WIN_CONNECTOR_ROOT%
python.exe -m pip install --upgrade pip
python -m pip install .
xcopy /e/y/i/f %WIN_INTERNAL_ROOT%\\debug\\build\\lib\\taos.dll C:\\Windows\\System32
ls -l C:\\Windows\\System32\\taos.dll
time /t

@@ -361,7 +361,7 @@ pipeline {
}
parallel {
stage('check docs') {
agent{label " worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
agent{label " slave1_47 || slave1_48 || slave1_49 || slave1_52 || worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
steps {
check_docs()
}

@@ -386,7 +386,7 @@ pipeline {
}
steps {
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
timeout(time: 55, unit: 'MINUTES'){
timeout(time: 126, unit: 'MINUTES'){
pre_test_win()
pre_test_build_win()
run_win_ctest()

@@ -407,7 +407,7 @@ pipeline {
}
}
stage('linux test') {
agent{label " worker03 || slave215 || slave217 || slave219 "}
agent{label " slave1_47 || slave1_48 || slave1_49 || slave1_52 || worker03 || slave215 || slave217 || slave219 "}
options { skipDefaultCheckout() }
when {
changeRequest()

@@ -422,16 +422,14 @@ pipeline {
echo "${WKDIR}/restore.sh -p ${BRANCH_NAME} -n ${BUILD_ID} -c {container name}"
}
catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
timeout(time: 120, unit: 'MINUTES'){
timeout(time: 130, unit: 'MINUTES'){
pre_test()
script {
sh '''
date
rm -rf ${WKC}/debug
cd ${WKC}/tests/parallel_test
time ./container_build.sh -w ${WKDIR} -t 10 -e
rm -f /tmp/cases.task
./collect_cases.sh -e
time ./container_build.sh -w ${WKDIR} -e
'''
def extra_param = ""
def log_server_file = "/home/log_server.json"

@@ -462,7 +460,7 @@ pipeline {
cd ${WKC}/tests/parallel_test
export DEFAULT_RETRY_TIME=2
date
''' + timeout_cmd + ''' time ./run.sh -e -m /home/m.json -t /tmp/cases.task -b ${BRANCH_NAME}_${BUILD_ID} -l ${WKDIR}/log -o 480 ''' + extra_param + '''
''' + timeout_cmd + ''' time ./run.sh -e -m /home/m.json -t cases.task -b ${BRANCH_NAME}_${BUILD_ID} -l ${WKDIR}/log -o 600 ''' + extra_param + '''
'''
}
}
README-CN.md

@@ -15,7 +15,7 @@
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[](https://bestpractices.coreinfrastructure.org/projects/4201)

简体中文 | [English](README.md) | Many positions are open for hiring, see [here](https://www.taosdata.com/cn/careers/)
简体中文 | [English](README.md) | [TDengine Cloud](https://cloud.taosdata.com/?utm_medium=cn&utm_source=github) | Many positions are open for hiring, see [here](https://www.taosdata.com/cn/careers/)

# Introduction to TDengine

@@ -39,7 +39,7 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series

# Building

TDengine can currently be installed and run on Linux, Windows, macOS and other platforms. Applications on any OS can also use the RESTful interface provided by taosAdapter to connect to the server-side taosd. X64/ARM64 CPUs are supported, and MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures will be supported in the future.
TDengine can currently be installed and run on Linux, Windows, macOS and other platforms. Applications on any OS can also use the RESTful interface provided by taosAdapter to connect to the server-side taosd. X64/ARM64 CPUs are supported, and MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures will be supported in the future. Building with a cross-compiler is not currently supported.

You can choose to install from source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/) or [Kubernetes](https://docs.taosdata.com/deployment/k8s/) as needed. This quick guide only applies to installing from source.

@@ -52,7 +52,7 @@ TDengine also provides a set of auxiliary tools, taosTools, which currently includes taosBench

### Ubuntu 18.04 and above & Debian:

```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
```

#### Install the software required to build taos-tools

@@ -60,7 +60,7 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev

To build [taos-tools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following software needs to be installed:

```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```

### CentOS 7.9

@@ -68,14 +68,14 @@ sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-d

```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 git openssl-devel
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
```

### CentOS 8 & Fedora
### CentOS 8/Fedora/Rocky Linux

```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
sudo dnf install -y gcc gcc-c++ gflags make cmake epel-release git openssl-devel
```

#### Install dependencies for building taosTools on CentOS

@@ -85,29 +85,39 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel

```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```

#### CentOS 8/Rocky Linux
#### CentOS 8/Fedora/Rocky Linux

```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```

Note: Because snappy lacks pkg-config support (see [link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found, but it actually works fine.

If enabling powertools fails, you can try the following instead:
```
sudo yum config-manager --set-enabled Powertools
sudo yum config-manager --set-enabled powertools
```

#### CentOS + devtoolset

In addition to the build dependencies above, the following commands need to be run:

```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```

### macOS

```
brew install argp-standalone pkgconfig
brew install argp-standalone gflags pkgconfig
```

### Set up the golang development environment

@@ -276,7 +286,7 @@ sudo make install

After installation succeeds, you can double-click the TDengine icon in Applications to start the service, or start the TDengine service in a terminal:

```bash
launchctl start com.tdengine.taosd
sudo launchctl start com.tdengine.taosd
```

You can then use the TDengine CLI to connect to the TDengine service; in a terminal, enter:
README.md

@@ -14,6 +14,12 @@
[](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[](https://bestpractices.coreinfrastructure.org/projects/4201)
<br />
[](https://twitter.com/tdenginedb)
[](https://www.youtube.com/@tdengine)
[](https://discord.com/invite/VZdSuUg4pS)
[](https://www.linkedin.com/company/tdengine)
[](https://stackoverflow.com/questions/tagged/tdengine)

English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine.com) | [Learn more about TSDB](https://tdengine.com/tsdb/)

@@ -31,7 +37,7 @@ TDengine is an open source, high-performance, cloud native [time-series database

- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.

- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 19.9k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.

For a full list of TDengine competitive advantages, please [check here](https://tdengine.com/tdengine/). The easiest way to experience TDengine is through [TDengine Cloud](https://cloud.tdengine.com).

@@ -41,7 +47,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum

# Building

At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Right now we don't support build with cross-compiling environment.

You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.

@@ -54,7 +60,7 @@ To build TDengine, use [CMake](https://cmake.org/) 3.0.2 or higher versions in t

### Ubuntu 18.04 and above or Debian

```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
```

#### Install build dependencies for taosTools

@@ -62,7 +68,7 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev

To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.

```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```

### CentOS 7.9

@@ -70,14 +76,14 @@ sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-d

```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 git openssl-devel
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
```

### CentOS 8 & Fedora
### CentOS 8/Fedora/Rocky Linux

```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
sudo dnf install -y gcc gcc-c++ make cmake epel-release gflags git openssl-devel
```

#### Install build dependencies for taosTools on CentOS

@@ -85,16 +91,16 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel

#### CentOS 7.9

```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```

#### CentOS 8/Rocky Linux
#### CentOS 8/Fedora/Rocky Linux

```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```

Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), it leads a cmake prompt libsnappy not found. But snappy still works well.

@@ -105,10 +111,20 @@ If the PowerTools installation fails, you can try to use:
sudo yum config-manager --set-enabled powertools
```

#### For CentOS + devtoolset

Besides above dependencies, please run following commands:

```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```

### macOS

```
brew install argp-standalone pkgconfig
brew install argp-standalone gflags pkgconfig
```

### Setup golang environment

@@ -280,7 +296,7 @@ Installing from source code will also configure service management for TDengine.

To start the service after installation, double-click the /applications/TDengine to start the program, or in a terminal, use:

```bash
launchctl start com.tdengine.taosd
sudo launchctl start com.tdengine.taosd
```

Then users can use the TDengine CLI to connect the TDengine server. In a terminal, use:

@@ -349,6 +365,6 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to th

For more information about TDengine, you can follow us on social media and join our Discord server:

- [Discord](https://discord.com/invite/VZdSuUg4pS)
- [Twitter](https://twitter.com/TaosData)
- [Twitter](https://twitter.com/TDengineDB)
- [LinkedIn](https://www.linkedin.com/company/tdengine/)
- [YouTube](https://www.youtube.com/channel/UCmp-1U6GS_3V3hjir6Uq5DQ)
- [YouTube](https://www.youtube.com/@tdengine)
@ -1,6 +1,7 @@
|
|||
cmake_minimum_required(VERSION 3.0)
|
||||
|
||||
set(CMAKE_VERBOSE_MAKEFILE OFF)
|
||||
set(CMAKE_VERBOSE_MAKEFILE ON)
|
||||
set(TD_BUILD_TAOSA_INTERNAL FALSE)
|
||||
|
||||
#set output directory
|
||||
SET(LIBRARY_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/lib)
|
||||
|
@ -114,24 +115,61 @@ ELSE ()
|
|||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${GCC_COVERAGE_COMPILE_FLAGS} ${GCC_COVERAGE_LINK_FLAGS}")
|
||||
ENDIF ()
|
||||
|
||||
# disable all assert
|
||||
IF ((${DISABLE_ASSERT} MATCHES "true") OR (${DISABLE_ASSERTS} MATCHES "true"))
|
||||
ADD_DEFINITIONS(-DDISABLE_ASSERT)
|
||||
MESSAGE(STATUS "Disable all asserts")
|
||||
ENDIF()
|
||||
|
||||
INCLUDE(CheckCCompilerFlag)
|
||||
IF (TD_ARM_64 OR TD_ARM_32)
|
||||
SET(COMPILER_SUPPORT_SSE42 false)
|
||||
ELSEIF (("${CMAKE_C_COMPILER_ID}" MATCHES "Clang") OR ("${CMAKE_C_COMPILER_ID}" MATCHES "AppleClang"))
|
||||
SET(COMPILER_SUPPORT_SSE42 true)
|
||||
MESSAGE(STATUS "Always enable sse4.2 for Clang/AppleClang")
|
||||
ELSE()
|
||||
CHECK_C_COMPILER_FLAG("-msse4.2" COMPILER_SUPPORT_SSE42)
|
||||
ENDIF()
|
||||
|
||||
CHECK_C_COMPILER_FLAG("-mfma" COMPILER_SUPPORT_FMA)
|
||||
CHECK_C_COMPILER_FLAG("-mavx" COMPILER_SUPPORT_AVX)
|
||||
CHECK_C_COMPILER_FLAG("-mavx2" COMPILER_SUPPORT_AVX2)
|
||||
|
||||
IF (COMPILER_SUPPORT_SSE42)
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -msse4.2")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -msse4.2")
|
||||
ENDIF()
|
||||
|
||||
IF ("${SIMD_SUPPORT}" MATCHES "true")
|
||||
IF (COMPILER_SUPPORT_FMA)
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mfma")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mfma")
|
||||
ENDIF()
|
||||
IF (COMPILER_SUPPORT_AVX)
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mavx")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mavx")
|
||||
ENDIF()
|
||||
IF (COMPILER_SUPPORT_AVX2)
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mavx2")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mavx2")
|
||||
ENDIF()
|
||||
MESSAGE(STATUS "SIMD instructions (FMA/AVX/AVX2) is ACTIVATED")
|
||||
ENDIF()
|
||||
|
||||
# build mode
|
||||
SET(CMAKE_C_FLAGS_REL "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -O3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
|
||||
SET(CMAKE_CXX_FLAGS_REL "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -O3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
|
||||
|
||||
IF (${BUILD_SANITIZER})
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
|
||||
MESSAGE(STATUS "Will compile with Address Sanitizer!")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0")
|
||||
MESSAGE(STATUS "Compile with Address Sanitizer!")
|
||||
ELSEIF (${BUILD_RELEASE})
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS_REL}")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL}")
|
||||
ELSE ()
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
|
||||
ENDIF ()
|
||||
|
||||
MESSAGE("System processor ID: ${CMAKE_SYSTEM_PROCESSOR}")
|
||||
IF (TD_INTEL_64 OR TD_INTEL_32)
|
||||
ADD_DEFINITIONS("-msse4.2")
|
||||
IF("${FMA_SUPPORT}" MATCHES "true")
|
||||
MESSAGE(STATUS "turn fma function support on")
|
||||
ADD_DEFINITIONS("-mfma")
|
||||
ELSE ()
|
||||
MESSAGE(STATUS "turn fma function support off")
|
||||
ENDIF()
|
||||
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -g3 -gdwarf-2 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-reserved-user-defined-literal -g3 -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
|
||||
ENDIF ()
|
||||
|
||||
ENDIF ()
|
||||
|
|
|
@@ -21,7 +21,7 @@ IF (TD_LINUX)
ELSEIF (TD_WINDOWS)
    SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.bat")
    INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")
    INSTALL(CODE "execute_process(COMMAND ${TD_MAKE_INSTALL_SH} :needAdmin ${TD_SOURCE_DIR} ${PROJECT_BINARY_DIR} Windows ${TD_VER_NUMBER})")
    INSTALL(CODE "execute_process(COMMAND ${TD_MAKE_INSTALL_SH} :needAdmin ${TD_SOURCE_DIR} ${PROJECT_BINARY_DIR} Windows ${TD_VER_NUMBER} ${TD_BUILD_TAOSA_INTERNAL})")
ELSEIF (TD_DARWIN)
    SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.sh")
    INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")
@@ -64,12 +64,25 @@ IF(${TD_WINDOWS})
        ON
    )

    MESSAGE("build geos Win32")
    option(
        BUILD_GEOS
        "If build geos on Windows"
        ON
    )

ELSEIF (TD_DARWIN_64)
    IF(${BUILD_TEST})
        add_definitions(-DCOMPILER_SUPPORTS_CXX13)
    ENDIF ()
ENDIF ()

option(
    BUILD_GEOS
    "If build geos on Windows"
    ON
)

option(
    BUILD_SHARED_LIBS
    ""

@@ -109,7 +122,7 @@ option(
option(
    BUILD_WITH_ROCKSDB
    "If build with rocksdb"
    OFF
    ON
)

option(

@@ -171,3 +184,14 @@ option(
    ON
)

option(
    BUILD_RELEASE
    "If build release version"
    OFF
)

option(
    BUILD_CONTRIB
    "If build thirdpart from source"
    OFF
)
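The `BUILD_WITH_ROCKSDB` default flips to ON here, and the new `BUILD_CONTRIB` option (consumed by contrib/CMakeLists.txt later in this diff) selects whether RocksDB is built from the bundled source instead of a system copy. A minimal configure sketch; the out-of-source `debug` directory and the extra flags are illustrative, not mandated by the diff:

```bash
# configure a build that compiles the bundled RocksDB from source (illustrative)
mkdir -p debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make -j"$(nproc)"
```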
@ -1,20 +1,17 @@
|
|||
cmake_minimum_required(VERSION 3.0)
|
||||
|
||||
MESSAGE("Current system is ${CMAKE_SYSTEM_NAME}")
|
||||
|
||||
# init
|
||||
SET(TD_LINUX FALSE)
|
||||
SET(TD_WINDOWS FALSE)
|
||||
SET(TD_DARWIN FALSE)
|
||||
|
||||
MESSAGE("Compiler ID: ${CMAKE_CXX_COMPILER_ID}")
|
||||
if(CMAKE_COMPILER_IS_GNUCXX MATCHES 1)
|
||||
set(CXX_COMPILER_IS_GNU TRUE)
|
||||
else()
|
||||
set(CXX_COMPILER_IS_GNU FALSE)
|
||||
endif()
|
||||
|
||||
MESSAGE("Current system name is ${CMAKE_SYSTEM_NAME}.")
|
||||
MESSAGE("Current system: ${CMAKE_SYSTEM_NAME}")
|
||||
|
||||
IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
|
||||
|
||||
|
@ -26,6 +23,8 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
|
|||
set(CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS "${CMAKE_SHARED_LIBRARY_CREATE_CXX_FLAGS} -undefined dynamic_lookup")
|
||||
ENDIF ()
|
||||
|
||||
MESSAGE("Current system processor: ${CMAKE_SYSTEM_PROCESSOR}")
|
||||
|
||||
IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux")
|
||||
|
||||
SET(TD_LINUX TRUE)
|
||||
|
@ -38,13 +37,27 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
|
|||
SET(TD_LINUX_32 TRUE)
|
||||
ENDIF ()
|
||||
|
||||
EXECUTE_PROCESS(COMMAND chmod 777 ${CMAKE_CURRENT_LIST_DIR}/../packaging/tools/get_os.sh)
|
||||
EXECUTE_PROCESS(COMMAND readlink /bin/sh OUTPUT_VARIABLE SHELL_LINK)
|
||||
MESSAGE(STATUS "The shell is: " ${SHELL_LINK})
|
||||
|
||||
IF (${SHELL_LINK} MATCHES "dash")
|
||||
EXECUTE_PROCESS(COMMAND ${CMAKE_CURRENT_LIST_DIR}/../packaging/tools/get_os.sh "" OUTPUT_VARIABLE TD_OS_INFO)
|
||||
ELSE ()
|
||||
EXECUTE_PROCESS(COMMAND sh ${CMAKE_CURRENT_LIST_DIR}/../packaging/tools/get_os.sh "" OUTPUT_VARIABLE TD_OS_INFO)
|
||||
ENDIF ()
|
||||
MESSAGE(STATUS "The current OS is " ${TD_OS_INFO})
|
||||
IF (${TD_OS_INFO} MATCHES "Alpine")
|
||||
SET(TD_ALPINE TRUE)
|
||||
ADD_DEFINITIONS("-D_ALPINE")
|
||||
ENDIF ()
|
||||
|
||||
ELSEIF (${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
|
||||
|
||||
SET(TD_DARWIN TRUE)
|
||||
SET(OSTYPE "macOS")
|
||||
ADD_DEFINITIONS("-DDARWIN -Wno-tautological-pointer-compare")
|
||||
|
||||
MESSAGE("Current system processor is ${CMAKE_SYSTEM_PROCESSOR}.")
|
||||
IF (${CMAKE_SYSTEM_PROCESSOR} MATCHES "arm64")
|
||||
MESSAGE("Current system arch is arm64")
|
||||
SET(TD_DARWIN_64 TRUE)
|
||||
|
@ -80,28 +93,40 @@ ELSEIF (${CMAKE_SYSTEM_NAME} MATCHES "Windows")
|
|||
ENDIF()
|
||||
|
||||
IF ("${CPUTYPE}" STREQUAL "")
|
||||
MESSAGE(STATUS "The current platform " ${CMAKE_SYSTEM_PROCESSOR} " is detected")
|
||||
|
||||
IF (CMAKE_SYSTEM_PROCESSOR MATCHES "(amd64)|(AMD64)")
|
||||
MESSAGE(STATUS "The current platform is amd64")
|
||||
MESSAGE(STATUS "Current platform is amd64")
|
||||
SET(PLATFORM_ARCH_STR "amd64")
|
||||
SET(TD_INTEL_64 TRUE)
|
||||
ADD_DEFINITIONS("-D_TD_X86_")
|
||||
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "(x86)|(X86)")
|
||||
MESSAGE(STATUS "The current platform is x86")
|
||||
MESSAGE(STATUS "Current platform is x86")
|
||||
SET(PLATFORM_ARCH_STR "i386")
|
||||
SET(TD_INTEL_32 TRUE)
|
||||
ADD_DEFINITIONS("-D_TD_X86_")
|
||||
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "armv7l")
|
||||
MESSAGE(STATUS "The current platform is aarch32")
|
||||
MESSAGE(STATUS "Current platform is aarch32")
|
||||
SET(PLATFORM_ARCH_STR "arm")
|
||||
SET(TD_ARM_32 TRUE)
|
||||
ADD_DEFINITIONS("-D_TD_ARM_")
|
||||
ADD_DEFINITIONS("-D_TD_ARM_32")
|
||||
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "(aarch64)|(arm64)")
|
||||
MESSAGE(STATUS "The current platform is aarch64")
|
||||
MESSAGE(STATUS "Current platform is aarch64")
|
||||
SET(PLATFORM_ARCH_STR "arm64")
|
||||
SET(TD_ARM_64 TRUE)
|
||||
ADD_DEFINITIONS("-D_TD_ARM_")
|
||||
ADD_DEFINITIONS("-D_TD_ARM_64")
|
||||
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "loongarch64")
|
||||
MESSAGE(STATUS "The current platform is loongarch64")
|
||||
SET(PLATFORM_ARCH_STR "loongarch64")
|
||||
SET(TD_LOONGARCH_64 TRUE)
|
||||
ADD_DEFINITIONS("-D_TD_LOONGARCH_")
|
||||
ADD_DEFINITIONS("-D_TD_LOONGARCH_64")
|
||||
ELSEIF (CMAKE_SYSTEM_PROCESSOR MATCHES "mips64")
|
||||
SET(PLATFORM_ARCH_STR "mips")
|
||||
MESSAGE(STATUS "input cpuType: mips64")
|
||||
SET(TD_MIPS_64 TRUE)
|
||||
ADD_DEFINITIONS("-D_TD_MIPS_")
|
||||
ADD_DEFINITIONS("-D_TD_MIPS_64")
|
||||
ENDIF ()
|
||||
ELSE ()
|
||||
# if generate ARM version:
|
||||
|
@ -118,6 +143,12 @@ ELSE ()
|
|||
ADD_DEFINITIONS("-D_TD_ARM_")
|
||||
ADD_DEFINITIONS("-D_TD_ARM_64")
|
||||
SET(TD_ARM_64 TRUE)
|
||||
ELSEIF (${CPUTYPE} MATCHES "loongarch64")
|
||||
SET(PLATFORM_ARCH_STR "loongarch64")
|
||||
MESSAGE(STATUS "input cpuType: loongarch64")
|
||||
SET(TD_LOONGARCH_64 TRUE)
|
||||
ADD_DEFINITIONS("-D_TD_LOONGARCH_")
|
||||
ADD_DEFINITIONS("-D_TD_LOONGARCH_64")
|
||||
ELSEIF (${CPUTYPE} MATCHES "mips64")
|
||||
SET(PLATFORM_ARCH_STR "mips")
|
||||
MESSAGE(STATUS "input cpuType: mips64")
|
||||
|
@ -137,7 +168,27 @@ ELSE ()
|
|||
ENDIF ()
|
||||
ENDIF ()
|
||||
|
||||
MESSAGE(STATUS "platform arch:" ${PLATFORM_ARCH_STR})
|
||||
IF(APPLE)
|
||||
set(CMAKE_THREAD_LIBS_INIT "-lpthread")
|
||||
set(CMAKE_HAVE_THREADS_LIBRARY 1)
|
||||
set(CMAKE_USE_WIN32_THREADS_INIT 0)
|
||||
set(CMAKE_USE_PTHREADS 1)
|
||||
set(THREADS_PREFER_PTHREAD_FLAG ON)
|
||||
ENDIF()
|
||||
|
||||
MESSAGE("C Compiler ID: ${CMAKE_C_COMPILER_ID}")
|
||||
MESSAGE("CXX Compiler ID: ${CMAKE_CXX_COMPILER_ID}")
|
||||
MESSAGE(STATUS "Platform arch:" ${PLATFORM_ARCH_STR})
|
||||
|
||||
set(TD_DEPS_DIR "x86")
|
||||
if (TD_LINUX)
|
||||
IF (TD_ARM_64 OR TD_ARM_32)
|
||||
set(TD_DEPS_DIR "arm")
|
||||
ELSEIF (TD_MIPS_64)
|
||||
set(TD_DEPS_DIR "mips")
|
||||
ELSE()
|
||||
set(TD_DEPS_DIR "x86")
|
||||
ENDIF()
|
||||
endif()
|
||||
MESSAGE(STATUS "DEPS_DIR: " ${TD_DEPS_DIR})
|
||||
|
||||
MESSAGE("C Compiler: ${CMAKE_C_COMPILER} (${CMAKE_C_COMPILER_ID}, ${CMAKE_C_COMPILER_VERSION})")
|
||||
MESSAGE("CXX Compiler: ${CMAKE_CXX_COMPILER} (${CMAKE_C_COMPILER_ID}, ${CMAKE_CXX_COMPILER_VERSION})")
|
||||
|
|
|
@@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
    SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
    SET(TD_VER_NUMBER "3.0.1.6")
    SET(TD_VER_NUMBER "3.1.0.0.alpha")
ENDIF ()

IF (DEFINED VERCOMPATIBLE)

@@ -16,7 +16,7 @@ find_program(HAVE_GIT NAMES git)
IF (DEFINED GITINFO)
    SET(TD_VER_GIT ${GITINFO})
ELSEIF (HAVE_GIT)
    execute_process(COMMAND git log -1 --format=%H WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR} OUTPUT_VARIABLE GIT_COMMITID)
    execute_process(COMMAND git log -1 --format=%H WORKING_DIRECTORY ${TD_COMMUNITY_DIR} OUTPUT_VARIABLE GIT_COMMITID)
    #message(STATUS "git log result:${GIT_COMMITID}")
    IF (GIT_COMMITID)
        string (REGEX REPLACE "[\n\t\r]" "" GIT_COMMITID ${GIT_COMMITID})

@@ -26,10 +26,27 @@ ELSEIF (HAVE_GIT)
        SET(TD_VER_GIT "no git commit id")
    ENDIF ()
ELSE ()
    message(STATUS "no git cmd")
    message(STATUS "no git found")
    SET(TD_VER_GIT "no git commit id")
ENDIF ()

IF (DEFINED GITINFOI)
    SET(TD_VER_GIT_INTERNAL ${GITINFOI})
ELSEIF (HAVE_GIT)
    execute_process(COMMAND git log -1 --format=%H WORKING_DIRECTORY ${PROJECT_SOURCE_DIR} OUTPUT_VARIABLE GIT_COMMITID)
    message(STATUS "git log result:${GIT_COMMITID}")
    IF (GIT_COMMITID)
        string (REGEX REPLACE "[\n\t\r]" "" GIT_COMMITID ${GIT_COMMITID})
        SET(TD_VER_GIT_INTERNAL ${GIT_COMMITID})
    ELSE ()
        message(STATUS "not a git repository")
        SET(TD_VER_GIT "no git commit id")
    ENDIF ()
ELSE ()
    message(STATUS "no git cmd")
    SET(TD_VER_GIT_INTERNAL "no git commit id")
ENDIF ()

IF (DEFINED VERDATE)
    SET(TD_VER_DATE ${VERDATE})
ELSE ()

@@ -65,13 +82,14 @@ ELSE ()
ENDIF ()

MESSAGE(STATUS "============= compile version parameter information start ============= ")
MESSAGE(STATUS "ver number:" ${TD_VER_NUMBER})
MESSAGE(STATUS "compatible ver number:" ${TD_VER_COMPATIBLE})
MESSAGE(STATUS "communit commit id:" ${TD_VER_GIT})
MESSAGE(STATUS "build date:" ${TD_VER_DATE})
MESSAGE(STATUS "ver type:" ${TD_VER_VERTYPE})
MESSAGE(STATUS "ver cpu:" ${TD_VER_CPUTYPE})
MESSAGE(STATUS "os type:" ${TD_VER_OSTYPE})
MESSAGE(STATUS "version: " ${TD_VER_NUMBER})
MESSAGE(STATUS "compatible: " ${TD_VER_COMPATIBLE})
MESSAGE(STATUS "commit id: " ${TD_VER_GIT})
MESSAGE(STATUS "build date: " ${TD_VER_DATE})
MESSAGE(STATUS "build type: " ${CMAKE_BUILD_TYPE})
MESSAGE(STATUS "type: " ${TD_VER_VERTYPE})
MESSAGE(STATUS "cpu: " ${TD_VER_CPUTYPE})
MESSAGE(STATUS "os: " ${TD_VER_OSTYPE})
MESSAGE(STATUS "============= compile version parameter information end ============= ")

STRING(REPLACE "." "_" TD_LIB_VER_NUMBER ${TD_VER_NUMBER})
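Because the block above only fills in defaults when `VERNUMBER`, `GITINFO`, `GITINFOI`, or `VERDATE` are undefined, a build or packaging script can pin the reported version metadata at configure time. A small sketch with illustrative values:

```bash
# override the version metadata that cmake.version would otherwise compute (illustrative values)
cmake .. \
  -DVERNUMBER=3.1.0.0.alpha \
  -DVERDATE="$(date)" \
  -DGITINFO="$(git rev-parse HEAD)"
```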
@@ -0,0 +1,12 @@

# geos
ExternalProject_Add(geos
    GIT_REPOSITORY https://github.com/libgeos/geos.git
    GIT_TAG 3.12.0
    SOURCE_DIR "${TD_CONTRIB_DIR}/geos"
    BINARY_DIR ""
    CONFIGURE_COMMAND ""
    BUILD_COMMAND ""
    INSTALL_COMMAND ""
    TEST_COMMAND ""
)

@@ -1,11 +1,29 @@

# rocksdb
ExternalProject_Add(rocksdb
    GIT_REPOSITORY https://github.com/taosdata-contrib/rocksdb.git
    GIT_TAG v6.23.3
    SOURCE_DIR "${TD_CONTRIB_DIR}/rocksdb"
    CONFIGURE_COMMAND ""
    BUILD_COMMAND ""
    INSTALL_COMMAND ""
    TEST_COMMAND ""
)
if (${BUILD_CONTRIB})
    ExternalProject_Add(rocksdb
        URL https://github.com/facebook/rocksdb/archive/refs/tags/v8.1.1.tar.gz
        URL_HASH MD5=3b4c97ee45df9c8a5517308d31ab008b
        DOWNLOAD_NO_PROGRESS 1
        DOWNLOAD_DIR "${TD_CONTRIB_DIR}/deps-download"
        SOURCE_DIR "${TD_CONTRIB_DIR}/rocksdb"
        CONFIGURE_COMMAND ""
        BUILD_COMMAND ""
        INSTALL_COMMAND ""
        TEST_COMMAND ""
    )
else()
    if (NOT ${TD_LINUX})
        ExternalProject_Add(rocksdb
            URL https://github.com/facebook/rocksdb/archive/refs/tags/v8.1.1.tar.gz
            URL_HASH MD5=3b4c97ee45df9c8a5517308d31ab008b
            DOWNLOAD_NO_PROGRESS 1
            DOWNLOAD_DIR "${TD_CONTRIB_DIR}/deps-download"
            SOURCE_DIR "${TD_CONTRIB_DIR}/rocksdb"
            CONFIGURE_COMMAND ""
            BUILD_COMMAND ""
            INSTALL_COMMAND ""
            TEST_COMMAND ""
        )
    endif()
endif()
@@ -2,6 +2,7 @@
# stub
ExternalProject_Add(stub
    GIT_REPOSITORY https://github.com/coolxv/cpp-stub.git
    GIT_TAG 5e903b8e
    GIT_SUBMODULES "src"
    SOURCE_DIR "${TD_CONTRIB_DIR}/cpp-stub"
    BINARY_DIR "${TD_CONTRIB_DIR}/cpp-stub/src"

@@ -2,7 +2,7 @@
# taosadapter
ExternalProject_Add(taosadapter
    GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
    GIT_TAG a11131c
    GIT_TAG main
    SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
    BINARY_DIR ""
    #BUILD_IN_SOURCE TRUE

@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
    GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
    GIT_TAG 0fb640b
    GIT_TAG main
    SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
    BINARY_DIR ""
    #BUILD_IN_SOURCE TRUE

@@ -2,7 +2,7 @@
# taosws-rs
ExternalProject_Add(taosws-rs
    GIT_REPOSITORY https://github.com/taosdata/taos-connector-rust.git
    GIT_TAG 0373a70
    GIT_TAG main
    SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosws-rs"
    BINARY_DIR ""
    #BUILD_IN_SOURCE TRUE

@@ -0,0 +1,9 @@
-DLINUX
-DWEBSOCKET
-I/usr/include
-Iinclude
-Iinclude/os
-Iinclude/common
-Iinclude/util
-Iinclude/libs/transport
-Itools/shell/inc
@ -77,11 +77,23 @@ if(${BUILD_WITH_LEVELDB})
|
|||
cat("${TD_SUPPORT_DIR}/leveldb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
|
||||
endif(${BUILD_WITH_LEVELDB})
|
||||
|
||||
# rocksdb
|
||||
if(${BUILD_WITH_ROCKSDB})
|
||||
cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
|
||||
add_definitions(-DUSE_ROCKSDB)
|
||||
endif(${BUILD_WITH_ROCKSDB})
|
||||
if (${BUILD_CONTRIB})
|
||||
if(${BUILD_WITH_ROCKSDB})
|
||||
cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
|
||||
add_definitions(-DUSE_ROCKSDB)
|
||||
endif()
|
||||
else()
|
||||
if (NOT ${TD_LINUX})
|
||||
if(${BUILD_WITH_ROCKSDB})
|
||||
cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
|
||||
add_definitions(-DUSE_ROCKSDB)
|
||||
endif(${BUILD_WITH_ROCKSDB})
|
||||
else()
|
||||
if(${BUILD_WITH_ROCKSDB})
|
||||
add_definitions(-DUSE_ROCKSDB)
|
||||
endif(${BUILD_WITH_ROCKSDB})
|
||||
endif()
|
||||
endif()
|
||||
|
||||
# canonical-raft
|
||||
if(${BUILD_WITH_CRAFT})
|
||||
|
@ -134,6 +146,11 @@ if(${BUILD_ADDR2LINE})
|
|||
endif(NOT ${TD_WINDOWS})
|
||||
endif(${BUILD_ADDR2LINE})
|
||||
|
||||
# geos
|
||||
if(${BUILD_GEOS})
|
||||
cat("${TD_SUPPORT_DIR}/geos_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
|
||||
endif()
|
||||
|
||||
# download dependencies
|
||||
configure_file(${CONTRIB_TMP_FILE} "${TD_CONTRIB_DIR}/deps-download/CMakeLists.txt")
|
||||
execute_process(COMMAND "${CMAKE_COMMAND}" -G "${CMAKE_GENERATOR}" .
|
||||
|
@ -170,8 +187,8 @@ if(${BUILD_TEST})
|
|||
PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/cpp-stub/src_darwin>
|
||||
)
|
||||
endif(${TD_DARWIN})
|
||||
|
||||
|
||||
|
||||
|
||||
endif(${BUILD_TEST})
|
||||
|
||||
# cJson
|
||||
|
@ -222,19 +239,113 @@ endif(${BUILD_WITH_LEVELDB})
|
|||
|
||||
# rocksdb
|
||||
# To support rocksdb build on ubuntu: sudo apt-get install libgflags-dev
|
||||
if(${BUILD_WITH_ROCKSDB})
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized")
|
||||
option(WITH_TESTS "" OFF)
|
||||
option(WITH_BENCHMARK_TOOLS "" OFF)
|
||||
option(WITH_TOOLS "" OFF)
|
||||
option(WITH_LIBURING "" OFF)
|
||||
option(ROCKSDB_BUILD_SHARED "Build shared versions of the RocksDB libraries" OFF)
|
||||
add_subdirectory(rocksdb EXCLUDE_FROM_ALL)
|
||||
target_include_directories(
|
||||
rocksdb
|
||||
PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/include>
|
||||
)
|
||||
endif(${BUILD_WITH_ROCKSDB})
|
||||
if (${BUILD_WITH_UV})
|
||||
if(${TD_LINUX})
|
||||
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS_REL}")
|
||||
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL}")
|
||||
IF ("${CMAKE_BUILD_TYPE}" STREQUAL "")
|
||||
SET(CMAKE_BUILD_TYPE Release)
|
||||
endif()
|
||||
endif(${TD_LINUX})
|
||||
endif (${BUILD_WITH_UV})
|
||||
|
||||
if (${BUILD_WITH_ROCKSDB})
|
||||
if (${BUILD_CONTRIB})
|
||||
if(${TD_LINUX})
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL} -Wno-error=maybe-uninitialized -Wno-error=unused-but-set-variable -Wno-error=unused-variable -Wno-error=unused-function -Wno-errno=unused-private-field -Wno-error=unused-result")
|
||||
if ("${CMAKE_BUILD_TYPE}" STREQUAL "")
|
||||
SET(CMAKE_BUILD_TYPE Release)
|
||||
endif()
|
||||
endif(${TD_LINUX})
|
||||
MESSAGE(STATUS "CXXXX STATUS CONFIG: " ${CMAKE_CXX_FLAGS})
|
||||
|
||||
if(${TD_DARWIN})
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized")
|
||||
endif(${TD_DARWIN})
|
||||
|
||||
if (${TD_DARWIN_ARM64})
|
||||
set(HAS_ARMV8_CRC true)
|
||||
endif(${TD_DARWIN_ARM64})
|
||||
|
||||
if (${TD_WINDOWS})
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4244 /wd4819")
|
||||
option(WITH_JNI "" OFF)
|
||||
option(WITH_MD_LIBRARY "build with MD" OFF)
|
||||
set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib)
|
||||
endif(${TD_WINDOWS})
|
||||
|
||||
|
||||
if(${TD_DARWIN})
|
||||
option(HAVE_THREAD_LOCAL "" OFF)
|
||||
option(WITH_IOSTATS_CONTEXT "" OFF)
|
||||
option(WITH_PERF_CONTEXT "" OFF)
|
||||
endif(${TD_DARWIN})
|
||||
|
||||
option(WITH_FALLOCATE "" OFF)
|
||||
option(WITH_JEMALLOC "" OFF)
|
||||
option(WITH_GFLAGS "" OFF)
|
||||
option(PORTABLE "" ON)
|
||||
option(WITH_LIBURING "" OFF)
|
||||
option(FAIL_ON_WARNINGS OFF)
|
||||
|
||||
option(WITH_TESTS "" OFF)
|
||||
option(WITH_BENCHMARK_TOOLS "" OFF)
|
||||
option(WITH_TOOLS "" OFF)
|
||||
option(WITH_LIBURING "" OFF)
|
||||
|
||||
option(ROCKSDB_BUILD_SHARED "Build shared versions of the RocksDB libraries" OFF)
|
||||
add_subdirectory(rocksdb EXCLUDE_FROM_ALL)
|
||||
target_include_directories(
|
||||
rocksdb
|
||||
PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/include>
|
||||
)
|
||||
else()
|
||||
if (NOT ${TD_LINUX})
|
||||
MESSAGE(STATUS "CXXXX STATUS CONFIG: " ${CMAKE_CXX_FLAGS})
|
||||
if(${TD_DARWIN})
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized")
|
||||
endif(${TD_DARWIN})
|
||||
|
||||
if (${TD_DARWIN_ARM64})
|
||||
set(HAS_ARMV8_CRC true)
|
||||
endif(${TD_DARWIN_ARM64})
|
||||
|
||||
if (${TD_WINDOWS})
|
||||
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /wd4244 /wd4819")
|
||||
option(WITH_JNI "" OFF)
|
||||
option(WITH_MD_LIBRARY "build with MD" OFF)
|
||||
set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib)
|
||||
endif(${TD_WINDOWS})
|
||||
|
||||
|
||||
if(${TD_DARWIN})
|
||||
option(HAVE_THREAD_LOCAL "" OFF)
|
||||
option(WITH_IOSTATS_CONTEXT "" OFF)
|
||||
option(WITH_PERF_CONTEXT "" OFF)
|
||||
endif(${TD_DARWIN})
|
||||
|
||||
option(WITH_FALLOCATE "" OFF)
|
||||
option(WITH_JEMALLOC "" OFF)
|
||||
option(WITH_GFLAGS "" OFF)
|
||||
option(PORTABLE "" ON)
|
||||
option(WITH_LIBURING "" OFF)
|
||||
option(FAIL_ON_WARNINGS OFF)
|
||||
|
||||
option(WITH_TESTS "" OFF)
|
||||
option(WITH_BENCHMARK_TOOLS "" OFF)
|
||||
option(WITH_TOOLS "" OFF)
|
||||
option(WITH_LIBURING "" OFF)
|
||||
|
||||
option(ROCKSDB_BUILD_SHARED "Build shared versions of the RocksDB libraries" OFF)
|
||||
add_subdirectory(rocksdb EXCLUDE_FROM_ALL)
|
||||
target_include_directories(
|
||||
rocksdb
|
||||
PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/rocksdb/include>
|
||||
)
|
||||
endif()
|
||||
|
||||
endif()
|
||||
endif()
|
||||
|
||||
# lucene
|
||||
# To support build on ubuntu: sudo apt-get install libboost-all-dev
|
||||
|
@ -242,10 +353,10 @@ if(${BUILD_WITH_LUCENE})
|
|||
option(ENABLE_TEST "Enable the tests" OFF)
|
||||
add_subdirectory(lucene EXCLUDE_FROM_ALL)
|
||||
target_include_directories(
|
||||
lucene++
|
||||
lucene++
|
||||
PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/lucene/include>
|
||||
)
|
||||
|
||||
)
|
||||
|
||||
endif(${BUILD_WITH_LUCENE})
|
||||
|
||||
# NuRaft
|
||||
|
@ -305,7 +416,7 @@ if(${BUILD_MSVCREGEX})
|
|||
target_include_directories(msvcregex
|
||||
PRIVATE "msvcregex"
|
||||
)
|
||||
target_link_libraries(msvcregex
|
||||
target_link_libraries(msvcregex
|
||||
INTERFACE Shell32
|
||||
)
|
||||
SET_TARGET_PROPERTIES(msvcregex PROPERTIES OUTPUT_NAME msvcregex)
|
||||
|
@ -365,8 +476,8 @@ if(${BUILD_WITH_BDB})
|
|||
IMPORTED_LOCATION "${CMAKE_CURRENT_SOURCE_DIR}/bdb/libdb.a"
|
||||
INTERFACE_INCLUDE_DIRECTORIES "${CMAKE_CURRENT_SOURCE_DIR}/bdb"
|
||||
)
|
||||
target_link_libraries(bdb
|
||||
INTERFACE pthread
|
||||
target_link_libraries(bdb
|
||||
INTERFACE pthread
|
||||
)
|
||||
endif(${BUILD_WITH_BDB})
|
||||
|
||||
|
@ -378,12 +489,12 @@ if(${BUILD_WITH_SQLITE})
|
|||
IMPORTED_LOCATION "${CMAKE_CURRENT_SOURCE_DIR}/sqlite/.libs/libsqlite3.a"
|
||||
INTERFACE_INCLUDE_DIRECTORIES "${CMAKE_CURRENT_SOURCE_DIR}/sqlite"
|
||||
)
|
||||
target_link_libraries(sqlite
|
||||
INTERFACE m
|
||||
INTERFACE pthread
|
||||
target_link_libraries(sqlite
|
||||
INTERFACE m
|
||||
INTERFACE pthread
|
||||
)
|
||||
if(NOT TD_WINDOWS)
|
||||
target_link_libraries(sqlite
|
||||
target_link_libraries(sqlite
|
||||
INTERFACE dl
|
||||
)
|
||||
endif(NOT TD_WINDOWS)
|
||||
|
@ -391,22 +502,22 @@ endif(${BUILD_WITH_SQLITE})
|
|||
|
||||
# addr2line
|
||||
if(${BUILD_ADDR2LINE})
|
||||
if(NOT ${TD_WINDOWS})
|
||||
check_include_file( "sys/types.h" HAVE_SYS_TYPES_H)
|
||||
check_include_file( "sys/stat.h" HAVE_SYS_STAT_H )
|
||||
check_include_file( "inttypes.h" HAVE_INTTYPES_H )
|
||||
check_include_file( "stddef.h" HAVE_STDDEF_H )
|
||||
check_include_file( "stdlib.h" HAVE_STDLIB_H )
|
||||
check_include_file( "string.h" HAVE_STRING_H )
|
||||
check_include_file( "memory.h" HAVE_MEMORY_H )
|
||||
check_include_file( "strings.h" HAVE_STRINGS_H )
|
||||
if(NOT ${TD_WINDOWS})
|
||||
check_include_file( "sys/types.h" HAVE_SYS_TYPES_H)
|
||||
check_include_file( "sys/stat.h" HAVE_SYS_STAT_H )
|
||||
check_include_file( "inttypes.h" HAVE_INTTYPES_H )
|
||||
check_include_file( "stddef.h" HAVE_STDDEF_H )
|
||||
check_include_file( "stdlib.h" HAVE_STDLIB_H )
|
||||
check_include_file( "string.h" HAVE_STRING_H )
|
||||
check_include_file( "memory.h" HAVE_MEMORY_H )
|
||||
check_include_file( "strings.h" HAVE_STRINGS_H )
|
||||
check_include_file( "stdint.h" HAVE_STDINT_H )
|
||||
check_include_file( "unistd.h" HAVE_UNISTD_H )
|
||||
check_include_file( "sgidefs.h" HAVE_SGIDEFS_H )
|
||||
check_include_file( "stdafx.h" HAVE_STDAFX_H )
|
||||
check_include_file( "elf.h" HAVE_ELF_H )
|
||||
check_include_file( "libelf.h" HAVE_LIBELF_H )
|
||||
check_include_file( "libelf/libelf.h" HAVE_LIBELF_LIBELF_H)
|
||||
check_include_file( "elf.h" HAVE_ELF_H )
|
||||
check_include_file( "libelf.h" HAVE_LIBELF_H )
|
||||
check_include_file( "libelf/libelf.h" HAVE_LIBELF_LIBELF_H)
|
||||
check_include_file( "alloca.h" HAVE_ALLOCA_H )
|
||||
check_include_file( "elfaccess.h" HAVE_ELFACCESS_H)
|
||||
check_include_file( "sys/elf_386.h" HAVE_SYS_ELF_386_H )
|
||||
|
@ -414,7 +525,7 @@ if(${BUILD_ADDR2LINE})
|
|||
check_include_file( "sys/elf_sparc.h" HAVE_SYS_ELF_SPARC_H)
|
||||
check_include_file( "sys/ia64/elf.h" HAVE_SYS_IA64_ELF_H )
|
||||
set(VERSION 0.3.1)
|
||||
set(PACKAGE_VERSION "\"${VERSION}\"")
|
||||
set(PACKAGE_VERSION "\"${VERSION}\"")
|
||||
configure_file(libdwarf/cmake/config.h.cmake config.h)
|
||||
file(GLOB_RECURSE LIBDWARF_SOURCES "libdwarf/src/lib/libdwarf/*.c")
|
||||
add_library(libdwarf STATIC ${LIBDWARF_SOURCES})
|
||||
|
@ -434,6 +545,23 @@ if(${BUILD_ADDR2LINE})
|
|||
endif(NOT ${TD_WINDOWS})
|
||||
endif(${BUILD_ADDR2LINE})
|
||||
|
||||
# geos
|
||||
if(${BUILD_GEOS})
|
||||
if(${TD_LINUX})
|
||||
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS_REL}")
|
||||
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS_REL}")
|
||||
IF ("${CMAKE_BUILD_TYPE}" STREQUAL "")
|
||||
SET(CMAKE_BUILD_TYPE Release)
|
||||
endif()
|
||||
endif(${TD_LINUX})
|
||||
option(BUILD_SHARED_LIBS "Build GEOS with shared libraries" OFF)
|
||||
add_subdirectory(geos EXCLUDE_FROM_ALL)
|
||||
unset(CMAKE_CXX_STANDARD CACHE) # undo libgeos's setting of global CMAKE_CXX_STANDARD
|
||||
target_include_directories(
|
||||
geos_c
|
||||
PUBLIC $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/geos/include>
|
||||
)
|
||||
endif(${BUILD_GEOS})
|
||||
|
||||
# ================================================================================================
|
||||
# Build test
|
||||
|
|
|
@ -1,6 +1,8 @@
|
|||
message("contrib test/rocksdb:" ${BUILD_DEPENDENCY_TESTS})
|
||||
|
||||
add_executable(rocksdbTest "")
|
||||
target_sources(rocksdbTest
|
||||
PRIVATE
|
||||
"${CMAKE_CURRENT_SOURCE_DIR}/main.c"
|
||||
)
|
||||
target_link_libraries(rocksdbTest rocksdb)
|
||||
target_link_libraries(rocksdbTest rocksdb)
|
||||
|
|
|
@ -1,4 +1,5 @@
|
|||
#include <assert.h>
|
||||
#include <bits/stdint-uintn.h>
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
|
@ -9,38 +10,307 @@
|
|||
const char DBPath[] = "rocksdb_c_simple_example";
|
||||
const char DBBackupPath[] = "/tmp/rocksdb_c_simple_example_backup";
|
||||
|
||||
static const int32_t endian_test_var = 1;
|
||||
#define IS_LITTLE_ENDIAN() (*(uint8_t *)(&endian_test_var) != 0)
|
||||
#define TD_RT_ENDIAN() (IS_LITTLE_ENDIAN() ? TD_LITTLE_ENDIAN : TD_BIG_ENDIAN)
|
||||
|
||||
#define POINTER_SHIFT(p, b) ((void *)((char *)(p) + (b)))
|
||||
static void *taosDecodeFixedU64(const void *buf, uint64_t *value) {
|
||||
if (IS_LITTLE_ENDIAN()) {
|
||||
memcpy(value, buf, sizeof(*value));
|
||||
} else {
|
||||
((uint8_t *)value)[7] = ((uint8_t *)buf)[0];
|
||||
((uint8_t *)value)[6] = ((uint8_t *)buf)[1];
|
||||
((uint8_t *)value)[5] = ((uint8_t *)buf)[2];
|
||||
((uint8_t *)value)[4] = ((uint8_t *)buf)[3];
|
||||
((uint8_t *)value)[3] = ((uint8_t *)buf)[4];
|
||||
((uint8_t *)value)[2] = ((uint8_t *)buf)[5];
|
||||
((uint8_t *)value)[1] = ((uint8_t *)buf)[6];
|
||||
((uint8_t *)value)[0] = ((uint8_t *)buf)[7];
|
||||
}
|
||||
|
||||
return POINTER_SHIFT(buf, sizeof(*value));
|
||||
}
|
||||
|
||||
// ---- Fixed U64
|
||||
static int32_t taosEncodeFixedU64(void **buf, uint64_t value) {
|
||||
if (buf != NULL) {
|
||||
if (IS_LITTLE_ENDIAN()) {
|
||||
memcpy(*buf, &value, sizeof(value));
|
||||
} else {
|
||||
((uint8_t *)(*buf))[0] = value & 0xff;
|
||||
((uint8_t *)(*buf))[1] = (value >> 8) & 0xff;
|
||||
((uint8_t *)(*buf))[2] = (value >> 16) & 0xff;
|
||||
((uint8_t *)(*buf))[3] = (value >> 24) & 0xff;
|
||||
((uint8_t *)(*buf))[4] = (value >> 32) & 0xff;
|
||||
((uint8_t *)(*buf))[5] = (value >> 40) & 0xff;
|
||||
((uint8_t *)(*buf))[6] = (value >> 48) & 0xff;
|
||||
((uint8_t *)(*buf))[7] = (value >> 56) & 0xff;
|
||||
}
|
||||
|
||||
*buf = POINTER_SHIFT(*buf, sizeof(value));
|
||||
}
|
||||
|
||||
return (int32_t)sizeof(value);
|
||||
}
|
||||
|
||||
typedef struct KV {
|
||||
uint64_t k1;
|
||||
uint64_t k2;
|
||||
} KV;
|
||||
|
||||
int kvSerial(KV *kv, char *buf) {
|
||||
int len = 0;
|
||||
len += taosEncodeFixedU64((void **)&buf, kv->k1);
|
||||
len += taosEncodeFixedU64((void **)&buf, kv->k2);
|
||||
return len;
|
||||
}
|
||||
const char *kvDBName(void *name) { return "kvDBname"; }
|
||||
int kvDBComp(void *state, const char *aBuf, size_t aLen, const char *bBuf, size_t bLen) {
|
||||
KV w1, w2;
|
||||
|
||||
memset(&w1, 0, sizeof(w1));
|
||||
memset(&w2, 0, sizeof(w2));
|
||||
|
||||
char *p1 = (char *)aBuf;
|
||||
char *p2 = (char *)bBuf;
|
||||
// p1 += 1;
|
||||
// p2 += 1;
|
||||
|
||||
p1 = taosDecodeFixedU64(p1, &w1.k1);
|
||||
p2 = taosDecodeFixedU64(p2, &w2.k1);
|
||||
|
||||
p1 = taosDecodeFixedU64(p1, &w1.k2);
|
||||
p2 = taosDecodeFixedU64(p2, &w2.k2);
|
||||
|
||||
if (w1.k1 < w2.k1) {
|
||||
return -1;
|
||||
} else if (w1.k1 > w2.k1) {
|
||||
return 1;
|
||||
}
|
||||
|
||||
if (w1.k2 < w2.k2) {
|
||||
return -1;
|
||||
} else if (w1.k2 > w2.k2) {
|
||||
return 1;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
int kvDeserial(KV *kv, char *buf) {
|
||||
char *p1 = (char *)buf;
|
||||
// p1 += 1;
|
||||
p1 = taosDecodeFixedU64(p1, &kv->k1);
|
||||
p1 = taosDecodeFixedU64(p1, &kv->k2);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int main(int argc, char const *argv[]) {
|
||||
rocksdb_t * db;
|
||||
rocksdb_t *db;
|
||||
rocksdb_backup_engine_t *be;
|
||||
rocksdb_options_t * options = rocksdb_options_create();
|
||||
rocksdb_options_set_create_if_missing(options, 1);
|
||||
|
||||
// open DB
|
||||
char *err = NULL;
|
||||
db = rocksdb_open(options, DBPath, &err);
|
||||
char *err = NULL;
|
||||
const char *path = "/tmp/db";
|
||||
|
||||
// Write
|
||||
rocksdb_writeoptions_t *writeoptions = rocksdb_writeoptions_create();
|
||||
rocksdb_put(db, writeoptions, "key", 3, "value", 5, &err);
|
||||
rocksdb_options_t *opt = rocksdb_options_create();
|
||||
rocksdb_options_set_create_if_missing(opt, 1);
|
||||
rocksdb_options_set_create_missing_column_families(opt, 1);
|
||||
|
||||
// Read
|
||||
rocksdb_readoptions_t *readoptions = rocksdb_readoptions_create();
|
||||
rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db));
|
||||
// rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db));
|
||||
int len = 1;
|
||||
char buf[256] = {0};
|
||||
size_t vallen = 0;
|
||||
char * val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
|
||||
printf("val:%s\n", val);
|
||||
char *val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
|
||||
snprintf(buf, vallen + 5, "val:%s", val);
|
||||
printf("%ld %ld %s\n", strlen(val), vallen, buf);
|
||||
|
||||
// Update
|
||||
// rocksdb_put(db, writeoptions, "key", 3, "eulav", 5, &err);
|
||||
char **cfName = calloc(len, sizeof(char *));
|
||||
for (int i = 0; i < len; i++) {
|
||||
cfName[i] = "test";
|
||||
}
|
||||
const rocksdb_options_t **cfOpt = malloc(len * sizeof(rocksdb_options_t *));
|
||||
for (int i = 0; i < len; i++) {
|
||||
cfOpt[i] = rocksdb_options_create_copy(opt);
|
||||
if (i != 0) {
|
||||
rocksdb_comparator_t *comp = rocksdb_comparator_create(NULL, NULL, kvDBComp, kvDBName);
|
||||
rocksdb_options_set_comparator((rocksdb_options_t *)cfOpt[i], comp);
|
||||
}
|
||||
}
|
||||
|
||||
// Delete
|
||||
rocksdb_delete(db, writeoptions, "key", 3, &err);
|
||||
rocksdb_column_family_handle_t **cfHandle = malloc(len * sizeof(rocksdb_column_family_handle_t *));
|
||||
db = rocksdb_open_column_families(opt, path, len, (const char *const *)cfName, cfOpt, cfHandle, &err);
|
||||
|
||||
// Read again
|
||||
val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
|
||||
printf("val:%s\n", val);
|
||||
{
|
||||
rocksdb_readoptions_t *rOpt = rocksdb_readoptions_create();
|
||||
size_t vlen = 0;
|
||||
|
||||
char *v = rocksdb_get_cf(db, rOpt, cfHandle[0], "key", strlen("key"), &vlen, &err);
|
||||
printf("Get value %s, and len = %d\n", v, (int)vlen);
|
||||
}
|
||||
|
||||
rocksdb_writeoptions_t *wOpt = rocksdb_writeoptions_create();
|
||||
rocksdb_writebatch_t *wBatch = rocksdb_writebatch_create();
|
||||
rocksdb_writebatch_put_cf(wBatch, cfHandle[0], "key", strlen("key"), "value", strlen("value"));
|
||||
rocksdb_write(db, wOpt, wBatch, &err);
|
||||
|
||||
rocksdb_readoptions_t *rOpt = rocksdb_readoptions_create();
|
||||
size_t vlen = 0;
|
||||
|
||||
{
|
||||
rocksdb_writeoptions_t *wOpt = rocksdb_writeoptions_create();
|
||||
rocksdb_writebatch_t *wBatch = rocksdb_writebatch_create();
|
||||
for (int i = 0; i < 100; i++) {
|
||||
char buf[128] = {0};
|
||||
KV kv = {.k1 = (100 - i) % 26, .k2 = i % 26};
|
||||
kvSerial(&kv, buf);
|
||||
rocksdb_writebatch_put_cf(wBatch, cfHandle[1], buf, sizeof(kv), "value", strlen("value"));
|
||||
}
|
||||
rocksdb_write(db, wOpt, wBatch, &err);
|
||||
}
|
||||
{
|
||||
{
|
||||
char buf[128] = {0};
|
||||
KV kv = {.k1 = 0, .k2 = 0};
|
||||
kvSerial(&kv, buf);
|
||||
char *v = rocksdb_get_cf(db, rOpt, cfHandle[1], buf, sizeof(kv), &vlen, &err);
|
||||
printf("Get value %s, and len = %d, xxxx\n", v, (int)vlen);
|
||||
rocksdb_iterator_t *iter = rocksdb_create_iterator_cf(db, rOpt, cfHandle[1]);
|
||||
rocksdb_iter_seek_to_first(iter);
|
||||
int i = 0;
|
||||
while (rocksdb_iter_valid(iter)) {
|
||||
size_t klen, vlen;
|
||||
const char *key = rocksdb_iter_key(iter, &klen);
|
||||
const char *value = rocksdb_iter_value(iter, &vlen);
|
||||
KV kv;
|
||||
kvDeserial(&kv, (char *)key);
|
||||
printf("kv1: %d\t kv2: %d, len:%d, value = %s\n", (int)(kv.k1), (int)(kv.k2), (int)(klen), value);
|
||||
i++;
|
||||
rocksdb_iter_next(iter);
|
||||
}
|
||||
rocksdb_iter_destroy(iter);
|
||||
}
|
||||
{
|
||||
char buf[128] = {0};
|
||||
KV kv = {.k1 = 0, .k2 = 0};
|
||||
int len = kvSerial(&kv, buf);
|
||||
rocksdb_iterator_t *iter = rocksdb_create_iterator_cf(db, rOpt, cfHandle[1]);
|
||||
rocksdb_iter_seek(iter, buf, len);
|
||||
if (!rocksdb_iter_valid(iter)) {
|
||||
printf("invalid iter");
|
||||
}
|
||||
{
|
||||
char buf[128] = {0};
|
||||
KV kv = {.k1 = 100, .k2 = 0};
|
||||
int len = kvSerial(&kv, buf);
|
||||
|
||||
rocksdb_iterator_t *iter = rocksdb_create_iterator_cf(db, rOpt, cfHandle[1]);
|
||||
rocksdb_iter_seek(iter, buf, len);
|
||||
if (!rocksdb_iter_valid(iter)) {
|
||||
printf("invalid iter\n");
|
||||
rocksdb_iter_seek_for_prev(iter, buf, len);
|
||||
if (!rocksdb_iter_valid(iter)) {
|
||||
printf("stay invalid iter\n");
|
||||
} else {
|
||||
size_t klen = 0, vlen = 0;
|
||||
const char *key = rocksdb_iter_key(iter, &klen);
|
||||
const char *value = rocksdb_iter_value(iter, &vlen);
|
||||
KV kv;
|
||||
kvDeserial(&kv, (char *)key);
|
||||
printf("kv1: %d\t kv2: %d, len:%d, value = %s\n", (int)(kv.k1), (int)(kv.k2), (int)(klen), value);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// char *v = rocksdb_get_cf(db, rOpt, cfHandle[0], "key", strlen("key"), &vlen, &err);
|
||||
// printf("Get value %s, and len = %d\n", v, (int)vlen);
|
||||
|
||||
rocksdb_column_family_handle_destroy(cfHandle[0]);
|
||||
rocksdb_column_family_handle_destroy(cfHandle[1]);
|
||||
rocksdb_close(db);
|
||||
|
||||
// {
|
||||
// // rocksdb_options_t *Options = rocksdb_options_create();
|
||||
// db = rocksdb_open(comm, path, &err);
|
||||
// if (db != NULL) {
|
||||
// rocksdb_options_t *cfo = rocksdb_options_create_copy(comm);
|
||||
// rocksdb_comparator_t *cmp1 = rocksdb_comparator_create(NULL, NULL, kvDBComp, kvDBName);
|
||||
// rocksdb_options_set_comparator(cfo, cmp1);
|
||||
|
||||
// rocksdb_column_family_handle_t *handle = rocksdb_create_column_family(db, cfo, "cf1", &err);
|
||||
|
||||
// rocksdb_column_family_handle_destroy(handle);
|
||||
// rocksdb_close(db);
|
||||
// db = NULL;
|
||||
// }
|
||||
// }
|
||||
|
||||
// int ncf = 2;
|
||||
|
||||
// rocksdb_column_family_handle_t **pHandle = malloc(ncf * sizeof(rocksdb_column_family_handle_t *));
|
||||
|
||||
// {
|
||||
// rocksdb_options_t *options = rocksdb_options_create_copy(comm);
|
||||
|
||||
// rocksdb_comparator_t *cmp1 = rocksdb_comparator_create(NULL, NULL, kvDBComp, kvDBName);
|
||||
// rocksdb_options_t *dbOpts1 = rocksdb_options_create_copy(comm);
|
||||
// rocksdb_options_t *dbOpts2 = rocksdb_options_create_copy(comm);
|
||||
// rocksdb_options_set_comparator(dbOpts2, cmp1);
|
||||
// // rocksdb_column_family_handle_t *cf = rocksdb_create_column_family(db, dbOpts1, "cmp1", &err);
|
||||
|
||||
// const char *pName[] = {"default", "cf1"};
|
||||
|
||||
// const rocksdb_options_t **pOpts = malloc(ncf * sizeof(rocksdb_options_t *));
|
||||
// pOpts[0] = dbOpts1;
|
||||
// pOpts[1] = dbOpts2;
|
||||
|
||||
// rocksdb_options_t *allOptions = rocksdb_options_create_copy(comm);
|
||||
// db = rocksdb_open_column_families(allOptions, "test", ncf, pName, pOpts, pHandle, &err);
|
||||
// }
|
||||
|
||||
// // rocksdb_options_t *options = rocksdb_options_create();
|
||||
// // rocksdb_options_set_create_if_missing(options, 1);
|
||||
|
||||
// // //rocksdb_open_column_families(const rocksdb_options_t *options, const char *name, int num_column_families,
|
||||
// // const char *const *column_family_names,
|
||||
// // const rocksdb_options_t *const *column_family_options,
|
||||
// // rocksdb_column_family_handle_t **column_family_handles, char **errptr);
|
||||
|
||||
// for (int i = 0; i < 100; i++) {
|
||||
// char buf[128] = {0};
|
||||
|
||||
// rocksdb_writeoptions_t *wopt = rocksdb_writeoptions_create();
|
||||
// KV kv = {.k1 = i, .k2 = i};
|
||||
// kvSerial(&kv, buf);
|
||||
// rocksdb_put_cf(db, wopt, pHandle[0], buf, strlen(buf), (const char *)&i, sizeof(i), &err);
|
||||
// }
|
||||
|
||||
// rocksdb_close(db);
|
||||
// Write
|
||||
// rocksdb_writeoptions_t *writeoptions = rocksdb_writeoptions_create();
|
||||
// rocksdb_put(db, writeoptions, "key", 3, "value", 5, &err);
|
||||
|
||||
//// Read
|
||||
// rocksdb_readoptions_t *readoptions = rocksdb_readoptions_create();
|
||||
// rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db));
|
||||
// size_t vallen = 0;
|
||||
// char *val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
|
||||
// printf("val:%s\n", val);
|
||||
|
||||
//// Update
|
||||
//// rocksdb_put(db, writeoptions, "key", 3, "eulav", 5, &err);
|
||||
|
||||
//// Delete
|
||||
// rocksdb_delete(db, writeoptions, "key", 3, &err);
|
||||
|
||||
//// Read again
|
||||
// val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err);
|
||||
// printf("val:%s\n", val);
|
||||
|
||||
// rocksdb_close(db);
|
||||
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
|
|
After Width: | Height: | Size: 13 KiB |
After Width: | Height: | Size: 14 KiB |
|
@ -4,7 +4,7 @@ if(${BUILD_DOCS})
|
|||
find_package(Doxygen)
|
||||
if (DOXYGEN_FOUND)
|
||||
# Build the doc
|
||||
set(DOXYGEN_IN ${TD_SOURCE_DIR}/docs/Doxyfile.in)
|
||||
set(DOXYGEN_IN ${TD_SOURCE_DIR}/docs/doxgen/Doxyfile.in)
|
||||
set(DOXYGEN_OUT ${CMAKE_BINARY_DIR}/Doxyfile)
|
||||
|
||||
configure_file(${DOXYGEN_IN} ${DOXYGEN_OUT} @ONLY)
|
||||
|
|
|
@ -204,7 +204,7 @@ group vnodeProcessReqs()
|
|||
s -> s:
|
||||
note right
|
||||
save the requests in log store
|
||||
and wait for comfirmation or
|
||||
and wait for confirmation or
|
||||
other cases
|
||||
end note
|
||||
|
||||
|
@ -236,7 +236,7 @@ s -> s: syncAppendReqToLogStore()
|
|||
s -> v: walWrite()
|
||||
|
||||
alt has meta req
|
||||
<- s: comfirmation
|
||||
<- s: confirmation
|
||||
else
|
||||
s -> v: vnodeApplyReqs()
|
||||
end
|
||||
|
|
|
@ -1,10 +1,11 @@
|
|||
---
|
||||
title: TDengine Documentation
|
||||
sidebar_label: Documentation Home
|
||||
description: This website contains the user manuals for TDengine, an open-source, cloud-native time-series database optimized for IoT, Connected Cars, and Industrial IoT.
|
||||
slug: /
|
||||
---
|
||||
|
||||
TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It’s written mainly for architects, developers, and system administrators.
|
||||
TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) [time-series database](https://tdengine.com/tsdb/) optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators.
|
||||
|
||||
To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.
|
||||
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Introduction
|
||||
description: This document introduces the major features, competitive advantages, typical use cases, and benchmarks of TDengine.
|
||||
toc_max_heading_level: 2
|
||||
---
|
||||
|
||||
|
@ -43,7 +44,7 @@ For more details on features, please read through the entire documentation.
|
|||
|
||||
## Competitive Advantages
|
||||
|
||||
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb), with the following advantages.
|
||||
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb/), with the following advantages.
|
||||
|
||||
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression.
|
||||
|
||||
|
@ -56,7 +57,7 @@ By making full use of [characteristics of time series data](https://tdengine.com
|
|||
|
||||
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
|
||||
|
||||
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub. There is an active developer community, and over 140k running instances worldwide.
|
||||
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub. There is an active developer community, and over 140k running instances worldwide.
|
||||
|
||||
With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced.
|
||||
|
||||
|
@ -108,8 +109,8 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
|
|||
|
||||
| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
||||
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------- |
|
||||
| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
|
||||
| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
|
||||
| Very large total processing capacity | | | √ | TDengine's cluster functions can easily improve processing capacity via multi-server coordination. |
|
||||
| Extremely high-speed data processing | | | √ | TDengine's storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
|
||||
| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
|
||||
|
||||
### System Maintenance Requirements
|
||||
|
@ -122,13 +123,12 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
|
|||
|
||||
## Comparison with other databases
|
||||
|
||||
- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/2022/02/23/4975.html)
|
||||
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
|
||||
- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
|
||||
- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
|
||||
- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
|
||||
- [TDengine vs. InfluxDB](https://tdengine.com/tsdb-comparison-influxdb-vs-tdengine/)
|
||||
- [TDengine vs. TimescaleDB](https://tdengine.com/tsdb-comparison-timescaledb-vs-tdengine/)
|
||||
- [TDengine vs. OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/)
|
||||
- [TDengine vs. Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/)
|
||||
|
||||
## More readings
|
||||
- [Introduction to Time-Series Database](https://tdengine.com/tsdb/)
|
||||
- [Introduction to TDengine competitive advantages](https://tdengine.com/tdengine/)
|
||||
|
||||
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Concepts
|
||||
description: This document describes the basic concepts of TDengine, including the supertable.
|
||||
---
|
||||
|
||||
In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; 3. Each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
|
||||
|
@ -126,7 +127,7 @@ To make full use of time-series data characteristics, TDengine adopts a strategy
|
|||
|
||||
If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
|
||||
|
||||
TDengine suggests using DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the `current`, `voltage`, `phase` as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
|
||||
TDengine suggests using DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the `current`, `voltage`, `phase` as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index, and won't build the index on any metrics stored. Column wise storage is used.
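As a minimal illustration of this convention (a sketch only; the table and column names simply reuse the smart-meter example from this chapter):

```sql
-- one table per data collection point; the first column must be the timestamp
CREATE TABLE d1001 (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT);
```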
|
||||
|
||||
Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
|
||||
|
||||
|
|
|
@ -1,11 +1,12 @@
|
|||
---
|
||||
sidebar_label: Docker
|
||||
title: Quick Install on Docker
|
||||
sidebar_label: Docker
|
||||
description: This document describes how to install TDengine in a Docker container and perform queries and inserts.
|
||||
---
|
||||
|
||||
This document describes how to install TDengine in a Docker container and perform queries and inserts.
|
||||
|
||||
- The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com).
|
||||
- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
|
||||
- To get started with TDengine in a non-containerized environment, see [Quick Install from Package](../../get-started/package).
|
||||
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
|
||||
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Package
|
||||
title: Quick Install from Package
|
||||
sidebar_label: Package
|
||||
description: This document describes how to install TDengine on Linux, Windows, and macOS and perform queries and inserts.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -9,7 +10,7 @@ import PkgListV3 from "/components/PkgListV3";
|
|||
|
||||
This document describes how to install TDengine on Linux/Windows/macOS and perform queries and inserts.
|
||||
|
||||
- The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com).
|
||||
- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
|
||||
- To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker).
|
||||
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
|
||||
|
||||
|
@ -17,7 +18,20 @@ The full package of TDengine includes the TDengine Server (`taosd`), TDengine Cl
|
|||
|
||||
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.
|
||||
|
||||
The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.tz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.
|
||||
TDengine OSS is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.
|
||||
|
||||
## Operating environment requirements
|
||||
In the Linux system, the minimum requirements for the operating environment are as follows:
|
||||
|
||||
Linux kernel version - 3.10.0-1160.83.1.el7.x86_64;
|
||||
|
||||
glibc version - 2.17;
|
||||
|
||||
If you compile and install from a cloned copy of the source code, the following requirements must also be met:
|
||||
|
||||
cmake version - 3.26.4 or above;
|
||||
|
||||
gcc version - 9.3.1 or above;
|
||||
|
||||
## Installation
|
||||
|
||||
|
@ -101,7 +115,7 @@ sudo apt-get install tdengine
|
|||
|
||||
:::tip
|
||||
This installation method is supported only for Debian and Ubuntu.
|
||||
::::
|
||||
:::
|
||||
</TabItem>
|
||||
<TabItem label="Windows" value="windows">
|
||||
|
||||
|
@ -187,7 +201,7 @@ You can use the TDengine CLI to monitor your TDengine deployment and execute ad
|
|||
|
||||
<TabItem label="Windows" value="windows">
|
||||
|
||||
After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengine Server.
|
||||
After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privilege to start TDengine Server. Please run `sc start taosadapter` or run `C:\TDengine\taosadapter.exe` with administrator privilege to start taosAdapter to provide http/REST service.
|
||||
|
||||
## Command Line Interface (CLI)
|
||||
|
||||
|
@ -201,16 +215,20 @@ After the installation is complete, double-click the /applications/TDengine to s
|
|||
|
||||
The following `launchctl` commands can help you manage TDengine service:
|
||||
|
||||
- Start TDengine Server: `launchctl start com.tdengine.taosd`
|
||||
- Start TDengine Server: `sudo launchctl start com.tdengine.taosd`
|
||||
|
||||
- Stop TDengine Server: `launchctl stop com.tdengine.taosd`
|
||||
- Stop TDengine Server: `sudo launchctl stop com.tdengine.taosd`
|
||||
|
||||
- Check TDengine Server status: `launchctl list | grep taosd`
|
||||
- Check TDengine Server status: `sudo launchctl list | grep taosd`
|
||||
|
||||
- Check TDengine Server status details: `launchctl print system/com.tdengine.taosd`
|
||||
|
||||
:::info
|
||||
|
||||
- The `launchctl` command does not require _root_ privileges. You don't need to use the `sudo` command.
|
||||
- The first content returned by the `launchctl list | grep taosd` command is the PID of the program, if '-' indicates that the TDengine service is not running.
|
||||
- Please use `sudo` to run `launchctl` to manage _com.tdengine.taosd_ with administrator privileges.
|
||||
- The administrator privilege is required for service management to enhance security.
|
||||
- Troubleshooting:
|
||||
- The first column returned by the command `launchctl list | grep taosd` is the PID of the program. If it's `-`, that means the TDengine service is not running.
|
||||
- If the service is abnormal, please check the `launchd.log` file from the system log or the `taosdlog` from the `/var/log/taos` directory for more information.
|
||||
|
||||
:::
|
||||
|
||||
|
|
|
@ -0,0 +1,7 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-0.5 -1 32 32" width="50" height="50">
|
||||
<g fill="#5865f2">
|
||||
<path
|
||||
d="M26.0015 6.9529C24.0021 6.03845 21.8787 5.37198 19.6623 5C19.3833 5.48048 19.0733 6.13144 18.8563 6.64292C16.4989 6.30193 14.1585 6.30193 11.8336 6.64292C11.6166 6.13144 11.2911 5.48048 11.0276 5C8.79575 5.37198 6.67235 6.03845 4.6869 6.9529C0.672601 12.8736 -0.41235 18.6548 0.130124 24.3585C2.79599 26.2959 5.36889 27.4739 7.89682 28.2489C8.51679 27.4119 9.07477 26.5129 9.55525 25.5675C8.64079 25.2265 7.77283 24.808 6.93587 24.312C7.15286 24.1571 7.36986 23.9866 7.57135 23.8161C12.6241 26.1255 18.0969 26.1255 23.0876 23.8161C23.3046 23.9866 23.5061 24.1571 23.7231 24.312C22.8861 24.808 22.0182 25.2265 21.1037 25.5675C21.5842 26.5129 22.1422 27.4119 22.7621 28.2489C25.2885 27.4739 27.8769 26.2959 30.5288 24.3585C31.1952 17.7559 29.4733 12.0212 26.0015 6.9529ZM10.2527 20.8402C8.73376 20.8402 7.49382 19.4608 7.49382 17.7714C7.49382 16.082 8.70276 14.7025 10.2527 14.7025C11.7871 14.7025 13.0425 16.082 13.0115 17.7714C13.0115 19.4608 11.7871 20.8402 10.2527 20.8402ZM20.4373 20.8402C18.9183 20.8402 17.6768 19.4608 17.6768 17.7714C17.6768 16.082 18.8873 14.7025 20.4373 14.7025C21.9717 14.7025 23.2271 16.082 23.1961 17.7714C23.1961 19.4608 21.9872 20.8402 20.4373 20.8402Z"
|
||||
></path>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 1.3 KiB |
|
@ -0,0 +1,6 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-1 -2 18 18" width="50" height="50">
|
||||
<path
|
||||
fill="#000"
|
||||
d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"
|
||||
></path>
|
||||
</svg>
|
After Width: | Height: | Size: 705 B |
|
@ -1,8 +1,15 @@
|
|||
---
|
||||
title: Get Started
|
||||
description: This article describes how to install TDengine and test its performance.
|
||||
description: This document describes how to install TDengine on various platforms.
|
||||
---
|
||||
|
||||
import GitHubSVG from './github.svg'
|
||||
import DiscordSVG from './discord.svg'
|
||||
import TwitterSVG from './twitter.svg'
|
||||
import YouTubeSVG from './youtube.svg'
|
||||
import LinkedInSVG from './linkedin.svg'
|
||||
import StackOverflowSVG from './stackoverflow.svg'
|
||||
|
||||
You can install and run TDengine on Linux/Windows/macOS machines as well as Docker containers. You can also deploy TDengine as a managed service with TDengine Cloud.
|
||||
|
||||
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).
|
||||
|
@ -12,4 +19,25 @@ import DocCardList from '@theme/DocCardList';
|
|||
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
|
||||
|
||||
<DocCardList items={useCurrentSidebarCategory().items}/>
|
||||
```
|
||||
```
|
||||
|
||||
## Join TDengine Community
|
||||
|
||||
<table width="100%">
|
||||
<tr align="center" style={{border:0}}>
|
||||
<td width="16%" style={{border:0}}><a href="https://github.com/taosdata/TDengine" target="_blank"><GitHubSVG /></a></td>
|
||||
<td width="16%" style={{border:0}}><a href="https://discord.com/invite/VZdSuUg4pS" target="_blank"><DiscordSVG /></a></td>
|
||||
<td width="16%" style={{border:0}}><a href="https://twitter.com/TDengineDB" target="_blank"><TwitterSVG /></a></td>
|
||||
<td width="16%" style={{border:0}}><a href="https://www.youtube.com/@tdengine" target="_blank"><YouTubeSVG /></a></td>
|
||||
<td width="16%" style={{border:0}}><a href="https://www.linkedin.com/company/tdengine" target="_blank"><LinkedInSVG /></a></td>
|
||||
<td width="16%" style={{border:0}}><a href="https://stackoverflow.com/questions/tagged/tdengine" target="_blank"><StackOverflowSVG /></a></td>
|
||||
</tr>
|
||||
<tr align="center" style={{border:0,backgroundColor:'transparent'}}>
|
||||
<td width="16%" style={{border:0,padding:0}}><a href="https://github.com/taosdata/TDengine" target="_blank">Star GitHub</a></td>
|
||||
<td width="16%" style={{border:0,padding:0}}><a href="https://discord.com/invite/VZdSuUg4pS" target="_blank">Join Discord</a></td>
|
||||
<td width="16%" style={{border:0,padding:0}}><a href="https://twitter.com/TDengineDB" target="_blank">Follow Twitter</a></td>
|
||||
<td width="16%" style={{border:0,padding:0}}><a href="https://www.youtube.com/@tdengine" target="_blank">Subscribe YouTube</a></td>
|
||||
<td width="16%" style={{border:0,padding:0}}><a href="https://www.linkedin.com/company/tdengine" target="_blank">Follow LinkedIn</a></td>
|
||||
<td width="16%" style={{border:0,padding:0}}><a href="https://stackoverflow.com/questions/tagged/tdengine" target="_blank">Ask StackOverflow</a></td>
|
||||
</tr>
|
||||
</table>
|
||||
|
|
|
@ -0,0 +1,6 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -2 24 24" width="50" height="50">
|
||||
<path
|
||||
fill="rgb(10, 102, 194)"
|
||||
d="M20.5 2h-17A1.5 1.5 0 002 3.5v17A1.5 1.5 0 003.5 22h17a1.5 1.5 0 001.5-1.5v-17A1.5 1.5 0 0020.5 2zM8 19H5v-9h3zM6.5 8.25A1.75 1.75 0 118.3 6.5a1.78 1.78 0 01-1.8 1.75zM19 19h-3v-4.74c0-1.42-.6-1.93-1.38-1.93A1.74 1.74 0 0013 14.19a.66.66 0 000 .14V19h-3v-9h2.9v1.3a3.11 3.11 0 012.7-1.4c1.55 0 3.36.86 3.36 3.66z"
|
||||
></path>
|
||||
</svg>
|
After Width: | Height: | Size: 461 B |
|
@ -0,0 +1,7 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-8 0 48 48" width="50" height="50">
|
||||
<path d="M26 41v-9h4v13H0V32h4v9h22z" fill="#BCBBBB" />
|
||||
<path
|
||||
d="M23 34l.8-3-16.1-3.3L7 31l16 3zM9.2 23.2l15 7 1.4-3-15-7-1.4 3zm4.2-7.4L26 26.4l2.1-2.5-12.7-10.6-2.1 2.5zM21.5 8l-2.7 2 9.9 13.3 2.7-2L21.5 8zM7 38h16v-3H7v3z"
|
||||
fill="#F48024"
|
||||
/>
|
||||
</svg>
|
After Width: | Height: | Size: 350 B |
|
@ -0,0 +1,7 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 -2 24 24" width="50" height="50">
|
||||
<g fill="rgb(29, 155, 240)">
|
||||
<path
|
||||
d="M23.643 4.937c-.835.37-1.732.62-2.675.733.962-.576 1.7-1.49 2.048-2.578-.9.534-1.897.922-2.958 1.13-.85-.904-2.06-1.47-3.4-1.47-2.572 0-4.658 2.086-4.658 4.66 0 .364.042.718.12 1.06-3.873-.195-7.304-2.05-9.602-4.868-.4.69-.63 1.49-.63 2.342 0 1.616.823 3.043 2.072 3.878-.764-.025-1.482-.234-2.11-.583v.06c0 2.257 1.605 4.14 3.737 4.568-.392.106-.803.162-1.227.162-.3 0-.593-.028-.877-.082.593 1.85 2.313 3.198 4.352 3.234-1.595 1.25-3.604 1.995-5.786 1.995-.376 0-.747-.022-1.112-.065 2.062 1.323 4.51 2.093 7.14 2.093 8.57 0 13.255-7.098 13.255-13.254 0-.2-.005-.402-.014-.602.91-.658 1.7-1.477 2.323-2.41z"
|
||||
></path>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 772 B |
|
@ -0,0 +1,11 @@
|
|||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-2 -8 32 32" width="50" height="50">
|
||||
<g>
|
||||
<g>
|
||||
<path
|
||||
d="M27.9727 3.12324C27.6435 1.89323 26.6768 0.926623 25.4468 0.597366C23.2197 2.24288e-07 14.285 0 14.285 0C14.285 0 5.35042 2.24288e-07 3.12323 0.597366C1.89323 0.926623 0.926623 1.89323 0.597366 3.12324C2.24288e-07 5.35042 0 10 0 10C0 10 2.24288e-07 14.6496 0.597366 16.8768C0.926623 18.1068 1.89323 19.0734 3.12323 19.4026C5.35042 20 14.285 20 14.285 20C14.285 20 23.2197 20 25.4468 19.4026C26.6768 19.0734 27.6435 18.1068 27.9727 16.8768C28.5701 14.6496 28.5701 10 28.5701 10C28.5701 10 28.5677 5.35042 27.9727 3.12324Z"
|
||||
fill="#FF0000"
|
||||
></path>
|
||||
<path d="M11.4253 14.2854L18.8477 10.0004L11.4253 5.71533V14.2854Z" fill="white"></path>
|
||||
</g>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 801 B |
|
@ -12,4 +12,4 @@ When using REST connection, the feature of bulk pulling can be enabled if the si
|
|||
{{#include docs/examples/java/src/main/java/com/taos/example/WSConnectExample.java:main}}
|
||||
```
|
||||
|
||||
More configuration about connection,please refer to [Java Connector](/reference/connector/java)
|
||||
More configuration about connection, please refer to [Java Connector](/reference/connector/java)
|
||||
|
|
|
@ -1,3 +1,3 @@
|
|||
```php title="原生连接"
|
||||
```php title="native"
|
||||
{{#include docs/examples/php/connect.php}}
|
||||
```
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
sidebar_label: Connect
|
||||
title: Connect to TDengine
|
||||
description: "How to establish connections to TDengine and how to install and use TDengine connectors."
|
||||
sidebar_label: Connect
|
||||
description: This document describes how to establish connections to TDengine and how to install and use TDengine connectors.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -33,7 +33,7 @@ There are two ways for a connector to establish connections to TDengine:
|
|||
|
||||
For REST and native connections, connectors provide similar APIs for performing operations and running SQL statements on your databases. The main difference is the method of establishing the connection, which is not visible to users.
|
||||
|
||||
Key differences:
|
||||
Key differences:
|
||||
|
||||
3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade.
|
||||
1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
|
||||
|
@ -83,7 +83,7 @@ If `maven` is used to manage the projects, what needs to be done is only adding
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.0.0</version>
|
||||
<version>3.2.1</version>
|
||||
</dependency>
|
||||
```
|
||||
|
||||
|
@ -198,7 +198,7 @@ The sample code below are based on dotnet6.0, they may need to be adjusted if yo
|
|||
<TabItem label="R" value="r">
|
||||
|
||||
1. Download [taos-jdbcdriver-version-dist.jar](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/3.0.0/).
|
||||
2. Install the dependency package `RJDBC`:
|
||||
2. Install the dependency package `RJDBC`:
|
||||
|
||||
```R
|
||||
install.packages("RJDBC")
|
||||
|
@ -213,7 +213,7 @@ If the client driver (taosc) is already installed, then the C connector is alrea
|
|||
</TabItem>
|
||||
<TabItem label="PHP" value="php">
|
||||
|
||||
**Download Source Code Package and Unzip:**
|
||||
**Download Source Code Package and Unzip: **
|
||||
|
||||
```shell
|
||||
curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive/refs/tags/v1.0.2.tar.gz \
|
||||
|
@ -223,13 +223,13 @@ curl -L -o php-tdengine.tar.gz https://github.com/Yurunsoft/php-tdengine/archive
|
|||
|
||||
> Version number `v1.0.2` is only an example; it can be replaced with any newer version. Please check the available versions at [TDengine PHP Connector Releases](https://github.com/Yurunsoft/php-tdengine/releases).
|
||||
|
||||
**Non-Swoole Environment:**
|
||||
**Non-Swoole Environment: **
|
||||
|
||||
```shell
|
||||
phpize && ./configure && make -j && make install
|
||||
```
|
||||
|
||||
**Specify TDengine Location:**
|
||||
**Specify TDengine Location: **
|
||||
|
||||
```shell
|
||||
phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
|
||||
|
@ -238,7 +238,7 @@ phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 &&
|
|||
> `--with-tdengine-dir=` is followed by the TDengine installation location.
|
||||
> This option is useful when the TDengine location cannot be found automatically, or on macOS.
|
||||
|
||||
**Swoole Environment:**
|
||||
**Swoole Environment: **
|
||||
|
||||
```shell
|
||||
phpize && ./configure --enable-swoole && make -j && make install
|
||||
|
@ -288,6 +288,6 @@ Prior to establishing connection, please make sure TDengine is already running a
|
|||
</Tabs>
|
||||
|
||||
:::tip
|
||||
If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.tdengine.com/train-faq/faq).
|
||||
If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](../../train-faq/faq).
|
||||
|
||||
:::
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Data Model
|
||||
description: This document describes the data model of TDengine.
|
||||
---
|
||||
|
||||
The data model employed by TDengine is similar to that of a relational database: you have to create databases and tables. You must design the data model based on your own business and application requirements, and you should design the [STable](/concept/#super-table-stable) (an abbreviation for super table) schema to fit your data. This chapter explains the big picture without getting into syntactical details.
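As a brief, hedged sketch of what such a design can look like (all names below are illustrative, borrowed from the smart-meter example, and are not prescribed by this chapter):

```sql
CREATE DATABASE power;
USE power;
-- a supertable (STable) describes one type of data collection point
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
  TAGS (location BINARY(64), groupid INT);
-- each device gets its own child table created from the supertable with its static tag values
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
```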
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Insert Using SQL
|
||||
description: This document describes how to insert data into TDengine using SQL.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -29,25 +30,31 @@ Application programs can execute `INSERT` statement through connectors to insert
|
|||
The below SQL statement is used to insert one row into table "d1001".
|
||||
|
||||
```sql
|
||||
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
|
||||
INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31);
|
||||
```
|
||||
|
||||
`ts1` is a Unix timestamp. Only timestamps later than the current time minus the database KEEP parameter are allowed. For further detail, refer to [TDengine SQL insert timestamp section](/taos-sql/insert).
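As an additional hedged illustration (not part of the original example), the current time can also be written with the `NOW` keyword, which always satisfies the KEEP constraint:

```sql
INSERT INTO d1001 VALUES (NOW, 10.3, 219, 0.31);
```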
|
||||
|
||||
### Insert Multiple Rows
|
||||
|
||||
Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
|
||||
|
||||
```sql
|
||||
INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
|
||||
INSERT INTO d1001 VALUES (ts1, 10.2, 220, 0.23) (ts2, 10.3, 218, 0.25);
|
||||
```
|
||||
|
||||
`ts1` and `ts2` are Unix timestamps. Only timestamps later than the current time minus the database KEEP parameter are allowed. For further detail, refer to [TDengine SQL insert timestamp section](/taos-sql/insert).
|
||||
|
||||
### Insert into Multiple Tables
|
||||
|
||||
Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
|
||||
|
||||
```sql
|
||||
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
|
||||
INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31) (ts2, 12.6, 218, 0.33) d1002 VALUES (ts3, 12.3, 221, 0.31);
|
||||
```
|
||||
|
||||
`ts1`, `ts2`, and `ts3` are Unix timestamps. Only timestamps later than the current time minus the database KEEP parameter are allowed. For further detail, refer to [TDengine SQL insert timestamp section](/taos-sql/insert).
|
||||
|
||||
For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
|
||||
|
||||
:::info
|
||||
|
|
|
@ -0,0 +1,47 @@
|
|||
---
|
||||
title: Write from Kafka
|
||||
description: This document describes how to insert data into TDengine using Kafka.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
import TabItem from "@theme/TabItem";
|
||||
import PyKafka from "./_py_kafka.mdx";
|
||||
|
||||
## About Kafka
|
||||
|
||||
Apache Kafka is an open-source distributed event streaming platform, used by thousands of companies for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications. For the key concepts of kafka, please refer to [kafka documentation](https://kafka.apache.org/documentation/#gettingStarted).
|
||||
|
||||
### kafka topic
|
||||
|
||||
Messages in Kafka are organized by topics. A topic may have one or more partitions. We can manage kafka topics through `kafka-topics`.
|
||||
|
||||
create a topic named `kafka-events`:
|
||||
|
||||
```
|
||||
bin/kafka-topics.sh --create --topic kafka-events --bootstrap-server localhost:9092
|
||||
```
|
||||
|
||||
Alter `kafka-events` topic to set partitions to 3:
|
||||
|
||||
```
|
||||
bin/kafka-topics.sh --alter --topic kafka-events --partitions 3 --bootstrap-server=localhost:9092
|
||||
```
|
||||
|
||||
Show all topics and partitions in Kafka:
|
||||
|
||||
```
|
||||
bin/kafka-topics.sh --bootstrap-server=localhost:9092 --describe
|
||||
```
|
||||
|
||||
## Insert into TDengine
|
||||
|
||||
We can write data into TDengine via SQL or Schemaless. For more information, please refer to [Insert Using SQL](/develop/insert-data/sql-writing/) or [High Performance Writing](/develop/insert-data/high-volume/) or [Schemaless Writing](/reference/schemaless/).
|
||||
|
||||
## Examples
|
||||
|
||||
<Tabs defaultValue="Python" groupId="lang">
|
||||
<TabItem label="Python" value="Python">
|
||||
<PyKafka />
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: InfluxDB Line Protocol
|
||||
title: InfluxDB Line Protocol
|
||||
sidebar_label: InfluxDB Line Protocol
|
||||
description: This document describes how to insert data into TDengine using the InfluxDB Line Protocol.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -37,9 +38,9 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
|
|||
- All the data in `tag_set` will be converted to NCHAR type automatically.
|
||||
- Each data in `field_set` must be self-descriptive for its data type. For example 1.2f32 means a value 1.2 of float type. Without the "f" type suffix, it will be treated as type double.
|
||||
- Multiple kinds of precision can be used for the `timestamp` field. Time precision can be from nanosecond (ns) to hour (h).
|
||||
- You can configure smlChildTableName in taos.cfg to specify table names, for example, `smlChildTableName=tname`. You can insert `st,tname=cpul,t1=4 c1=3 1626006833639000000` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
|
||||
- It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur.(smlDataFormat in taos.cfg default to false after version of 3.0.1.3)
|
||||
:::
|
||||
- By default, the child table name is generated automatically by a rule that guarantees its uniqueness. But you can configure `smlChildTableName` in taos.cfg to use a tag value as the table name if that tag value is globally unique. For example, if a tag is called `tname` and you set `smlChildTableName=tname` in taos.cfg, then inserting `st,tname=cpu1,t1=4 c1=3 1626006833639000000` creates the child table `cpu1` automatically. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
|
||||
- It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur. (smlDataFormat in taos.cfg defaults to false since version 3.0.1.3 and is no longer used since 3.0.3.0.)
|
||||
:::
|
||||
|
||||
For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol)
|
||||
|
||||
|
@ -68,7 +69,7 @@ For more details please refer to [InfluxDB Line Protocol](https://docs.influxdat
|
|||
|
||||
## Query Examples
|
||||
|
||||
If you want query the data of `location=California.LosAngeles,groupid=2`,here is the query SQL:
|
||||
If you want to query the data of `location=California.LosAngeles,groupid=2`, here is the query SQL:
|
||||
|
||||
```sql
|
||||
SELECT * FROM meters WHERE location = "California.LosAngeles" AND groupid = 2;
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: OpenTSDB Line Protocol
|
||||
title: OpenTSDB Line Protocol
|
||||
sidebar_label: OpenTSDB Line Protocol
|
||||
description: This document describes how to insert data into TDengine using the OpenTSDB Line Protocol.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -32,7 +33,7 @@ For example:
|
|||
meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
|
||||
```
|
||||
|
||||
- The defult child table name is generated by rules.You can configure smlChildTableName in taos.cfg to specify child table names, for example, `smlChildTableName=tname`. You can insert `meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
|
||||
- By default, the child table name is generated automatically by a rule that guarantees its uniqueness. But you can configure `smlChildTableName` in taos.cfg to use a tag value as the table name if that tag value is globally unique. For example, if a tag is called `tname` and you set `smlChildTableName=tname` in taos.cfg, then inserting `meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3` creates the child table `cpu1` automatically. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
|
||||
Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
|
||||
|
||||
## Examples
|
||||
|
@ -83,7 +84,7 @@ Query OK, 4 row(s) in set (0.005399s)
|
|||
|
||||
## Query Examples
|
||||
|
||||
If you want query the data of `location=California.LosAngeles groupid=3`,here is the query SQL:
|
||||
If you want to query the data of `location=California.LosAngeles groupid=3`, here is the query SQL:
|
||||
|
||||
```sql
|
||||
SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 3;
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: OpenTSDB JSON Protocol
|
||||
title: OpenTSDB JSON Protocol
|
||||
sidebar_label: OpenTSDB JSON Protocol
|
||||
description: This document describes how to insert data into TDengine using the OpenTSDB JSON protocol.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -47,9 +48,8 @@ Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http
|
|||
:::note
|
||||
|
||||
- In JSON protocol, strings will be converted to NCHAR type and numeric values will be converted to double type.
|
||||
- Only data in array format is accepted and so an array must be used even if there is only one row.
|
||||
- The defult child table name is generated by rules.You can configure smlChildTableName in taos.cfg to specify child table names, for example, `smlChildTableName=tname`. You can insert `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` and the cpu1 table will be automatically created. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
|
||||
:::
|
||||
- By default, the child table name is generated automatically by a rule that guarantees its uniqueness. But you can configure `smlChildTableName` in taos.cfg to use a tag value as the table name if that tag value is globally unique. For example, if you set `smlChildTableName=tname` in taos.cfg and insert a record with `"tags": { "host": "web02", "dc": "lga", "tname": "cpu1" }`, the child table `cpu1` is created automatically. Note that if multiple rows have the same tname but different tag_set values, the tag_set of the first row is used to create the table and the others are ignored.
|
||||
:::
|
||||
|
||||
## Examples
|
||||
|
||||
|
@ -97,7 +97,7 @@ Query OK, 2 row(s) in set (0.004076s)
|
|||
|
||||
## Query Examples
|
||||
|
||||
If you want query the data of "tags": {"location": "California.LosAngeles", "groupid": 1},here is the query SQL:
|
||||
If you want to query the data of "tags": {"location": "California.LosAngeles", "groupid": 1}, here is the query SQL:
|
||||
|
||||
```sql
|
||||
SELECT * FROM `meters.current` WHERE location = "California.LosAngeles" AND groupid = 3;
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: High Performance Writing
|
||||
title: High Performance Writing
|
||||
sidebar_label: High Performance Writing
|
||||
description: This document describes how to achieve high performance when writing data into TDengine.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -27,7 +28,7 @@ From the perspective of application program, you need to consider:
|
|||
- Writing to known existing tables is more efficient than writing to uncertain tables in automatic creating mode because the latter needs to check whether the table exists or not before actually writing data into it.
|
||||
- Writing in SQL is more efficient than writing in schemaless mode because schemaless writing creates tables automatically and may alter the table schema.
|
||||
|
||||
Application programs need to take care of the above factors and try to take advantage of them. The application progam should write to single table in each write batch. The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections needs to be tuned to a proper value too to achieve the best writing throughput.
|
||||
Application programs need to take care of the above factors and try to take advantage of them. The application program should write to a single table in each write batch. The batch size needs to be tuned to a proper value for a specific system. The number of concurrent connections also needs to be tuned to a proper value to achieve the best writing throughput.
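To make this concrete, here is a minimal sketch of single-table batching with the taospy connector. The host, credentials, database name, table name, schema, and sample rows are assumptions for illustration only, not part of the sample programs described below.

```python
import taos

# Assumed setup: database `power` with a table `meters_0001(ts TIMESTAMP, current FLOAT, voltage INT)`
conn = taos.connect(host="localhost", user="root", password="taosdata", database="power")

def write_batch(rows):
    # rows: list of (ts_ms, current, voltage) tuples that all belong to the same table
    values = " ".join(f"({ts}, {current}, {voltage})" for ts, current, voltage in rows)
    # One INSERT per table per batch keeps the write path efficient
    conn.execute(f"INSERT INTO meters_0001 VALUES {values}")

write_batch([
    (1626006833639, 10.3, 219),
    (1626006833640, 10.1, 220),
])
conn.close()
```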
|
||||
|
||||
### Data Source
|
||||
|
||||
|
@ -48,7 +49,7 @@ If the data source is Kafka, then the application program is a consumer of Kafka
|
|||
|
||||
On the server side, the database configuration parameter `vgroups` needs to be set carefully to maximize the system performance. If it's set too low, the system capability can't be fully utilized; if it's set too high, unnecessary resource competition may result. A common recommendation for the `vgroups` parameter is twice the number of CPU cores. However, depending on the actual system resources, it may still need to be tuned.
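For example, assuming a 16-core server and a database named `power` (both assumptions for illustration), `vgroups` could be set when the database is created:

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
# Assuming 16 CPU cores, start from vgroups = 2 x 16 = 32 and tune from there
conn.execute("CREATE DATABASE IF NOT EXISTS power VGROUPS 32")
conn.close()
```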
|
||||
|
||||
For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config)。
|
||||
For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config).
|
||||
|
||||
## Sample Programs
|
||||
|
||||
|
@ -97,7 +98,7 @@ The main Program is responsible for:
|
|||
3. Start reading threads
|
||||
4. Output writing speed every 10 seconds
|
||||
|
||||
The main program provides 4 parameters for tuning:
|
||||
The main program provides 4 parameters for tuning:
|
||||
|
||||
1. The number of reading threads, default value is 1
|
||||
2. The number of writing threads, default value is 2
|
||||
|
@ -191,7 +192,7 @@ TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
|
|||
|
||||
If you want to launch the sample program on a remote server, please follow the steps below:
|
||||
|
||||
1. Package the sample programs. Execute below command under directory `TDengine/docs/examples/java` :
|
||||
1. Package the sample programs. Execute the command below in the directory `TDengine/docs/examples/java`:
|
||||
```
|
||||
mvn package
|
||||
```
|
||||
|
@ -384,7 +385,7 @@ SQLWriter class encapsulates the logic of composing SQL and writing data. Please
|
|||
pip3 install faster-fifo
|
||||
```
|
||||
|
||||
3. Click the "Copy" in the above sample programs to copy `fast_write_example.py` 、 `sql_writer.py` and `mockdatasource.py`.
|
||||
3. Click the "Copy" in the above sample programs to copy `fast_write_example.py`, `sql_writer.py`, and `mockdatasource.py`.
|
||||
|
||||
4. Execute the program
|
||||
|
|
@ -0,0 +1,121 @@
|
|||
### python Kafka client
|
||||
|
||||
For Python Kafka clients, please refer to [kafka client](https://cwiki.apache.org/confluence/display/KAFKA/Clients#Clients-Python). In this document, we use [kafka-python](http://github.com/dpkp/kafka-python).
|
||||
|
||||
### consume from Kafka
|
||||
|
||||
The simplest way to consume messages from Kafka is to read them one by one. The demo is as follows:
|
||||
|
||||
```
|
||||
from kafka import KafkaConsumer
|
||||
consumer = KafkaConsumer('my_favorite_topic')
|
||||
for msg in consumer:
|
||||
print (msg)
|
||||
```
|
||||
|
||||
For higher performance, we can consume messages from Kafka in batches. The demo is as follows:
|
||||
|
||||
```
|
||||
from kafka import KafkaConsumer
|
||||
consumer = KafkaConsumer('my_favorite_topic')
|
||||
while True:
|
||||
msgs = consumer.poll(timeout_ms=500, max_records=1000)
|
||||
if msgs:
|
||||
print (msgs)
|
||||
```
|
||||
|
||||
### multi-threading
|
||||
|
||||
For even higher performance, we can process data from Kafka in multiple threads. We can use Python's ThreadPoolExecutor to achieve multithreading. A skeleton is shown below, followed by a fuller sketch that combines it with batch polling:
|
||||
|
||||
```
|
||||
from concurrent.futures import ThreadPoolExecutor, Future
|
||||
pool = ThreadPoolExecutor(max_workers=10)
|
||||
pool.submit(...)
|
||||
```
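To make the skeleton above concrete, here is a minimal sketch that polls record batches from Kafka and hands each batch to the thread pool. The topic name, worker count, and the `handle_batch` function are assumptions for illustration; writing the records into TDengine is left as a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor
from kafka import KafkaConsumer

def handle_batch(records):
    # Placeholder: parse the records and write them into TDengine here
    total = sum(len(v) for v in records.values())
    print(f"processing {total} records")

consumer = KafkaConsumer('my_favorite_topic')   # assumed topic name
pool = ThreadPoolExecutor(max_workers=10)

while True:
    # records is a dict of {TopicPartition: [ConsumerRecord, ...]}
    records = consumer.poll(timeout_ms=500, max_records=1000)
    if records:
        pool.submit(handle_batch, records)
```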
|
||||
|
||||
### multi-process
|
||||
|
||||
For even higher performance, we sometimes use multiprocessing. In this case, the number of Kafka consumers should not be greater than the number of Kafka topic partitions. The demo is as follows:
|
||||
|
||||
```
|
||||
from multiprocessing import Process
|
||||
|
||||
ps = []
|
||||
for i in range(5):
|
||||
p = Process(target=Consumer().consume)
|
||||
p.start()
|
||||
ps.append(p)
|
||||
|
||||
for p in ps:
|
||||
p.join()
|
||||
```
|
||||
|
||||
In addition to Python's built-in multithreading and multiprocessing libraries, we can also use the third-party library gunicorn.
|
||||
|
||||
### examples
|
||||
|
||||
<details>
|
||||
<summary>kafka_example_perform</summary>
|
||||
|
||||
`kafka_example_perform` is the entry point of the examples.
|
||||
|
||||
```py
|
||||
{{#include docs/examples/python/kafka_example_perform.py}}
|
||||
```
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>kafka_example_common</summary>
|
||||
|
||||
`kafka_example_common` is the common code of the examples.
|
||||
|
||||
```py
|
||||
{{#include docs/examples/python/kafka_example_common.py}}
|
||||
```
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>kafka_example_producer</summary>
|
||||
|
||||
`kafka_example_producer` is `producer`, which is responsible for generating test data and sending it to kafka.
|
||||
|
||||
```py
|
||||
{{#include docs/examples/python/kafka_example_producer.py}}
|
||||
```
|
||||
</details>
|
||||
|
||||
<details>
|
||||
<summary>kafka_example_consumer</summary>
|
||||
|
||||
`kafka_example_consumer` is `consumer`, which is responsible for consuming data from kafka and writing it to TDengine.
|
||||
|
||||
```py
|
||||
{{#include docs/examples/python/kafka_example_consumer.py}}
|
||||
```
|
||||
</details>
|
||||
|
||||
### execute Python examples
|
||||
|
||||
<details>
|
||||
<summary>execute Python examples</summary>
|
||||
|
||||
1. install and start up `kafka`
|
||||
2. install python3 and pip
|
||||
3. install `taospy` by pip
|
||||
4. install `kafka-python` by pip
|
||||
5. execute this example
|
||||
|
||||
The entry point of this example is `kafka_example_perform.py`. For more information about usage, please use `--help` command.
|
||||
|
||||
```
|
||||
python3 kafka_example_perform.py --help
|
||||
```
|
||||
|
||||
For example, the following command creates 100 subtables and inserts 20000 rows into each table, with a Kafka max poll of 100, 1 thread, and 1 process per thread.
|
||||
|
||||
```
|
||||
python3 kafka_example_perform.py -table-count=100 -table-items=20000 -max-poll=100 -threads=1 -processes=1
|
||||
```
|
||||
|
||||
</details>
|
|
@ -0,0 +1,3 @@
|
|||
```rust
|
||||
{{#include docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs}}
|
||||
```
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Insert Data
|
||||
description: This document describes how to insert data into TDengine.
|
||||
---
|
||||
|
||||
TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row, or in batches. Data from one or more collection points can be inserted simultaneously. Data can be inserted with multiple threads, and out of order data and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create STables and tables in advance if using schemaless protocols, and the schemas can be adjusted automatically based on the data being inserted.
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
---
|
||||
title: Query Data
|
||||
description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
|
||||
description: This document describes how to query data in TDengine and how to perform synchronous and asynchronous queries using connectors.
|
||||
---
|
||||
|
||||
import Tabs from "@theme/Tabs";
|
||||
|
@ -20,10 +20,10 @@ import CAsync from "./_c_async.mdx";
|
|||
|
||||
## Introduction
|
||||
|
||||
SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
|
||||
SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
|
||||
|
||||
- Query on single column or multiple columns
|
||||
- Filter on tags or data columns:>, <, =, <\>, like
|
||||
- Filter on tags or data columns: >, <, =, <\>, like
|
||||
- Grouping of results: `Group By`
- Sorting of results: `Order By`
- Limit the number of results: `Limit/Offset`
|
||||
- Windowed aggregate queries for time windows (interval), session windows (session), and state windows (state_window)
|
||||
- Arithmetic on columns of numeric types or aggregate results
|
||||
|
@ -160,7 +160,7 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
|
|||
:::note
|
||||
|
||||
1. With either REST connection or native connection, the above sample code works well.
|
||||
2. Please note that `use db` can't be used in case of REST connection because it's stateless.
|
||||
2. Please note that `use db` can't be used in case of REST connection because it's stateless. You can specify the database name by either the REST endpoint's parameter or <db_name>.<table_name> in the SQL command, as in the example after this note.
|
||||
|
||||
:::
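For example, with a REST connection you can qualify the table name with the database name instead of relying on `USE db`. The following is a minimal sketch using the `taosrest` module of taospy; the URL, credentials, and the `power.meters` table are assumptions for illustration.

```python
import taosrest

# REST connections are stateless, so qualify table names as <db_name>.<table_name>
conn = taosrest.connect(url="http://localhost:6041", user="root", password="taosdata")
result = conn.query("SELECT COUNT(*) FROM power.meters")
print(result.data)
```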
|
||||
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
sidebar_label: Stream Processing
|
||||
description: "The TDengine stream processing engine combines data inserts, preprocessing, analytics, real-time computation, and alerting into a single component."
|
||||
title: Stream Processing
|
||||
sidebar_label: Stream Processing
|
||||
description: This document describes the stream processing component of TDengine.
|
||||
---
|
||||
|
||||
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. In a traditional time-series solution, this generally requires the deployment of stream processing systems such as Kafka or Flink. However, the complexity of such systems increases the cost of development and maintenance.
|
||||
|
|
|
@ -7,6 +7,7 @@ title: Data Subscription
|
|||
import Tabs from "@theme/Tabs";
|
||||
import TabItem from "@theme/TabItem";
|
||||
import Java from "./_sub_java.mdx";
|
||||
import JavaWS from "./_sub_java_ws.mdx"
|
||||
import Python from "./_sub_python.mdx";
|
||||
import Go from "./_sub_go.mdx";
|
||||
import Rust from "./_sub_rust.mdx";
|
||||
|
@ -22,7 +23,7 @@ By subscribing to a topic, a consumer can obtain the latest data in that topic i
|
|||
|
||||
To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers.
|
||||
|
||||
|
||||
Tip: Data subscription consumes data from the WAL. If some WAL files are deleted according to the WAL retention policy, the deleted data can no longer be consumed. Therefore, you need to set a reasonable value for the `WAL_RETENTION_PERIOD` or `WAL_RETENTION_SIZE` parameter when creating the database, and make sure your application consumes the data in a timely manner so that no data is lost. This behavior is similar to Kafka and other widely used message queue products.
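For example (the database name and retention period are assumptions for illustration), the retention period can be set when the database is created; a sketch using taospy:

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
# Keep WAL data for one hour (3600 seconds) so that subscribers have time to consume it
conn.execute("CREATE DATABASE IF NOT EXISTS power WAL_RETENTION_PERIOD 3600")
conn.close()
```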
|
||||
|
||||
## Data Schema and API
|
||||
|
||||
|
@ -80,10 +81,6 @@ Set<String> subscription() throws SQLException;
|
|||
|
||||
ConsumerRecords<V> poll(Duration timeout) throws SQLException;
|
||||
|
||||
void commitAsync();
|
||||
|
||||
void commitAsync(OffsetCommitCallback callback);
|
||||
|
||||
void commitSync() throws SQLException;
|
||||
|
||||
void close() throws SQLException;
|
||||
|
@ -94,22 +91,27 @@ void close() throws SQLException;
|
|||
<TabItem value="Python" label="Python">
|
||||
|
||||
```python
|
||||
class TaosConsumer():
|
||||
def __init__(self, *topics, **configs)
|
||||
class Consumer:
|
||||
def subscribe(self, topics):
|
||||
pass
|
||||
|
||||
def __iter__(self)
|
||||
def unsubscribe(self):
|
||||
pass
|
||||
|
||||
def __next__(self)
|
||||
def poll(self, timeout: float = 1.0):
|
||||
pass
|
||||
|
||||
def sync_next(self)
|
||||
|
||||
def subscription(self)
|
||||
def assignment(self):
|
||||
pass
|
||||
|
||||
def unsubscribe(self)
|
||||
def poll(self, timeout: float = 1.0):
|
||||
pass
|
||||
|
||||
def close(self)
|
||||
|
||||
def __del__(self)
|
||||
def close(self):
|
||||
pass
|
||||
|
||||
def commit(self, message):
|
||||
pass
|
||||
```
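A minimal end-to-end sketch of this interface is shown below. The connection parameters and the topic name are assumptions, and the full flow (creating, subscribing, polling, committing, and closing a consumer) is explained step by step in the sections that follow.

```python
from taos.tmq import Consumer

consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
consumer.subscribe(["topic_meters"])   # assumed topic name
try:
    while True:
        res = consumer.poll(1)
        if not res:
            continue
        if res.error() is not None:
            raise res.error()
        for block in res.value():
            print(block.fetchall())
        consumer.commit(res)
finally:
    consumer.unsubscribe()
    consumer.close()
```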
|
||||
|
||||
</TabItem>
|
||||
|
@ -117,19 +119,22 @@ class TaosConsumer():
|
|||
<TabItem label="Go" value="Go">
|
||||
|
||||
```go
|
||||
func NewConsumer(conf *Config) (*Consumer, error)
|
||||
func NewConsumer(conf *tmq.ConfigMap) (*Consumer, error)
|
||||
|
||||
func (c *Consumer) Close() error
|
||||
// rebalanceCb is reserved for compatibility purpose
|
||||
func (c *Consumer) Subscribe(topic string, rebalanceCb RebalanceCb) error
|
||||
|
||||
func (c *Consumer) Commit(ctx context.Context, message unsafe.Pointer) error
|
||||
// rebalanceCb is reserved for compatibility purpose
|
||||
func (c *Consumer) SubscribeTopics(topics []string, rebalanceCb RebalanceCb) error
|
||||
|
||||
func (c *Consumer) FreeMessage(message unsafe.Pointer)
|
||||
func (c *Consumer) Poll(timeoutMs int) tmq.Event
|
||||
|
||||
func (c *Consumer) Poll(timeout time.Duration) (*Result, error)
|
||||
|
||||
func (c *Consumer) Subscribe(topics []string) error
|
||||
// tmq.TopicPartition is reserved for compatibility purpose
|
||||
func (c *Consumer) Commit() ([]tmq.TopicPartition, error)
|
||||
|
||||
func (c *Consumer) Unsubscribe() error
|
||||
|
||||
func (c *Consumer) Close() error
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -219,8 +224,8 @@ A database including one supertable and two subtables is created as follows:
|
|||
|
||||
```sql
|
||||
DROP DATABASE IF EXISTS tmqdb;
|
||||
CREATE DATABASE tmqdb;
|
||||
CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16) TAGS(t1 INT, t3 VARCHAR(16));
|
||||
CREATE DATABASE tmqdb WAL_RETENTION_PERIOD 3600;
|
||||
CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16));
|
||||
CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0");
|
||||
CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1");
|
||||
INSERT INTO tmqdb.ctb0 VALUES(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00');
|
||||
|
@ -235,6 +240,8 @@ The following SQL statement creates a topic in TDengine:
|
|||
CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
|
||||
```
|
||||
|
||||
- There is an upper limit to the number of topics created, controlled by the parameter tmqMaxTopicNum, with a default of 20
|
||||
|
||||
Multiple subscription types are supported.
|
||||
|
||||
#### Subscribe to a Column
|
||||
|
@ -256,14 +263,15 @@ You can subscribe to a topic through a SELECT statement. Statements that specify
|
|||
Syntax:
|
||||
|
||||
```sql
|
||||
CREATE TOPIC topic_name AS STABLE stb_name
|
||||
CREATE TOPIC topic_name [with meta] AS STABLE stb_name [where_condition]
|
||||
```
|
||||
|
||||
Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
|
||||
|
||||
- The table schema can be modified.
|
||||
- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
|
||||
- A different table schema may exist for every data block to be processed.
|
||||
- The 'with meta' parameter is optional. When specified, statements such as those for creating supertables and subtables are also returned; this is mainly used by taosX for supertable migration.
|
||||
- The 'where_condition' parameter is optional and is used to filter and subscribe only to subtables that meet the criteria. The where condition cannot contain ordinary columns, only tags or tbname. Functions can be used in the where condition to filter tags, but not aggregate functions, because subtable tag values cannot be aggregated. It can also be a constant expression, such as 2 > 1 (subscribe to all subtables) or false (subscribe to no subtables); see the sketch after this list.
|
||||
- The data returned does not include tags.
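For example, a topic that subscribes only to subtables in a given location might be created like this. The supertable, tag name, tag value, and topic name are assumptions for illustration; adapt them to your own schema.

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata")
# Subscribe only to subtables of `power.meters` whose `location` tag matches the filter
conn.execute(
    "CREATE TOPIC IF NOT EXISTS topic_la WITH META AS STABLE power.meters "
    "WHERE location = 'California.LosAngeles'"
)
conn.close()
```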
|
||||
|
||||
### Subscribe to a Database
|
||||
|
@ -271,10 +279,12 @@ Creating a topic in this manner differs from a `SELECT * from stbName` statement
|
|||
Syntax:
|
||||
|
||||
```sql
|
||||
CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
|
||||
CREATE TOPIC topic_name [with meta] AS DATABASE db_name;
|
||||
```
|
||||
|
||||
This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
|
||||
This SQL statement creates a subscription to all tables in the database.
|
||||
|
||||
- The 'with meta' parameter is optional. When specified, statements for creating all supertables and subtables in the database are also returned; this is mainly used by taosX for database migration.
|
||||
|
||||
## Create a Consumer
|
||||
|
||||
|
@ -282,18 +292,16 @@ You configure the following parameters when creating a consumer:
|
|||
|
||||
| Parameter | Type | Description | Remarks |
|
||||
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
|
||||
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
|
||||
| `td.connect.ip` | string | IP address of the server side | |
|
||||
| `td.connect.user` | string | User Name | |
|
||||
| `td.connect.pass` | string | Password | |
|
||||
| `td.connect.port` | string | Port of the server side | |
|
||||
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. Each topic can create up to 100 consumer groups. |
|
||||
| `client.id` | string | Client ID | Maximum length: 192. |
|
||||
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
|
||||
| `enable.auto.commit` | boolean | Commit automatically | Specify `true` or `false`. |
|
||||
| `auto.offset.reset` | enum | Initial offset for the consumer group | `earliest`: subscribe from the earliest data, this is the default behavior; `latest`: subscribe from the latest data; or `none`: can't subscribe without committed offset|
|
||||
| `enable.auto.commit` | boolean | Commit automatically; true: user application doesn't need to explicitly commit; false: user application need to handle commit by itself | Default value is true |
|
||||
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds |
|
||||
| `enable.heartbeat.background` | boolean | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | |
|
||||
| `experimental.snapshot.enable` | boolean | Specify whether to consume messages from the WAL or from TSDB | |
|
||||
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages |
|
||||
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | default value: false
|
||||
|
||||
The method of specifying these parameters depends on the language used:
|
||||
|
||||
|
@ -310,7 +318,6 @@ tmq_conf_set(conf, "group.id", "cgrpName");
|
|||
tmq_conf_set(conf, "td.connect.user", "root");
|
||||
tmq_conf_set(conf, "td.connect.pass", "taosdata");
|
||||
tmq_conf_set(conf, "auto.offset.reset", "earliest");
|
||||
tmq_conf_set(conf, "experimental.snapshot.enable", "true");
|
||||
tmq_conf_set(conf, "msg.with.table.name", "true");
|
||||
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
|
||||
|
||||
|
@ -325,6 +332,7 @@ Java programs use the following parameters:
|
|||
|
||||
| Parameter | Type | Description | Remarks |
|
||||
| ----------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------- |
|
||||
| `td.connect.type` | string | connection type: "jni" means native connection, "ws" means websocket connection, the default is "jni" |
|
||||
| `bootstrap.servers` | string |Connection address, such as `localhost:6030` |
|
||||
| `value.deserializer` | string | Value deserializer; to use this method, implement the `com.taosdata.jdbc.tmq.Deserializer` interface or inherit the `com.taosdata.jdbc.tmq.ReferenceDeserializer` type |
|
||||
| `value.deserializer.encoding` | string | Specify the encoding for string deserialization | |
|
||||
|
@ -357,50 +365,18 @@ public class MetersDeserializer extends ReferenceDeserializer<Meters> {
|
|||
<TabItem label="Go" value="Go">
|
||||
|
||||
```go
|
||||
config := tmq.NewConfig()
|
||||
defer config.Destroy()
|
||||
err = config.SetGroupID("test")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.SetAutoOffsetReset("earliest")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.SetConnectIP("127.0.0.1")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.SetConnectUser("root")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.SetConnectPass("taosdata")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.SetConnectPort("6030")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.SetMsgWithTableName(true)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.EnableHeartBeat()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = config.EnableAutoCommit(func(result *wrapper.TMQCommitCallbackResult) {
|
||||
if result.ErrCode != 0 {
|
||||
errStr := wrapper.TMQErr2Str(result.ErrCode)
|
||||
err := errors.NewError(int(result.ErrCode), errStr)
|
||||
panic(err)
|
||||
}
|
||||
})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
conf := &tmq.ConfigMap{
|
||||
"group.id": "test",
|
||||
"auto.offset.reset": "earliest",
|
||||
"td.connect.ip": "127.0.0.1",
|
||||
"td.connect.user": "root",
|
||||
"td.connect.pass": "taosdata",
|
||||
"td.connect.port": "6030",
|
||||
"client.id": "test_tmq_c",
|
||||
"enable.auto.commit": "false",
|
||||
"msg.with.table.name": "true",
|
||||
}
|
||||
consumer, err := NewConsumer(conf)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -422,23 +398,14 @@ let mut consumer = tmq.build()?;
|
|||
|
||||
<TabItem value="Python" label="Python">
|
||||
|
||||
Python programs use the following parameters:
|
||||
```python
|
||||
from taos.tmq import Consumer
|
||||
|
||||
| Parameter | Type | Description | Remarks |
|
||||
| :----------------------------: | :----: | -------------------------------------------------------- | ------------------------------------------- |
|
||||
| `td_connect_ip` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td_connect_user` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td_connect_pass` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `td_connect_port` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||
| `group_id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
|
||||
| `client_id` | string | Client ID | Maximum length: 192. |
|
||||
| `auto_offset_reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
|
||||
| `enable_auto_commit` | string | Commit automatically | Specify `true` or `false`. |
|
||||
| `auto_commit_interval_ms` | string | Interval for automatic commits, in milliseconds |
|
||||
| `enable_heartbeat_background` | string | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false`. |
|
||||
| `experimental_snapshot_enable` | string | Specify whether to consume messages from the WAL or from TSBS | Specify `true` or `false`. |
|
||||
| `msg_with_table_name` | string | Specify whether to deserialize table names from messages | Specify `true` or `false`.
|
||||
| `timeout` | int | Consumer pull timeout | |
|
||||
# Syntax: `consumer = Consumer(configs)`
|
||||
#
|
||||
# Example:
|
||||
consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
|
@ -523,11 +490,7 @@ consumer.subscribe(topics);
|
|||
<TabItem value="Go" label="Go">
|
||||
|
||||
```go
|
||||
consumer, err := tmq.NewConsumer(config)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = consumer.Subscribe([]string{"example_tmq_topic"})
|
||||
err = consumer.Subscribe("example_tmq_topic", nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
@ -545,7 +508,7 @@ consumer.subscribe(["tmq_meters"]).await?;
|
|||
<TabItem value="Python" label="Python">
|
||||
|
||||
```python
|
||||
consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
|
||||
consumer.subscribe(['topic1', 'topic2'])
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -611,13 +574,17 @@ while(running){
|
|||
|
||||
```go
|
||||
for {
|
||||
result, err := consumer.Poll(time.Second)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
ev := consumer.Poll(0)
|
||||
if ev != nil {
|
||||
switch e := ev.(type) {
|
||||
case *tmqcommon.DataMessage:
|
||||
fmt.Println(e.Value())
|
||||
case tmqcommon.Error:
|
||||
fmt.Fprintf(os.Stderr, "%% Error: %v: %v\n", e.Code(), e)
|
||||
panic(e)
|
||||
}
|
||||
consumer.Commit()
|
||||
}
|
||||
fmt.Println(result)
|
||||
consumer.Commit(context.Background(), result.Message)
|
||||
consumer.FreeMessage(result.Message)
|
||||
}
|
||||
```
|
||||
|
||||
|
@ -660,9 +627,17 @@ for {
|
|||
<TabItem value="Python" label="Python">
|
||||
|
||||
```python
|
||||
for msg in consumer:
|
||||
for row in msg:
|
||||
print(row)
|
||||
while True:
|
||||
res = consumer.poll(100)
|
||||
if not res:
|
||||
continue
|
||||
err = res.error()
|
||||
if err is not None:
|
||||
raise err
|
||||
val = res.value()
|
||||
|
||||
for block in val:
|
||||
print(block.fetchall())
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -729,7 +704,11 @@ consumer.close();
|
|||
<TabItem value="Go" label="Go">
|
||||
|
||||
```go
|
||||
consumer.Close()
|
||||
/* Unsubscribe */
|
||||
_ = consumer.Unsubscribe()
|
||||
|
||||
/* Close consumer */
|
||||
_ = consumer.Close()
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -815,7 +794,14 @@ The following section shows sample code in various languages.
|
|||
</TabItem>
|
||||
|
||||
<TabItem label="Java" value="java">
|
||||
<Java />
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="native connection">
|
||||
<Java />
|
||||
</TabItem>
|
||||
<TabItem value="ws" label="WebSocket connection">
|
||||
<JavaWS />
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Go" value="Go">
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
sidebar_label: Caching
|
||||
title: Caching
|
||||
description: "This document describes the caching component of TDengine."
|
||||
sidebar_label: Caching
|
||||
description: This document describes the caching component of TDengine.
|
||||
---
|
||||
|
||||
TDengine uses various kinds of caching techniques to efficiently write and query data. This document describes the caching component of TDengine.
|
||||
|
@ -10,10 +10,10 @@ TDengine uses various kinds of caching techniques to efficiently write and query
|
|||
|
||||
TDengine uses an insert-driven cache management policy, known as first in, first out (FIFO). This policy differs from read-driven "least recently used (LRU)" cache management. A FIFO policy stores the latest data in cache and flushes the oldest data from cache to disk when the cache usage reaches a threshold. In IoT use cases, the most recent data or the current state is most important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
|
||||
|
||||
When you create a database, you can configure the size of the write cache on each vnode. The **vgroups** parameter determines the number of vgroups that process data in the database, and the **buffer** parameter determines the size of the write cache for each vnode.
|
||||
When you create a database, you can configure the size of the write cache on each vnode. The **vgroups** parameter determines the number of vgroups that process data in the database, and the **buffer** parameter determines the size of the write cache for each vnode. The unit of buffer is MB.
|
||||
|
||||
```sql
|
||||
create database db0 vgroups 100 buffer 16MB
|
||||
create database db0 vgroups 100 buffer 16
|
||||
```
|
||||
|
||||
In theory, larger cache sizes are always better. However, at a certain point, it becomes impossible to improve performance by increasing cache size. In most scenarios, you can retain the default cache settings.
|
||||
|
@ -28,10 +28,10 @@ When you create a database, you can configure whether the latest data from every
|
|||
|
||||
## Metadata Cache
|
||||
|
||||
To improve query and write performance, each vnode caches the metadata that it receives. When you create a database, you can configure the size of the metadata cache through the *pages* and *pagesize* parameters.
|
||||
To improve query and write performance, each vnode caches the metadata that it receives. When you create a database, you can configure the size of the metadata cache through the *pages* and *pagesize* parameters. The unit of pagesize is KB.
|
||||
|
||||
```sql
|
||||
create database db0 pages 128 pagesize 16kb
|
||||
create database db0 pages 128 pagesize 16
|
||||
```
|
||||
|
||||
The preceding SQL statement creates 128 pages on each vnode in the `db0` database. Each page has a 16 KB metadata cache, for a total of 128 × 16 KB = 2 MB of metadata cache per vnode.
|
||||
|
|
|
@ -1,23 +1,25 @@
|
|||
---
|
||||
sidebar_label: UDF
|
||||
title: User-Defined Functions (UDF)
|
||||
description: "You can define your own scalar and aggregate functions to expand the query capabilities of TDengine."
|
||||
sidebar_label: UDF
|
||||
description: This document describes how to create user-defined functions (UDF), your own scalar and aggregate functions that can expand the query capabilities of TDengine.
|
||||
---
|
||||
|
||||
The built-in functions of TDengine may not be sufficient for the use cases of every application. In this case, you can define custom functions for use in TDengine queries. These are known as user-defined functions (UDF). A user-defined function takes one column of data or the result of a subquery as its input.
|
||||
|
||||
TDengine supports user-defined functions written in C or C++. This document describes the usage of user-defined functions.
|
||||
|
||||
User-defined functions can be scalar functions or aggregate functions. Scalar functions, such as `abs`, `sin`, and `concat`, output a value for every row of data. Aggregate functions, such as `avg` and `max` output one value for multiple rows of data.
|
||||
|
||||
TDengine supports user-defined functions written in C or Python. This document describes the usage of user-defined functions.
|
||||
|
||||
## Implement a UDF in C
|
||||
|
||||
When you create a user-defined function, you must implement standard interface functions:
|
||||
- For scalar functions, implement the `scalarfn` interface function.
|
||||
- For aggregate functions, implement the `aggfn_start`, `aggfn`, and `aggfn_finish` interface functions.
|
||||
- To initialize your function, implement the `udf_init` function. To terminate your function, implement the `udf_destroy` function.
|
||||
|
||||
There are strict naming conventions for these interface functions. The names of the start, finish, init, and destroy interfaces must be <udf-name\>_start, <udf-name\>_finish, <udf-name\>_init, and <udf-name\>_destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.
|
||||
There are strict naming conventions for these interface functions. The names of the start, finish, init, and destroy interfaces must be the name of your UDF followed by `_start`, `_finish`, `_init`, and `_destroy`, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.
|
||||
|
||||
## Implementing a Scalar Function
|
||||
### Implementing a Scalar Function in C
|
||||
The implementation of a scalar function is described as follows:
|
||||
```c
|
||||
#include "taos.h"
|
||||
|
@ -49,7 +51,7 @@ int32_t scalarfn_destroy() {
|
|||
```
|
||||
Replace `scalarfn` with the name of your function.
|
||||
|
||||
## Implementing an Aggregate Function
|
||||
### Implementing an Aggregate Function in C
|
||||
|
||||
The implementation of an aggregate function is described as follows:
|
||||
```c
|
||||
|
@ -65,11 +67,11 @@ int32_t aggfn_init() {
|
|||
}
|
||||
|
||||
// aggregate start function. The intermediate value or the state(@interBuf) is initialized in this function. The function name shall be concatenation of udf name and _start suffix
|
||||
// @param interbuf intermediate value to intialize
|
||||
// @param interbuf intermediate value to initialize
|
||||
// @return error number defined in taoserror.h
|
||||
int32_t aggfn_start(SUdfInterBuf* interBuf) {
|
||||
// initialize intermediate value in interBuf
|
||||
return TSDB_CODE_SUCESS;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
// aggregate reduce function. This function aggregate old state(@interbuf) and one data bock(inputBlock) and output a new state(@newInterBuf).
|
||||
|
@ -100,7 +102,7 @@ int32_t aggfn_destroy() {
|
|||
```
|
||||
Replace `aggfn` with the name of your function.
|
||||
|
||||
## Interface Functions
|
||||
### UDF Interface Definition in C
|
||||
|
||||
There are strict naming conventions for interface functions. The names of the start, finish, init, and destroy interfaces must be <udf-name\>_start, <udf-name\>_finish, <udf-name\>_init, and <udf-name\>_destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.
|
||||
|
||||
|
@ -108,17 +110,16 @@ Interface functions return a value that indicates whether the operation was succ
|
|||
|
||||
For information about the parameters for interface functions, see Data Model
|
||||
|
||||
### Interfaces for Scalar Functions
|
||||
#### Scalar Interface
|
||||
`int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn *resultColumn)`
|
||||
|
||||
`int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn *resultColumn)`
|
||||
|
||||
Replace `scalarfn` with the name of your function. This function performs scalar calculations on data blocks. You can configure a value through the parameters in the `resultColumn` structure.
|
||||
|
||||
The parameters in the function are defined as follows:
|
||||
- inputDataBlock: The data block to input.
|
||||
- resultColumn: The column to output. The column to output.
|
||||
- resultColumn: The column to output.
|
||||
|
||||
### Interfaces for Aggregate Functions
|
||||
#### Aggregate Interface
|
||||
|
||||
`int32_t aggfn_start(SUdfInterBuf *interBuf)`
|
||||
|
||||
|
@ -126,7 +127,7 @@ The parameters in the function are defined as follows:
|
|||
|
||||
`int32_t aggfn_finish(SUdfInterBuf* interBuf, SUdfInterBuf *result)`
|
||||
|
||||
Replace `aggfn` with the name of your function. In the function, aggfn_start is called to generate a result buffer. Data is then divided between multiple blocks, and aggfn is called on each block to update the result. Finally, aggfn_finish is called to generate final results from the intermediate results. The final result contains only one or zero data points.
|
||||
Replace `aggfn` with the name of your function. In the function, aggfn_start is called to generate a result buffer. Data is then divided between multiple blocks, and the `aggfn` function is called on each block to update the result. Finally, aggfn_finish is called to generate the final results from the intermediate results. The final result contains only one or zero data points.
|
||||
|
||||
The parameters in the function are defined as follows:
|
||||
- interBuf: The intermediate result buffer.
|
||||
|
@ -135,15 +136,15 @@ The parameters in the function are defined as follows:
|
|||
- result: The final result.
|
||||
|
||||
|
||||
### Initializing and Terminating User-Defined Functions
|
||||
#### Initialization and Cleanup Interface
|
||||
`int32_t udf_init()`
|
||||
|
||||
`int32_t udf_destroy()`
|
||||
|
||||
Replace `udf`with the name of your function. udf_init initializes the function. udf_destroy terminates the function. If it is not necessary to initialize your function, udf_init is not required. If it is not necessary to terminate your function, udf_destroy is not required.
|
||||
Replace `udf` with the name of your function. udf_init initializes the function. udf_destroy terminates the function. If it is not necessary to initialize your function, udf_init is not required. If it is not necessary to terminate your function, udf_destroy is not required.
|
||||
|
||||
|
||||
## Data Structure of User-Defined Functions
|
||||
### Data Structures for UDF in C
|
||||
```c
|
||||
typedef struct SUdfColumnMeta {
|
||||
int16_t type;
|
||||
|
@ -193,32 +194,29 @@ typedef struct SUdfInterBuf {
|
|||
```
|
||||
The data structure is described as follows:
|
||||
|
||||
- The SUdfDataBlock block includes the number of rows (numOfRows) and number of columns (numCols). udfCols[i] (0 <= i <= numCols-1) indicates that each column is of type SUdfColumn.
|
||||
- The SUdfDataBlock block includes the number of rows (numOfRows) and the number of columns (numCols). udfCols[i] (0 <= i <= numCols-1) indicates that each column is of type SUdfColumn.
|
||||
- SUdfColumn includes the definition of the data type of the column (colMeta) and the data in the column (colData).
|
||||
- The member definitions of SUdfColumnMeta are the same as the data type definitions in `taos.h`.
|
||||
- The data in SUdfColumnData can become longer. varLenCol indicates variable-length data, and fixLenCol indicates fixed-length data.
|
||||
- The data in SUdfColumnData can become longer. varLenCol indicates variable-length data, and fixLenCol indicates fixed-length data.
|
||||
- SUdfInterBuf defines the intermediate structure `buffer` and the number of results in the buffer `numOfResult`.
|
||||
|
||||
Additional functions are defined in `taosudf.h` to make it easier to work with these structures.
|
||||
|
||||
## Compile UDF
|
||||
### Compiling C UDF
|
||||
|
||||
To use your user-defined function in TDengine, first compile it to a dynamically linked library (DLL).
|
||||
To use your user-defined function in TDengine, first compile it to a shared library.
|
||||
|
||||
For example, the sample UDF `add_one.c` can be compiled into a DLL as follows:
|
||||
For example, the sample UDF `bit_and.c` can be compiled into a DLL as follows:
|
||||
|
||||
```bash
|
||||
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
|
||||
gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so
|
||||
```
|
||||
|
||||
The generated DLL file `add_one.so` can now be used to implement your function. Note: GCC 7.5 or later is required.
|
||||
The generated shared library file `libbitand.so` can now be used to implement your function. Note: GCC 7.5 or later is required.
|
||||
|
||||
## Manage and Use User-Defined Functions
|
||||
After compiling your function into a DLL, you add it to TDengine. For more information, see [User-Defined Functions](../12-taos-sql/26-udf.md).
|
||||
### UDF Sample Code in C
|
||||
|
||||
## Sample Code
|
||||
|
||||
### Sample scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c)
|
||||
#### Scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c)
|
||||
|
||||
The bit_and function implements a bitwise AND across multiple columns. If there is only one column, that column is returned. The bit_and function ignores null values.
|
||||
|
||||
|
@ -231,7 +229,7 @@ The bit_and function implements bitwise addition for multiple columns. If there
|
|||
|
||||
</details>
|
||||
|
||||
### Sample aggregate function: [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c)
|
||||
#### Aggregate function 1: [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c)
|
||||
|
||||
The l2norm function finds the second-order norm for all data in the input column. This squares the values, takes a cumulative sum, and finds the square root.
|
||||
|
||||
|
@ -243,3 +241,650 @@ The l2norm function finds the second-order norm for all data in the input column
|
|||
```
|
||||
|
||||
</details>
|
||||
|
||||
#### Aggregate function 2: [max_vol](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/max_vol.c)
|
||||
|
||||
Given several voltage columns as input, the max_vol function returns a string that concatenates the deviceId column, the row number and column number of the maximum voltage, and the maximum voltage itself.
|
||||
|
||||
Create Table:
|
||||
```sql
|
||||
create table battery(ts timestamp, vol1 float, vol2 float, vol3 float, deviceId varchar(16));
|
||||
```
|
||||
Create the UDF:
|
||||
```sql
|
||||
create aggregate function max_vol as '/root/udf/libmaxvol.so' outputtype binary(64) bufsize 10240 language 'C';
|
||||
```
|
||||
Use the UDF in the query:
|
||||
```sql
|
||||
select max_vol(vol1,vol2,vol3,deviceid) from battery;
|
||||
```
|
||||
|
||||
<details>
|
||||
<summary>max_vol.c</summary>
|
||||
|
||||
```c
|
||||
{{#include tests/script/sh/max_vol.c}}
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
## Implement a UDF in Python
|
||||
|
||||
### Prepare Environment
|
||||
|
||||
1. Prepare Python Environment
|
||||
|
||||
Please follow the standard procedure for preparing a Python environment.
|
||||
|
||||
2. Install Python package `taospyudf`
|
||||
|
||||
```shell
|
||||
pip3 install taospyudf
|
||||
```
|
||||
|
||||
During this process, some C++ code needs to be compiled, so `cmake` and `gcc` are required on your system. The compiled `libtaospyudf.so` file is automatically copied to `/usr/local/lib`. If you are not the root user, please use `sudo`. After installation is done, check it using the command below.
|
||||
|
||||
```shell
|
||||
root@slave11 ~/udf $ ls -l /usr/local/lib/libtaos*
|
||||
-rw-r--r-- 1 root root 671344 May 24 22:54 /usr/local/lib/libtaospyudf.so
|
||||
```
|
||||
|
||||
Then execute the command below.
|
||||
|
||||
```shell
|
||||
ldconfig
|
||||
```
|
||||
|
||||
3. If you want to utilize some 3rd party python packages in your Python UDF, please set configuration parameter `UdfdLdLibPath` to the value of `PYTHONPATH` before starting `taosd`.
|
||||
|
||||
4. Launch `taosd` service
|
||||
|
||||
Please refer to [Get Started](../../get-started)
|
||||
|
||||
### Interface definition
|
||||
|
||||
#### Introduction to Interface
|
||||
|
||||
Implement the specified interface functions when implementing a UDF in Python.
|
||||
- implement `process` function for the scalar UDF.
|
||||
- implement `start`, `reduce`, `finish` for the aggregate UDF.
|
||||
- implement `init` for initialization and `destroy` for termination.
|
||||
|
||||
#### Scalar UDF Interface
|
||||
|
||||
The implementation of a scalar UDF is described as follows:
|
||||
|
||||
```Python
|
||||
def process(input: datablock) -> tuple[output_type]:
|
||||
```
|
||||
|
||||
Description: this function processes a data block, which is the input; you can use datablock.data(row, col) to access the Python object at location (row, col); the output is a tuple object consisting of objects of the declared output type.
|
||||
|
||||
#### Aggregate UDF Interface
|
||||
|
||||
The implementation of an aggregate function is described as follows:
|
||||
|
||||
```Python
|
||||
def start() -> bytes:
|
||||
def reduce(inputs: datablock, buf: bytes) -> bytes
|
||||
def finish(buf: bytes) -> output_type:
|
||||
```
|
||||
|
||||
Description: first, start() is invoked to generate the initial result `buffer`; then the input data is divided into multiple row blocks, and reduce() is invoked for each block `inputs` together with the current intermediate result `buf`; finally, finish() is invoked to generate the final result from the intermediate `buf`. The final result can contain only 0 or 1 row.
|
||||
|
||||
#### Initialization and Cleanup Interface
|
||||
|
||||
```python
|
||||
def init()
|
||||
def destroy()
|
||||
```
|
||||
|
||||
Description: init() does the work of initialization before processing any data; destroy() does the work of cleanup after the data is processed.
|
||||
|
||||
### Python UDF Template
|
||||
|
||||
#### Scalar Template
|
||||
|
||||
```Python
|
||||
def init():
|
||||
# initialization
|
||||
def destroy():
|
||||
# destroy
|
||||
def process(input: datablock) -> tuple[output_type]:
|
||||
# process input datablock,
|
||||
# datablock.data(row, col) is to access the python object in location(row,col)
|
||||
# return tuple object consisted of object of type outputtype
|
||||
```
|
||||
|
||||
Note: process() must be implemented; init() and destroy() must also be defined, but they can do nothing.
|
||||
|
||||
#### Aggregate Template
|
||||
|
||||
```Python
|
||||
def init():
|
||||
#initialization
|
||||
def destroy():
|
||||
#destroy
|
||||
def start() -> bytes:
|
||||
#return serialize(init_state)
|
||||
def reduce(inputs: datablock, buf: bytes) -> bytes
|
||||
# deserialize buf to state
|
||||
# reduce the inputs and state into new_state.
|
||||
# use inputs.data(i,j) to access python object of location(i,j)
|
||||
# serialize new_state into new_state_bytes
|
||||
return new_state_bytes
|
||||
def finish(buf: bytes) -> output_type:
|
||||
#return obj of type outputtype
|
||||
```
|
||||
|
||||
Note: an aggregate UDF requires init(), destroy(), start(), reduce(), and finish() to be implemented. start() generates the initial result buffer; then the input data is divided into multiple row data blocks, and reduce() is invoked for each data block `inputs` with the intermediate result `buf`; finally, finish() is invoked to generate the final result from the intermediate result `buf`. A concrete sketch of this template follows.
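As a concrete illustration of this template, the following sketch implements a spread-like aggregate (maximum minus minimum of one input column). The use of pickle for serializing the intermediate state and the single-column assumption are choices made for this example only.

```python
import pickle

def init():
    pass

def destroy():
    pass

def start():
    # Intermediate state: (current_max, current_min)
    return pickle.dumps((None, None))

def reduce(block, buf):
    cur_max, cur_min = pickle.loads(buf)
    rows, _ = block.shape()
    for i in range(rows):
        v = block.data(i, 0)
        if v is None:
            continue
        if cur_max is None or v > cur_max:
            cur_max = v
        if cur_min is None or v < cur_min:
            cur_min = v
    return pickle.dumps((cur_max, cur_min))

def finish(buf):
    cur_max, cur_min = pickle.loads(buf)
    if cur_max is None:
        return None
    return cur_max - cur_min
```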
|
||||
|
||||
### Data Mapping between TDengine SQL and Python UDF
|
||||
|
||||
The following table describes the mapping between TDengine SQL data types and Python UDF data types. The `NULL` value of all TDengine SQL types is mapped to the `None` value in Python. A short decoding sketch follows the table.
|
||||
|
||||
| **TDengine SQL Data Type** | **Python Data Type** |
|
||||
| :-----------------------: | ------------ |
|
||||
|TINYINT / SMALLINT / INT / BIGINT | int |
|
||||
|TINYINT UNSIGNED / SMALLINT UNSIGNED / INT UNSIGNED / BIGINT UNSIGNED | int |
|
||||
|FLOAT / DOUBLE | float |
|
||||
|BOOL | bool |
|
||||
|BINARY / VARCHAR / NCHAR | bytes|
|
||||
|TIMESTAMP | int |
|
||||
|JSON and other types | Not Supported |
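One practical consequence of this mapping is that BINARY/VARCHAR/NCHAR values arrive in the UDF as `bytes`, so a UDF that works with text usually decodes them first. The following is a minimal sketch; the UTF-8 encoding and the uppercase transformation are assumptions for illustration.

```python
def init():
    pass

def destroy():
    pass

def process(block):
    rows, _ = block.shape()
    result = []
    for i in range(rows):
        v = block.data(i, 0)
        if v is None:
            result.append(None)
        else:
            # VARCHAR/NCHAR input arrives as bytes; decode it before string processing
            result.append(v.decode("utf-8").upper())
    return result
```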
|
||||
|
||||
### Development Guide
|
||||
|
||||
In this section we will demonstrate 5 examples of developing UDFs in Python. You will learn the development skills from easy cases to harder ones; the examples include:
|
||||
1. A scalar function which accepts only one integer as input and outputs ln(n^2 + 1).
|
||||
2. A scalar function which accepts n integers, like (x1, x2, ..., xn), and outputs the sum of the product of each input and its sequence number, i.e. x1 + 2 * x2 + ... + n * xn.
|
||||
3. A scalar function which accepts a timestamp and outputs the next closest Sunday after the timestamp. In this case, we will demonstrate how to use the third-party library `moment`.
|
||||
4. An aggregate function which calculates the difference between the maximum and the minimum of a specific column, i.e. same functionality of built-in spread().
|
||||
|
||||
In the guide, some debugging skills for Python UDFs will be explained too.
|
||||
|
||||
We assume you are using a Linux system and already have TDengine 3.0.4.0+ and Python 3.7+ installed.
|
||||
|
||||
Note: **You can't use the print() function to output logs inside a UDF; you have to write the log to a specific file or use Python's logging module.** A minimal sketch follows.
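For instance, a UDF can initialize a logger in init() so that messages go to a file. The log path, logger settings, and the pass-through process() body are assumptions for illustration only.

```python
import logging

def init():
    # Write UDF log messages to a file instead of using print()
    logging.basicConfig(
        filename="/var/log/taos/myudf.log",   # assumed, writable path
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(message)s",
    )
    logging.info("myudf initialized")

def destroy():
    logging.info("myudf destroyed")

def process(block):
    rows, _ = block.shape()
    logging.debug("process called with %d rows", rows)
    return [block.data(i, 0) for i in range(rows)]
```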
|
||||
|
||||
#### Sample 1: Simplest UDF
|
||||
|
||||
This scalar UDF accepts an integer as input and outputs ln(n^2 + 1).
|
||||
|
||||
First, compose a Python source code file on your system and save it, e.g. as `/root/udf/myfun.py`; the code is shown below.
|
||||
|
||||
```python
|
||||
from math import log
|
||||
|
||||
def init():
|
||||
pass
|
||||
|
||||
def destroy():
|
||||
pass
|
||||
|
||||
def process(block):
|
||||
rows, _ = block.shape()
|
||||
return [log(block.data(i, 0) ** 2 + 1) for i in range(rows)]
|
||||
```
|
||||
|
||||
This program consists of 3 functions. init() and destroy() do nothing, but they have to be defined even though there is nothing for them to do, because they are required parts of a Python UDF. The most important function is process(), which accepts a data block; the data block object has two methods:
|
||||
1. shape() returns the number of rows and the number of columns of the data block
|
||||
2. data(i, j) returns the value at (i,j) in the block
|
||||
|
||||
The process() function of a scalar UDF returns exactly the same number of values as the number of input rows. We ignore the number of columns here because we only want to compute on the first column.
|
||||
|
||||
Then, we create the UDF using the SQL command below.
|
||||
|
||||
```sql
|
||||
create function myfun as '/root/udf/myfun.py' outputtype double language 'Python'
|
||||
```
|
||||
|
||||
Here is the output example, it may change a little depending on your version being used.
|
||||
|
||||
```shell
|
||||
taos> create function myfun as '/root/udf/myfun.py' outputtype double language 'Python';
|
||||
Create OK, 0 row(s) affected (0.005202s)
|
||||
```
|
||||
|
||||
Then, we use the `show functions` command to verify that the UDF was created successfully.
|
||||
|
||||
```text
|
||||
taos> show functions;
|
||||
name |
|
||||
=================================
|
||||
myfun |
|
||||
Query OK, 1 row(s) in set (0.005767s)
|
||||
```
|
||||
|
||||
Next, we can try to test the function. Before executing the UDF, we need to prepare some data using the command below in TDengine CLI.
|
||||
|
||||
```sql
|
||||
create database test;
|
||||
create table t(ts timestamp, v1 int, v2 int, v3 int);
|
||||
insert into t values('2023-05-01 12:13:14', 1, 2, 3);
|
||||
insert into t values('2023-05-03 08:09:10', 2, 3, 4);
|
||||
insert into t values('2023-05-10 07:06:05', 3, 4, 5);
|
||||
```
|
||||
|
||||
Execute the UDF to test it:
|
||||
|
||||
```sql
|
||||
taos> select myfun(v1, v2) from t;
|
||||
|
||||
DB error: udf function execution failure (0.011088s)
|
||||
```
|
||||
|
||||
Unfortunately, the UDF execution failed. We need to check the log of the `udfd` daemon to find out why.
|
||||
|
||||
```shell
|
||||
tail -10 /var/log/taos/udfd.log
|
||||
```
|
||||
|
||||
Below is the output.
|
||||
|
||||
```text
|
||||
05/24 22:46:28.733545 01665799 UDF ERROR can not load library libtaospyudf.so. error: operation not permitted
|
||||
05/24 22:46:28.733561 01665799 UDF ERROR can not load python plugin. lib path libtaospyudf.so
|
||||
```
|
||||
|
||||
From the error message we can find out that `libtaospyudf.so` was not loaded successfully. Please refer to the [Prepare Environment] section.
|
||||
|
||||
After correcting environment issues, execute the UDF:
|
||||
|
||||
```sql
|
||||
taos> select myfun(v1) from t;
|
||||
myfun(v1) |
|
||||
============================
|
||||
0.693147181 |
|
||||
1.609437912 |
|
||||
2.302585093 |
|
||||
```
|
||||
|
||||
Now, we have finished our first UDF in Python and learned some basic debugging skills.
|
||||
|
||||
#### Sample 2: Abnormal Processing
|
||||
|
||||
The `myfun` UDF example in Sample 1 works, but it has two drawbacks.
|
||||
|
||||
1. The program accepts only one column of data as input, but it doesn't throw an exception if you pass multiple columns.
|
||||
|
||||
```sql
|
||||
taos> select myfun(v1, v2) from t;
|
||||
myfun(v1, v2) |
|
||||
============================
|
||||
0.693147181 |
|
||||
1.609437912 |
|
||||
2.302585093 |
|
||||
```
|
||||
|
||||
2. `null` values are not processed. We would like the function to return `null` when the input contains `null`.
|
||||
|
||||
So, we try to optimize the process() function as below.
|
||||
|
||||
```python
|
||||
def process(block):
|
||||
rows, cols = block.shape()
|
||||
if cols > 1:
|
||||
raise Exception(f"require 1 parameter but given {cols}")
|
||||
return [ None if block.data(i, 0) is None else log(block.data(i, 0) ** 2 + 1) for i in range(rows)]
|
||||
```
|
||||
|
||||
Then update the UDF with the command below.
|
||||
|
||||
```sql
|
||||
create or replace function myfun as '/root/udf/myfun.py' outputtype double language 'Python';
|
||||
```
|
||||
|
||||
At this time, if we pass two arguments to `myfun`, the execution would fail.
|
||||
|
||||
```sql
|
||||
taos> select myfun(v1, v2) from t;
|
||||
|
||||
DB error: udf function execution failure (0.014643s)
|
||||
```
|
||||
|
||||
However, the exception is not shown to the end user; it is written to the log file `/var/log/taos/taospyudf.log`:
|
||||
|
||||
```text
|
||||
2023-05-24 23:21:06.790 ERROR [1666188] [doPyUdfScalarProc@507] call pyUdfScalar proc function. context 0x7faade26d180. error: Exception: require 1 parameter but given 2
|
||||
|
||||
At:
|
||||
/var/lib/taos//.udf/myfun_3_1884e1281d9.py(12): process
|
||||
|
||||
```
|
||||
|
||||
Now, we have learned how to update a UDF and check the log of a UDF.
|
||||
|
||||
Note: Prior to TDengine 3.0.5.0, updating a UDF required restarting the `taosd` service. Since 3.0.5.0, restarting is no longer required.
|
||||
|
||||
#### Sample 3: UDF with n arguments
|
||||
|
||||
A UDF which accepts n integers, like (x1, x2, ..., xn), and outputs the sum of the product of each value and its sequence number: 1 * x1 + 2 * x2 + ... + n * xn. If there is `null` in the input, then the result is `null`. The difference from Sample 1 is that it can accept any number of columns as input and processes each column. Assume the program is written in /root/udf/nsum.py:
|
||||
|
||||
```python
|
||||
def init():
|
||||
pass
|
||||
|
||||
|
||||
def destroy():
|
||||
pass
|
||||
|
||||
|
||||
def process(block):
|
||||
rows, cols = block.shape()
|
||||
result = []
|
||||
for i in range(rows):
|
||||
total = 0
|
||||
for j in range(cols):
|
||||
v = block.data(i, j)
|
||||
if v is None:
|
||||
total = None
|
||||
break
|
||||
total += (j + 1) * block.data(i, j)
|
||||
result.append(total)
|
||||
return result
|
||||
```
|
||||
|
||||
Create and test the UDF:
|
||||
|
||||
```sql
|
||||
create function nsum as '/root/udf/nsum.py' outputtype double language 'Python';
|
||||
```
|
||||
|
||||
```sql
|
||||
taos> insert into t values('2023-05-25 09:09:15', 6, null, 8);
|
||||
Insert OK, 1 row(s) affected (0.003675s)
|
||||
|
||||
taos> select ts, v1, v2, v3, nsum(v1, v2, v3) from t;
|
||||
ts | v1 | v2 | v3 | nsum(v1, v2, v3) |
|
||||
================================================================================================
|
||||
2023-05-01 12:13:14.000 | 1 | 2 | 3 | 14.000000000 |
|
||||
2023-05-03 08:09:10.000 | 2 | 3 | 4 | 20.000000000 |
|
||||
2023-05-10 07:06:05.000 | 3 | 4 | 5 | 26.000000000 |
|
||||
2023-05-25 09:09:15.000 | 6 | NULL | 8 | NULL |
|
||||
Query OK, 4 row(s) in set (0.010653s)
|
||||
```
|
||||
|
||||
#### Sample 4: Utilize 3rd party package
|
||||
|
||||
A UDF that accepts a timestamp and outputs the nearest upcoming Sunday. This sample uses the third-party package `moment`, which you need to install first.
|
||||
|
||||
```shell
|
||||
pip3 install moment
|
||||
```
|
||||
|
||||
Then compose the Python code in /root/udf/nextsunday.py
|
||||
|
||||
```python
|
||||
import moment
|
||||
|
||||
|
||||
def init():
|
||||
pass
|
||||
|
||||
|
||||
def destroy():
|
||||
pass
|
||||
|
||||
|
||||
def process(block):
|
||||
rows, cols = block.shape()
|
||||
if cols > 1:
|
||||
raise Exception("require only 1 parameter")
|
||||
if not type(block.data(0, 0)) is int:
|
||||
raise Exception("type error")
|
||||
return [moment.unix(block.data(i, 0)).replace(weekday=7).format('YYYY-MM-DD')
|
||||
for i in range(rows)]
|
||||
```
|
||||
|
||||
The UDF framework maps the TDengine TIMESTAMP type to a Python int, so this function only accepts an integer representing milliseconds. process() first validates the arguments, then uses `moment` to move the time to the following Sunday, formats the result, and returns it.
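If you want to check the intended behavior without installing `moment`, the same "next Sunday" computation can be sketched with the standard library. This is only an illustration, under the assumption that a date which is already a Sunday maps to itself; it is not the UDF code.

```python
from datetime import datetime, timedelta

def next_sunday(ts_ms: int) -> str:
    """Return the next Sunday (or the same day if it already is one) for a millisecond timestamp."""
    d = datetime.fromtimestamp(ts_ms / 1000)      # millisecond epoch -> local datetime
    d = d + timedelta(days=6 - d.weekday())       # Monday=0 ... Sunday=6
    return d.strftime("%Y-%m-%d")

ms = int(datetime(2023, 5, 1, 12, 13, 14).timestamp() * 1000)
print(next_sunday(ms))    # 2023-05-07
```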
|
||||
|
||||
Create and test the UDF.
|
||||
|
||||
```sql
|
||||
create function nextsunday as '/root/udf/nextsunday.py' outputtype binary(10) language 'Python';
|
||||
```
|
||||
|
||||
If your `taosd` is started using `systemd`, you may encounter the error below. Next we will show how to debug.
|
||||
|
||||
```sql
|
||||
taos> select ts, nextsunday(ts) from t;
|
||||
|
||||
DB error: udf function execution failure (1.123615s)
|
||||
```
|
||||
|
||||
```shell
|
||||
tail -20 taospyudf.log
|
||||
2023-05-25 11:42:34.541 ERROR [1679419] [PyUdf::PyUdf@217] py udf load module failure. error ModuleNotFoundError: No module named 'moment'
|
||||
```
|
||||
|
||||
This is because `moment` is not in the default library search path of the Python UDF runtime, which you can confirm by checking the log file `taospyudf.log`:
|
||||
|
||||
```shell
|
||||
grep 'sys path' taospyudf.log | tail -1
|
||||
```
|
||||
|
||||
```text
|
||||
2023-05-25 10:58:48.554 INFO [1679419] [doPyOpen@592] python sys path: ['', '/lib/python38.zip', '/lib/python3.8', '/lib/python3.8/lib-dynload', '/lib/python3/dist-packages', '/var/lib/taos//.udf']
|
||||
```
|
||||
|
||||
You may find that the default library search path is `/lib/python3/dist-packages` (just an example; it may differ on your system), while `moment` is installed to `/usr/local/lib/python3.8/dist-packages` (again, this may differ on your system). We therefore need to change the library search path of the Python UDF runtime.
|
||||
|
||||
Check `sys.path` in a Python shell; it must include the directory of the packages you installed with pip3 earlier, as shown below:
|
||||
|
||||
```python
|
||||
>>> import sys
|
||||
>>> ":".join(sys.path)
|
||||
'/usr/lib/python3.8:/usr/lib/python3.8/lib-dynload:/usr/local/lib/python3.8/dist-packages:/usr/lib/python3/dist-packages'
|
||||
```
|
||||
|
||||
Copy the output, then edit /var/taos/taos.cfg and add the configuration parameter below.
|
||||
|
||||
```shell
|
||||
UdfdLdLibPath /usr/lib/python3.8:/usr/lib/python3.8/lib-dynload:/usr/local/lib/python3.8/dist-packages:/usr/lib/python3/dist-packages
|
||||
```
|
||||
|
||||
Save the file, restart `taosd` with `systemctl restart taosd`, and test again; this time it will succeed.
|
||||
|
||||
Note: If your cluster consists of multiple `taosd` instances, you have to repeat the same process on each of them.
|
||||
|
||||
```sql
|
||||
taos> select ts, nextsunday(ts) from t;
|
||||
ts | nextsunday(ts) |
|
||||
===========================================
|
||||
2023-05-01 12:13:14.000 | 2023-05-07 |
|
||||
2023-05-03 08:09:10.000 | 2023-05-07 |
|
||||
2023-05-10 07:06:05.000 | 2023-05-14 |
|
||||
2023-05-25 09:09:15.000 | 2023-05-28 |
|
||||
Query OK, 4 row(s) in set (1.011474s)
|
||||
```
|
||||
|
||||
#### Sample 5: Aggregate Function
|
||||
|
||||
An aggregate function that calculates the difference between the maximum and the minimum value in a column. An aggregate function takes multiple rows as input and outputs a single value. The execution of an aggregate UDF resembles map-reduce: the framework divides the input into multiple blocks, each mapper processes one block, and the reducer aggregates the mappers' results. In a Python UDF, reduce() provides the functionality of both map() and reduce(). It takes two arguments: the data block to be processed and the result of other tasks that have executed reduce(). For example, assume the code is in `/root/udf/myspread.py`.
|
||||
|
||||
```python
|
||||
import io
|
||||
import math
|
||||
import pickle
|
||||
|
||||
LOG_FILE: io.TextIOBase = None
|
||||
|
||||
|
||||
def init():
|
||||
global LOG_FILE
|
||||
LOG_FILE = open("/var/log/taos/spread.log", "wt")
|
||||
log("init function myspead success")
|
||||
|
||||
|
||||
def log(o):
|
||||
LOG_FILE.write(str(o) + '\n')
|
||||
|
||||
|
||||
def destroy():
|
||||
log("close log file: spread.log")
|
||||
LOG_FILE.close()
|
||||
|
||||
|
||||
def start():
|
||||
return pickle.dumps((-math.inf, math.inf))
|
||||
|
||||
|
||||
def reduce(block, buf):
|
||||
max_number, min_number = pickle.loads(buf)
|
||||
log(f"initial max_number={max_number}, min_number={min_number}")
|
||||
rows, _ = block.shape()
|
||||
for i in range(rows):
|
||||
v = block.data(i, 0)
|
||||
if v > max_number:
|
||||
log(f"max_number={v}")
|
||||
max_number = v
|
||||
if v < min_number:
|
||||
log(f"min_number={v}")
|
||||
min_number = v
|
||||
return pickle.dumps((max_number, min_number))
|
||||
|
||||
|
||||
def finish(buf):
|
||||
max_number, min_number = pickle.loads(buf)
|
||||
return max_number - min_number
|
||||
```
|
||||
|
||||
In this example, we implement an aggregate function and add some logging; a small stand-alone simulation of the call sequence follows the list below.
|
||||
1. init() opens a file for logging
|
||||
2. log() is the logging helper; it converts the input object to a string and appends a newline
|
||||
3. destroy() closes the log file
|
||||
4. start() returns the initial buffer for storing the intermediate result
|
||||
5. reduce() processes each data block and aggregates the result
|
||||
6. finish() converts the final buffer to the final result
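The call sequence is easiest to see in a stand-alone simulation: the framework calls start() once, calls reduce() once per data block (passing it the buffer returned by the previous call), and calls finish() once at the end. The sketch below re-implements the three functions without the log file and drives them by hand; the three blocks fed to reduce() mirror the three reduce() calls visible in spread.log further down.

```python
import math
import pickle

def start():
    return pickle.dumps((-math.inf, math.inf))

def reduce(column, buf):
    # 'column' is a plain list standing in for one data block
    max_number, min_number = pickle.loads(buf)
    for v in column:
        max_number = max(max_number, v)
        min_number = min(min_number, v)
    return pickle.dumps((max_number, min_number))

def finish(buf):
    max_number, min_number = pickle.loads(buf)
    return max_number - min_number

buf = start()
for block in ([1], [2, 3], [6]):    # the v1 column split into three blocks
    buf = reduce(block, buf)
print(finish(buf))                  # 5, the same result as spread(v1)
```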
|
||||
|
||||
Create the UDF.
|
||||
|
||||
```sql
|
||||
create or replace aggregate function myspread as '/root/udf/myspread.py' outputtype double bufsize 128 language 'Python';
|
||||
```
|
||||
|
||||
This SQL command differs in two important ways from the command that creates a scalar UDF:
|
||||
1. keyword `aggregate` is used
|
||||
2. keyword `bufsize` is used to specify the memory size for storing the intermediate result. In this example, the intermediate result occupies 32 bytes, but we specified 128 bytes for `bufsize`. You can use the Python interpreter to print the actual size:
|
||||
|
||||
```python
|
||||
>>> len(pickle.dumps((12345.6789, 23456789.9877)))
|
||||
32
|
||||
```
|
||||
|
||||
Test this function; you can see that its result is the same as that of the built-in spread() function.
|
||||
|
||||
```sql
|
||||
taos> select myspread(v1) from t;
|
||||
myspread(v1) |
|
||||
============================
|
||||
5.000000000 |
|
||||
Query OK, 1 row(s) in set (0.013486s)
|
||||
|
||||
taos> select spread(v1) from t;
|
||||
spread(v1) |
|
||||
============================
|
||||
5.000000000 |
|
||||
Query OK, 1 row(s) in set (0.005501s)
|
||||
```
|
||||
|
||||
Finally, check the log file. We can see that the reduce() function was executed 3 times; during execution, the max value was updated 4 times and the min value was updated only once.
|
||||
|
||||
```shell
|
||||
root@slave11 /var/log/taos $ cat spread.log
|
||||
init function myspead success
|
||||
initial max_number=-inf, min_number=inf
|
||||
max_number=1
|
||||
min_number=1
|
||||
initial max_number=1, min_number=1
|
||||
max_number=2
|
||||
max_number=3
|
||||
initial max_number=3, min_number=1
|
||||
max_number=6
|
||||
close log file: spread.log
|
||||
```
|
||||
|
||||
### SQL Commands
|
||||
|
||||
1. Create Scalar UDF
|
||||
|
||||
```sql
|
||||
CREATE FUNCTION function_name AS library_path OUTPUTTYPE output_type LANGUAGE 'Python';
|
||||
```
|
||||
|
||||
2. Create Aggregate UDF
|
||||
|
||||
```sql
|
||||
CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type BUFSIZE buffer_size LANGUAGE 'Python';
|
||||
```
|
||||
|
||||
3. Update Scalar UDF
|
||||
|
||||
```sql
|
||||
CREATE OR REPLACE FUNCTION function_name AS library_path OUTPUTTYPE output_type LANGUAGE 'Python';
|
||||
```
|
||||
|
||||
4. Update Aggregate UDF
|
||||
|
||||
```sql
|
||||
CREATE OR REPLACE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type BUFSIZE buf_size LANGUAGE 'Python';
|
||||
```
|
||||
|
||||
Note: If the keyword `AGGREGATE` is used, the UDF will be treated as an aggregate UDF regardless of what it was before; similarly, without the `AGGREGATE` keyword, the UDF will be treated as a scalar function regardless of what it was before.
|
||||
|
||||
5. Show the UDF
|
||||
|
||||
The version of a UDF is increased by one every time it's updated.
|
||||
|
||||
```sql
|
||||
select * from ins_functions \G;
|
||||
```
|
||||
|
||||
6. Show and Drop existing UDF
|
||||
|
||||
```sql
|
||||
SHOW functions;
|
||||
DROP FUNCTION function_name;
|
||||
```
|
||||
|
||||
### More Python UDF Samples
|
||||
|
||||
#### Scalar Function [pybitand](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pybitand.py)
|
||||
|
||||
The `pybitand` function implements the bitwise AND operation across multiple columns. If there is only one column, that column is returned. The `pybitand` function ignores null values.
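The authoritative code is in the linked file; as a rough sketch only, a scalar UDF with the described behavior could look like this (the structure below is illustrative and is not the contents of pybitand.py):

```python
def init():
    pass

def destroy():
    pass

def process(block):
    rows, cols = block.shape()
    result = []
    for i in range(rows):
        acc = None
        for j in range(cols):
            v = block.data(i, j)
            if v is None:        # null values are ignored
                continue
            acc = v if acc is None else acc & v
        result.append(acc)
    return result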
|
||||
|
||||
<details>
|
||||
<summary>pybitand.py</summary>
|
||||
|
||||
```Python
|
||||
{{#include tests/script/sh/pybitand.py}}
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
#### Aggregate Function [pyl2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pyl2norm.py)
|
||||
|
||||
The `pyl2norm` function computes the L2 (Euclidean) norm of all data in the input columns: it squares the values, sums them, and takes the square root.
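As with myspread above, the work happens in reduce(); a rough sketch of an aggregate UDF with this behavior keeps the running sum of squares in the pickled buffer (illustrative only; the actual code is in the linked file):

```python
import pickle

def init():
    pass

def destroy():
    pass

def start():
    return pickle.dumps(0.0)

def reduce(block, buf):
    sum_of_squares = pickle.loads(buf)
    rows, cols = block.shape()
    for i in range(rows):
        for j in range(cols):
            v = block.data(i, j)
            if v is not None:
                sum_of_squares += v * v
    return pickle.dumps(sum_of_squares)

def finish(buf):
    return pickle.loads(buf) ** 0.5
```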
|
||||
<details>
|
||||
<summary>pyl2norm.py</summary>
|
||||
|
||||
```Python
|
||||
{{#include tests/script/sh/pyl2norm.py}}
|
||||
```
|
||||
|
||||
</details>
|
||||
|
||||
#### Aggregate Function [pycumsum](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pycumsum.py)
|
||||
|
||||
The `pycumsum` function finds the cumulative sum for all data in the input columns.
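A rough sketch with the same structure as the pyl2norm example, keeping the running total in the buffer (illustrative only; the actual code is in the linked file):

```python
import pickle

def init():
    pass

def destroy():
    pass

def start():
    return pickle.dumps(0.0)

def reduce(block, buf):
    total = pickle.loads(buf)
    rows, cols = block.shape()
    for i in range(rows):
        for j in range(cols):
            v = block.data(i, j)
            if v is not None:
                total += v
    return pickle.dumps(total)

def finish(buf):
    return pickle.loads(buf)
```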
|
||||
<details>
|
||||
<summary>pycumsum.py</summary>
|
||||
|
||||
```Python
|
||||
{{#include tests/script/sh/pycumsum.py}}
|
||||
```
|
||||
|
||||
</details>
|
||||
## Manage and Use UDF
|
||||
You need to add a UDF to TDengine before using it in SQL queries. For more information about how to manage and invoke UDFs, please see [Manage and Use UDF](../12-taos-sql/26-udf.md).
|
||||
|
|
|
@ -1,11 +1,9 @@
|
|||
```java
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
|
||||
```
|
||||
```java
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
|
||||
```
|
||||
```java
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
|
||||
```
|
||||
```
|
||||
|
|
|
@ -0,0 +1,9 @@
|
|||
```java
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/WebsocketSubscribeDemo.java}}
|
||||
```
|
||||
```java
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
|
||||
```
|
||||
```java
|
||||
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
|
||||
```
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Developer Guide
|
||||
description: This document describes how to use the various components of TDengine from a developer's perspective.
|
||||
---
|
||||
|
||||
Before creating an application to process time-series data with TDengine, consider the following:
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Manual Deployment
|
||||
title: Manual Deployment and Management
|
||||
sidebar_label: Manual Deployment
|
||||
description: This document describes how to deploy TDengine on a server.
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
|
|
@ -1,25 +1,34 @@
|
|||
---
|
||||
sidebar_label: Kubernetes
|
||||
title: Deploying a TDengine Cluster in Kubernetes
|
||||
sidebar_label: Kubernetes
|
||||
description: This document describes how to deploy TDengine on Kubernetes.
|
||||
---
|
||||
|
||||
TDengine is a cloud-native time-series database that can be deployed on Kubernetes. This document gives a step-by-step description of how you can use YAML files to create a TDengine cluster and introduces common operations for TDengine in a Kubernetes environment.
|
||||
## Overview
|
||||
|
||||
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. This document describes, step by step, how to use YAML files to create a highly available TDengine cluster from scratch for production use, and highlights common operations on TDengine in a Kubernetes environment.
|
||||
|
||||
To meet [high availability](https://docs.taosdata.com/tdinternal/high-availability/) requirements, the cluster needs to meet the following conditions:
|
||||
|
||||
- 3 or more dnodes: TDengine does not allow multiple vnodes of the same vgroup to reside on the same dnode, so if you create a database with 3 replicas, the number of dnodes must be greater than or equal to 3
|
||||
- 3 mnodes: the mnode manages the entire TDengine cluster. By default a TDengine cluster has only one mnode; if the dnode hosting that mnode goes offline, the entire cluster becomes unavailable.
|
||||
- Databases with 3 replicas: the replica count is configured per database, so a database with 3 replicas requires at least 3 dnodes in the cluster. If any single dnode is offline, the whole cluster can still be used normally. **If two dnodes are offline, the cluster becomes unavailable, because the RAFT-based election cannot complete.** (Enterprise edition: in a disaster recovery scenario, if the data files of any node are damaged, the node can be restored by relaunching a blank dnode.)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before deploying TDengine on Kubernetes, perform the following:
|
||||
|
||||
* Current steps are compatible with Kubernetes v1.5 and later version.
|
||||
* Install and configure minikube, kubectl, and helm.
|
||||
* Install and deploy Kubernetes and ensure that it can be accessed and used normally. Update any container registries or other services as necessary.
|
||||
- This document applies to Kubernetes v1.19 and above
|
||||
- This document uses the **kubectl** tool for installation and deployment; please install the corresponding software in advance
|
||||
- Kubernetes has been installed and deployed and can access or update the necessary container registries or other services
|
||||
|
||||
You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
|
||||
|
||||
## Configure the service
|
||||
|
||||
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step. Add the ports required by TDengine:
|
||||
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step, then add the ports required by TDengine and record the value of the selector label `app` (in this example, `tdengine`) for later use:
|
||||
|
||||
```yaml
|
||||
```YAML
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
|
@ -30,10 +39,10 @@ metadata:
|
|||
spec:
|
||||
ports:
|
||||
- name: tcp6030
|
||||
- protocol: "TCP"
|
||||
protocol: "TCP"
|
||||
port: 6030
|
||||
- name: tcp6041
|
||||
- protocol: "TCP"
|
||||
protocol: "TCP"
|
||||
port: 6041
|
||||
selector:
|
||||
app: "tdengine"
|
||||
|
@ -41,10 +50,11 @@ spec:
|
|||
|
||||
## Configure the service as StatefulSet
|
||||
|
||||
Configure the TDengine service as a StatefulSet.
|
||||
Create the `tdengine.yaml` file and set `replicas` to 3. In this example, the region is set to Asia/Shanghai and 10 GB of standard storage are allocated per node. You can change the configuration based on your environment and business requirements.
|
||||
Following the Kubernetes guidance on workload types, we use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, in which `replicas` sets the number of cluster nodes to 3. The node time zone is set to Asia/Shanghai, and each node is allocated 5 Gi of standard storage (refer to the [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation for storage-class configuration). You can adjust the configuration to your actual environment and business requirements.
|
||||
|
||||
```yaml
|
||||
Pay special attention to the startupProbe configuration. If a dnode Pod goes down for a period of time and then restarts, the newly launched dnode Pod is temporarily unavailable. If the startupProbe window is too small, Kubernetes will consider the Pod abnormal and keep restarting it, so the dnode Pod restarts frequently and never returns to a normal state. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
|
||||
|
||||
```YAML
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
|
@ -68,14 +78,14 @@ spec:
|
|||
spec:
|
||||
containers:
|
||||
- name: "tdengine"
|
||||
image: "tdengine/tdengine:3.0.0.0"
|
||||
image: "tdengine/tdengine:3.0.7.1"
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
ports:
|
||||
- name: tcp6030
|
||||
- protocol: "TCP"
|
||||
protocol: "TCP"
|
||||
containerPort: 6030
|
||||
- name: tcp6041
|
||||
- protocol: "TCP"
|
||||
protocol: "TCP"
|
||||
containerPort: 6041
|
||||
env:
|
||||
# POD_NAME for FQDN config
|
||||
|
@ -101,12 +111,18 @@ spec:
|
|||
# Must set if you want a cluster.
|
||||
- name: TAOS_FIRST_EP
|
||||
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
|
||||
# TAOS_FQDN should always be set in k8s env.
|
||||
# TAOS_FQDN should always be set in k8s env.
|
||||
- name: TAOS_FQDN
|
||||
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
|
||||
volumeMounts:
|
||||
- name: taosdata
|
||||
mountPath: /var/lib/taos
|
||||
startupProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
failureThreshold: 360
|
||||
periodSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
|
@ -128,266 +144,401 @@ spec:
|
|||
storageClassName: "standard"
|
||||
resources:
|
||||
requests:
|
||||
storage: "10Gi"
|
||||
storage: "5Gi"
|
||||
```
|
||||
|
||||
## Use kubectl to deploy TDengine
|
||||
|
||||
Run the following commands:
|
||||
First create the corresponding namespace (for example, `kubectl create namespace tdengine-test`), and then execute the following commands in sequence:
|
||||
|
||||
```bash
|
||||
kubectl apply -f taosd-service.yaml
|
||||
kubectl apply -f tdengine.yaml
|
||||
```Bash
|
||||
kubectl apply -f taosd-service.yaml -n tdengine-test
|
||||
kubectl apply -f tdengine.yaml -n tdengine-test
|
||||
```
|
||||
|
||||
The preceding configuration generates a TDengine cluster with three nodes in which dnodes are automatically configured. You can run the `show dnodes` command to query the nodes in the cluster:
|
||||
The above configuration generates a three-node TDengine cluster in which dnodes are configured automatically. You can run the **show dnodes** command to view the nodes of the current cluster:
|
||||
|
||||
```bash
|
||||
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
|
||||
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
|
||||
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
|
||||
```
|
||||
|
||||
The output is as follows:
|
||||
|
||||
```
|
||||
```Bash
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
|
||||
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
|
||||
Query OK, 3 rows in database (0.003655s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
|
||||
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
|
||||
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
|
||||
Query OK, 3 row(s) in set (0.001853s)
|
||||
```
|
||||
|
||||
View the current mnode
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 2023-07-19 17:54:19.520
|
||||
Query OK, 1 row(s) in set (0.001282s)
|
||||
```
|
||||
|
||||
## Create mnode
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
|
||||
```
|
||||
|
||||
View mnode
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 2023-07-20 09:19:36.060
|
||||
*************************** 2.row ***************************
|
||||
id: 2
|
||||
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:05.600
|
||||
reboot_time: 2023-07-20 09:22:12.838
|
||||
*************************** 3.row ***************************
|
||||
id: 3
|
||||
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:20.042
|
||||
reboot_time: 2023-07-20 09:22:23.271
|
||||
Query OK, 3 row(s) in set (0.003108s)
|
||||
```
|
||||
|
||||
## Enable port forwarding
|
||||
|
||||
The kubectl port forwarding feature allows applications to access the TDengine cluster running on Kubernetes.
|
||||
Kubectl port forwarding enables applications to access TDengine clusters running in Kubernetes environments.
|
||||
|
||||
```
|
||||
kubectl port-forward tdengine-0 6041:6041 &
|
||||
```bash
|
||||
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
|
||||
```
|
||||
|
||||
Use curl to verify that the TDengine REST API is working on port 6041:
|
||||
Use **curl** to verify that the TDengine REST API is working on port 6041:
|
||||
|
||||
```
|
||||
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
|
||||
Handling connection for 6041
|
||||
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8],["wal_roll_period","INT",4],["wal_segment_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
|
||||
```bash
|
||||
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
|
||||
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
|
||||
```
|
||||
|
||||
## Enable the dashboard for visualization
|
||||
## Test cluster
|
||||
|
||||
The minikube dashboard command enables visualized cluster management.
|
||||
### Data preparation
|
||||
|
||||
```
|
||||
$ minikube dashboard
|
||||
* Verifying dashboard health ...
|
||||
* Launching proxy ...
|
||||
* Verifying proxy health ...
|
||||
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
|
||||
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
|
||||
#### taosBenchmark
|
||||
|
||||
Create a database with 3 replicas using taosBenchmark, write 100 million rows of data into it, and then query the data:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3
|
||||
|
||||
# query data
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;"
|
||||
|
||||
taos> select count(*) from test.meters;
|
||||
count(*) |
|
||||
========================
|
||||
100000000 |
|
||||
Query OK, 1 row(s) in set (0.103537s)
|
||||
```
|
||||
|
||||
In some public clouds, minikube cannot be remotely accessed if it is bound to 127.0.0.1. In this case, use the kubectl proxy command to map the port to 0.0.0.0. Then, you can access the dashboard by using a web browser to open the dashboard URL above on the public IP address and port of the virtual machine.
|
||||
View the vnode distribution by running `show dnodes`:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 8 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
|
||||
2 | tdengine-1.ta... | 8 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
|
||||
3 | tdengine-2.ta... | 8 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
|
||||
Query OK, 3 row(s) in set (0.001357s)
|
||||
```
|
||||
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
|
||||
|
||||
View the vnode distribution by running `show test.vgroups`:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
|
||||
|
||||
taos> show test.vgroups
|
||||
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
|
||||
==============================================================================================================================================================================================
|
||||
2 | test | 1267 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
3 | test | 1215 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
4 | test | 1215 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
5 | test | 1307 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
6 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
7 | test | 1275 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
8 | test | 1231 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
9 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
Query OK, 8 row(s) in set (0.001488s)
|
||||
```
|
||||
|
||||
#### Manual creation
|
||||
|
||||
Create a database `test1` with 3 replicas, create a table in it, and write 2 rows of data:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- \
|
||||
taos -s \
|
||||
"create database if not exists test1 replica 3;
|
||||
use test1;
|
||||
create table if not exists t1(ts timestamp, n int);
|
||||
insert into t1 values(now, 1)(now+1s, 2);"
|
||||
```
|
||||
|
||||
View the vnode distribution by running `show test1.vgroups`:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
|
||||
|
||||
taos> show test1.vgroups
|
||||
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
|
||||
==============================================================================================================================================================================================
|
||||
10 | test1 | 1 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
11 | test1 | 0 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
Query OK, 2 row(s) in set (0.001489s)
|
||||
```
|
||||
|
||||
### Test fault tolerance
|
||||
|
||||
The dnode where the mnode leader is located (dnode 1) goes offline:
|
||||
|
||||
```Bash
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 0/1 ErrImagePull 2 (2s ago) 20m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (6m48s ago) 20m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 21m 10.244.1.223 node85 <none> <none>
|
||||
```
|
||||
|
||||
At this time, the cluster re-elects an mnode leader, and the mnode on dnode 2 becomes the leader.
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
|
||||
Copyright (c) 2022 by TDengine, all rights reserved.
|
||||
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: offline
|
||||
status: offline
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 1970-01-01 08:00:00.000
|
||||
*************************** 2.row ***************************
|
||||
id: 2
|
||||
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:05.600
|
||||
reboot_time: 2023-07-20 09:32:00.227
|
||||
*************************** 3.row ***************************
|
||||
id: 3
|
||||
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:20.042
|
||||
reboot_time: 2023-07-20 09:32:00.026
|
||||
Query OK, 3 row(s) in set (0.001513s)
|
||||
```
|
||||
|
||||
The cluster can still read and write normally:
|
||||
|
||||
```Bash
|
||||
# insert
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
|
||||
|
||||
taos> insert into test1.t1 values(now, 1)(now+1s, 2);
|
||||
Insert OK, 2 row(s) affected (0.002098s)
|
||||
|
||||
# select
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select *from test1.t1"
|
||||
|
||||
taos> select *from test1.t1
|
||||
ts | n |
|
||||
========================================
|
||||
2023-07-19 18:04:58.104 | 1 |
|
||||
2023-07-19 18:04:59.104 | 2 |
|
||||
2023-07-19 18:06:00.303 | 1 |
|
||||
2023-07-19 18:06:01.303 | 2 |
|
||||
Query OK, 4 row(s) in set (0.001994s)
|
||||
```
|
||||
|
||||
Similarly, if a dnode hosting a non-leader mnode is dropped, reads and writes of course keep working normally, so that case is not shown here.
|
||||
|
||||
## Scaling Out Your Cluster
|
||||
|
||||
TDengine clusters can scale automatically:
|
||||
A TDengine cluster supports automatic expansion:
|
||||
|
||||
```bash
|
||||
```Bash
|
||||
kubectl scale statefulsets tdengine --replicas=4
|
||||
```
|
||||
|
||||
The preceding command increases the number of replicas to 4. After running this command, query the pod status:
|
||||
The parameter `--replicas=4` in the above command indicates that you want to expand the TDengine cluster to 4 nodes. After execution, first check the status of the Pods:
|
||||
|
||||
```bash
|
||||
kubectl get pods -l app=tdengine
|
||||
```Bash
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
```
|
||||
|
||||
The output is as follows:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 161m
|
||||
tdengine-1 1/1 Running 0 161m
|
||||
tdengine-2 1/1 Running 0 32m
|
||||
tdengine-3 1/1 Running 0 32m
|
||||
```Plain
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
|
||||
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
|
||||
```
|
||||
|
||||
The status of all pods is Running. Once the pod status changes to Ready, you can check the dnode status:
|
||||
At this point the Pod status is still Running; the dnode state within the TDengine cluster can only be seen after the Pod status changes to `Ready`:
|
||||
|
||||
```bash
|
||||
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
|
||||
```Bash
|
||||
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
|
||||
```
|
||||
|
||||
The following output shows that the TDengine cluster has been expanded to 4 replicas:
|
||||
The dnode list of the expanded four-node TDengine cluster:
|
||||
|
||||
```
|
||||
```Plain
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
|
||||
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
|
||||
4 | tdengine-3.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:33:16.039 | |
|
||||
Query OK, 4 rows in database (0.008377s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
|
||||
Query OK, 4 row(s) in set (0.003628s)
|
||||
```
|
||||
|
||||
## Scaling In Your Cluster
|
||||
|
||||
When you scale in a TDengine cluster, your data is migrated to different nodes. You must run the drop dnodes command in TDengine to remove dnodes before scaling in your Kubernetes environment.
|
||||
Since a TDengine cluster migrates data between nodes when scaling out or in, scaling in with **kubectl** requires first using the `drop dnode` command (**if there are databases with 3 replicas in the cluster, the number of dnodes after scale-in must still be greater than or equal to 3, otherwise the drop dnode operation will be aborted**); only after node removal completes should the Kubernetes cluster be scaled in.
|
||||
|
||||
Note: In a Kubernetes StatefulSet service, the newest pods are always removed first. For this reason, when you scale in your TDengine cluster, ensure that you drop the newest dnodes.
|
||||
Note: Since Pods in a StatefulSet can only be removed in reverse order of creation, TDengine dnodes must also be dropped in reverse order of creation; otherwise the remaining Pods will end up in an error state.
|
||||
|
||||
```
|
||||
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
|
||||
```
|
||||
|
||||
```bash
|
||||
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
|
||||
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
|
||||
Query OK, 3 rows in database (0.004861s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
Query OK, 3 row(s) in set (0.003324s)
|
||||
```
|
||||
|
||||
Verify that the dnode have been successfully removed by running the `kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"` command. Then run the following command to remove the pod:
|
||||
After confirming that the removal is successful (use `kubectl exec -i -t tdengine-0 -n tdengine-test -- taos -s "show dnodes"` to check the dnode list), use the kubectl command to remove the Pod:
|
||||
|
||||
```
|
||||
kubectl scale statefulsets tdengine --replicas=3
|
||||
```Plain
|
||||
kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
|
||||
```
|
||||
|
||||
The newest pod in the deployment is removed. Run the `kubectl get pods -l app=tdengine` command to query the pod status:
|
||||
The last Pod will be deleted. Use the command `kubectl get pods -l app=tdengine -n tdengine-test` to check the Pod status:
|
||||
|
||||
```
|
||||
$ kubectl get pods -l app=tdengine
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 4m7s
|
||||
tdengine-1 1/1 Running 0 3m55s
|
||||
tdengine-2 1/1 Running 0 2m28s
|
||||
```Plain
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 1/1 Running 4 (6h55m ago) 7h22m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (7h9m ago) 7h23m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 5h45m 10.244.1.224 node85 <none> <none>
|
||||
```
|
||||
|
||||
After the pod has been removed, manually delete the PersistentVolumeClaim (PVC). Otherwise, future scale-outs will attempt to use existing data.
|
||||
After the Pod is deleted, the PVC needs to be deleted manually; otherwise the old data will be reused at the next scale-out, and the new node will not be able to join the cluster normally.
|
||||
|
||||
```bash
|
||||
$ kubectl delete pvc taosdata-tdengine-3
|
||||
```Bash
|
||||
kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
|
||||
```
|
||||
|
||||
Your cluster has now been safely scaled in, and you can scale it out again as necessary.
|
||||
The cluster state at this time is safe and can be scaled up again if needed.
|
||||
|
||||
```bash
|
||||
$ kubectl scale statefulsets tdengine --replicas=4
|
||||
```Bash
|
||||
kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test
|
||||
statefulset.apps/tdengine scaled
|
||||
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 35m
|
||||
tdengine-1 1/1 Running 0 34m
|
||||
tdengine-2 1/1 Running 0 12m
|
||||
tdengine-3 0/1 ContainerCreating 0 4s
|
||||
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 35m
|
||||
tdengine-1 1/1 Running 0 34m
|
||||
tdengine-2 1/1 Running 0 12m
|
||||
tdengine-3 0/1 Running 0 7s
|
||||
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
|
||||
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 1/1 Running 4 (6h59m ago) 7h27m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (7h13m ago) 7h27m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 5h49m 10.244.1.224 node85 <none> <none>
|
||||
tdengine-3 1/1 Running 0 20s 10.244.2.77 node86 <none> <none>
|
||||
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
|
||||
======================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
|
||||
5 | tdengine-2.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:01:36.479 | |
|
||||
6 | tdengine-3.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:13:54.411 | |
|
||||
Query OK, 4 row(s) in set (0.001348s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
5 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | | | |
|
||||
Query OK, 4 row(s) in set (0.003881s)
|
||||
```
|
||||
|
||||
## Remove a TDengine Cluster
|
||||
|
||||
To fully remove a TDengine cluster, you must delete its statefulset, svc, configmap, and pvc entries:
|
||||
> **When deleting the PVC, pay attention to the PV's persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when the PVC is deleted the PV is cleaned up automatically together with the underlying CSI storage resources. If the policy does not clean up the PV automatically when the PVC is deleted, then after deleting the PVC and manually cleaning up the PV, the CSI storage resources backing the PV may not be released.**
|
||||
|
||||
```bash
|
||||
kubectl delete statefulset -l app=tdengine
|
||||
kubectl delete svc -l app=tdengine
|
||||
kubectl delete pvc -l app=tdengine
|
||||
kubectl delete configmap taoscfg
|
||||
To completely remove a TDengine cluster, you need to clean up the statefulset, svc, configmap, and pvc respectively.
|
||||
|
||||
```Bash
|
||||
kubectl delete statefulset -l app=tdengine -n tdengine-test
|
||||
kubectl delete svc -l app=tdengine -n tdengine-test
|
||||
kubectl delete pvc -l app=tdengine -n tdengine-test
|
||||
kubectl delete configmap taoscfg -n tdengine-test
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Error 1
|
||||
|
||||
If you remove a pod without first running `drop dnode`, some TDengine nodes will go offline.
|
||||
No "drop dnode" is directly reduced. Since the TDengine has not deleted the node, the reduced pod causes some nodes in the TDengine cluster to be offline.
|
||||
|
||||
```
|
||||
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
|
||||
```Plain
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
|
||||
======================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
|
||||
5 | tdengine-2.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
|
||||
6 | tdengine-3.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
|
||||
Query OK, 4 row(s) in set (0.001323s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
5 | tdengine-3.ta... | 0 | 16 | offline | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout | | |
|
||||
Query OK, 4 row(s) in set (0.003862s)
|
||||
```
|
||||
|
||||
### Error 2
|
||||
## Finally
|
||||
|
||||
If the number of nodes after a scale-in is less than the value of the replica parameter, the cluster will go down:
|
||||
For high availability and high reliability of TDengine in a Kubernetes environment, protection against hardware damage and disasters is provided at two levels:
|
||||
|
||||
Create a database with replica set to 2 and add data.
|
||||
1. Disaster recovery at the underlying distributed block storage level, i.e. multi-replica block storage. Popular distributed block storage systems such as Ceph provide multi-replica capability, spreading storage replicas across different racks, cabinets, machine rooms, and data centers (or you can directly use the block storage services provided by public cloud vendors).
|
||||
2. Disaster recovery at the TDengine level: in TDengine Enterprise, when a dnode goes permanently offline (for example, because the physical machine's disk is damaged and data is lost), a blank dnode can be relaunched to take over the work of the original dnode.
|
||||
|
||||
```bash
|
||||
kubectl exec -i -t tdengine-0 -- \
|
||||
taos -s \
|
||||
"create database if not exists test replica 2;
|
||||
use test;
|
||||
create table if not exists t1(ts timestamp, n int);
|
||||
insert into t1 values(now, 1)(now+1s, 2);"
|
||||
Finally, you are welcome to try [TDengine Cloud](https://cloud.tdengine.com/) and experience the one-stop, fully managed TDengine cloud service.
|
||||
|
||||
|
||||
```
|
||||
|
||||
Scale in to one node:
|
||||
|
||||
```bash
|
||||
kubectl scale statefulsets tdengine --replicas=1
|
||||
|
||||
```
|
||||
|
||||
In the TDengine CLI, you can see that no database operations succeed:
|
||||
|
||||
```
|
||||
taos> show dnodes;
|
||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
||||
======================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
|
||||
Query OK, 2 row(s) in set (0.000845s)
|
||||
|
||||
taos> show dnodes;
|
||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
||||
======================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
|
||||
Query OK, 2 row(s) in set (0.000837s)
|
||||
|
||||
taos> use test;
|
||||
Database changed.
|
||||
|
||||
taos> insert into t1 values(now, 3);
|
||||
|
||||
DB error: Unable to resolve FQDN (0.013874s)
|
||||
|
||||
```
|
||||
> TDengine Cloud is a minimalist, fully managed time-series data processing cloud service built on the open-source time-series database TDengine. Besides the high-performance time-series database, it provides system functions such as caching, subscription, and stream computing, offers convenient and secure data sharing, and includes numerous enterprise-level features. It allows enterprises in IoT, Industrial Internet, finance, and IT operations monitoring to significantly reduce the labor and operating costs of managing time-series data.
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Helm
|
||||
title: Use Helm to deploy TDengine
|
||||
sidebar_label: Helm
|
||||
description: This document describes how to deploy TDengine on Kubernetes by using Helm.
|
||||
---
|
||||
|
||||
Helm is a package manager for Kubernetes that can provide more capabilities in deploying on Kubernetes.
|
||||
|
@ -22,7 +23,7 @@ Helm uses the kubectl and kubeconfig configurations to perform Kubernetes operat
|
|||
To use TDengine Chart, download it from GitHub:
|
||||
|
||||
```bash
|
||||
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.0.tgz
|
||||
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.2.tgz
|
||||
|
||||
```
|
||||
|
||||
|
@ -38,7 +39,7 @@ With minikube, the default value is standard.
|
|||
Use Helm commands to install TDengine:
|
||||
|
||||
```bash
|
||||
helm install tdengine tdengine-3.0.0.tgz \
|
||||
helm install tdengine tdengine-3.0.2.tgz \
|
||||
--set storage.className=<your storage class name>
|
||||
|
||||
```
|
||||
|
@ -46,7 +47,7 @@ helm install tdengine tdengine-3.0.0.tgz \
|
|||
You can configure a small storage size in minikube to ensure that your deployment does not exceed your available disk space.
|
||||
|
||||
```bash
|
||||
helm install tdengine tdengine-3.0.0.tgz \
|
||||
helm install tdengine tdengine-3.0.2.tgz \
|
||||
--set storage.className=standard \
|
||||
--set storage.dataSize=2Gi \
|
||||
--set storage.logSize=10Mi
|
||||
|
@ -83,14 +84,14 @@ You can configure custom parameters in TDengine with the `values.yaml` file.
|
|||
Run the `helm show values` command to see all parameters supported by TDengine Chart.
|
||||
|
||||
```bash
|
||||
helm show values tdengine-3.0.0.tgz
|
||||
helm show values tdengine-3.0.2.tgz
|
||||
|
||||
```
|
||||
|
||||
Save the output of this command as `values.yaml`. Then you can modify this file with your desired values and use it to deploy a TDengine cluster:
|
||||
|
||||
```bash
|
||||
helm install tdengine tdengine-3.0.0.tgz -f values.yaml
|
||||
helm install tdengine tdengine-3.0.2.tgz -f values.yaml
|
||||
|
||||
```
|
||||
|
||||
|
@ -107,7 +108,7 @@ image:
|
|||
prefix: tdengine/tdengine
|
||||
#pullPolicy: Always
|
||||
# Overrides the image tag whose default is the chart appVersion.
|
||||
# tag: "3.0.0.0"
|
||||
# tag: "3.0.2.0"
|
||||
|
||||
service:
|
||||
# ClusterIP is the default service type, use NodeIP only if you know what you are doing.
|
||||
|
@ -155,15 +156,15 @@ clusterDomainSuffix: ""
|
|||
# See the [Configuration Variables](../../reference/config)
|
||||
#
|
||||
# Note:
|
||||
# 1. firstEp/secondEp: should not be setted here, it's auto generated at scale-up.
|
||||
# 2. serverPort: should not be setted, we'll use the default 6030 in many places.
|
||||
# 3. fqdn: will be auto generated in kubenetes, user should not care about it.
|
||||
# 1. firstEp/secondEp: should not be set here, it's auto generated at scale-up.
|
||||
# 2. serverPort: should not be set, we'll use the default 6030 in many places.
|
||||
# 3. fqdn: will be auto generated in kubernetes, user should not care about it.
|
||||
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
|
||||
#
|
||||
# Btw, keep quotes "" around the value like below, even the value will be number or not.
|
||||
taoscfg:
|
||||
# Starts as cluster or not, must be 0 or 1.
|
||||
# 0: all pods will start as a seperate TDengine server
|
||||
# 0: all pods will start as a separate TDengine server
|
||||
# 1: pods will start as TDengine server cluster. [default]
|
||||
CLUSTER: "1"
|
||||
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Deployment
|
||||
description: This document describes how to deploy a TDengine cluster on a server, on Kubernetes, and by using Helm.
|
||||
---
|
||||
|
||||
TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
|
||||
|
|
|
@ -1,7 +1,7 @@
|
|||
---
|
||||
sidebar_label: Data Types
|
||||
title: Data Types
|
||||
description: 'TDengine supports a variety of data types including timestamp, float, JSON and many others.'
|
||||
sidebar_label: Data Types
|
||||
description: This document describes the data types that TDengine supports.
|
||||
---
|
||||
|
||||
## Timestamp
|
||||
|
@ -11,7 +11,7 @@ When using TDengine to store and query data, the most important part of the data
|
|||
- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`.
|
||||
- Internal function `NOW` can be used to get the current timestamp on the client side.
|
||||
- The current timestamp of the client side is applied when `NOW` is used to insert data.
|
||||
- Epoch Time:timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
|
||||
- Epoch Time: timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
|
||||
- Add/subtract operations can be carried out on timestamps. For example `NOW-2h` means 2 hours prior to the time at which query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `SELECT * FROM t1 WHERE ts > NOW-2w AND ts <= NOW-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.
|
||||
|
||||
Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.
|
||||
|
@ -24,29 +24,38 @@ CREATE DATABASE db_name PRECISION 'ns';
|
|||
|
||||
In TDengine, the data types below can be used when specifying a column or tag.
|
||||
|
||||
| # | **type** | **Bytes** | **Description** |
|
||||
| --- | :--------------: | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | TIMESTAMP | 8 | Default precision is millisecond, microsecond and nanosecond are also supported. |
|
||||
| 2 | INT | 4 | Integer, the value range is [-2^31, 2^31-1]. |
|
||||
| 3 | INT UNSIGNED | 4 | Unsigned integer, the value range is [0, 2^32-1]. |
|
||||
| 4 | BIGINT | 8 | Long integer, the value range is [-2^63, 2^63-1]. |
|
||||
| 5 | BIGINT UNSIGNED | 8 | unsigned long integer, the value range is [0, 2^64-1]. |
|
||||
| 6 | FLOAT | 4 | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38]. |
|
||||
| 7 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308]. |
|
||||
| 8 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. |
|
||||
| 9 | SMALLINT | 2 | Short integer, the value range is [-32768, 32767]. |
|
||||
| 10 | INT UNSIGNED | 2 | unsigned integer, the value range is [0, 65535]. |
|
||||
| 11 | TINYINT | 1 | Single-byte integer, the value range is [-128, 127]. |
|
||||
| 12 | TINYINT UNSIGNED | 1 | unsigned single-byte integer, the value range is [0, 255]. |
|
||||
| 13 | BOOL | 1 | Bool, the value range is {true, false}. |
|
||||
| 14 | NCHAR | User Defined | Multi-byte string that can include multi byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
|
||||
| 15 | JSON | | JSON type can only be used on tags. A tag of json type is excluded with any other tags of any other type. |
|
||||
| 16 | VARCHAR | User-defined | Alias of BINARY |
|
||||
|
||||
| # | **type** | **Bytes** | **Description** |
|
||||
| --- | :---------------: | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | TIMESTAMP | 8 | Default precision is millisecond, microsecond and nanosecond are also supported. |
|
||||
| 2 | INT | 4 | Integer, the value range is [-2^31, 2^31-1]. |
|
||||
| 3 | INT UNSIGNED | 4 | Unsigned integer, the value range is [0, 2^32-1]. |
|
||||
| 4 | BIGINT | 8 | Long integer, the value range is [-2^63, 2^63-1]. |
|
||||
| 5 | BIGINT UNSIGNED | 8 | unsigned long integer, the value range is [0, 2^64-1]. |
|
||||
| 6 | FLOAT | 4 | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38]. |
|
||||
| 7 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308]. |
|
||||
| 8 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. |
|
||||
| 9 | SMALLINT | 2 | Short integer, the value range is [-32768, 32767]. |
|
||||
| 10 | SMALLINT UNSIGNED | 2 | unsigned integer, the value range is [0, 65535]. |
|
||||
| 11 | TINYINT | 1 | Single-byte integer, the value range is [-128, 127]. |
|
||||
| 12 | TINYINT UNSIGNED | 1 | unsigned single-byte integer, the value range is [0, 255]. |
|
||||
| 13 | BOOL | 1 | Bool, the value range is {true, false}. |
|
||||
| 14 | NCHAR | User Defined | Multi-byte string that can include multi byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
|
||||
| 15  | JSON              |              | JSON type can only be used on tags. A tag of JSON type cannot be used together with tags of any other type.             |
| 16  | VARCHAR           | User-defined | Alias of BINARY                                                                                                          |
| 17  | GEOMETRY          | User-defined | Geometry type. Supports the 2D subtypes POINT, LINESTRING, and POLYGON.                                                  |
|
||||
:::note
|
||||
|
||||
- Each row of the table cannot be longer than 48KB (64KB since version 3.0.5.0) (note that each BINARY/NCHAR/GEOMETRY column takes up an additional 2 bytes of storage space).
|
||||
- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
|
||||
- The length of BINARY can be up to 16,374 bytes. The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. The literal single quote inside the string must be preceded with back slash like `\'`
|
||||
- The length of BINARY can be up to 16,374 bytes (since version 3.0.5.0, up to 65,517 bytes for data columns and 16,382 bytes for tag columns). The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. The literal single quote inside the string must be preceded with back slash like `\'`
|
||||
- The maximum length of a GEOMETRY data column is 65,517 bytes, and the maximum length of a GEOMETRY tag column is 16,382 bytes. The 2D subtypes POINT, LINESTRING, and POLYGON are supported. The following table describes the length calculation method:
|
||||
|
||||
| # | **Syntax** | **MinLen** | **MaxLen** | **Growth of each point** |
|
||||
|---|--------------------------------------|------------|------------|--------------------------|
|
||||
| 1 | POINT(1.0 1.0) | 21 | 21 | NA |
|
||||
| 2 | LINESTRING(1.0 1.0, 2.0 2.0) | 9+2*16 | 9+4094*16 | +16 |
|
||||
| 3 | POLYGON((1.0 1.0, 2.0 2.0, 1.0 1.0)) | 13+3*16 | 13+4094*16 | +16 |
|
||||
|
||||
- Numeric values in SQL statements will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.
|
||||
|
||||
:::
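As a rough sketch of how these lengths come into play, the statements below create a table with a GEOMETRY column and insert WKT literals. The table name, the byte length `GEOMETRY(64)`, and the use of `NOW + 1s` are illustrative assumptions, not definitions taken from this section.

```sql
-- Hypothetical example: a GEOMETRY column sized to hold small 2D shapes (a POINT needs 21 bytes).
CREATE TABLE geo_demo (ts TIMESTAMP, geom GEOMETRY(64));

-- GEOMETRY values are written as WKT strings.
INSERT INTO geo_demo VALUES (NOW, 'POINT(1.0 1.0)');
INSERT INTO geo_demo VALUES (NOW + 1s, 'LINESTRING(1.0 1.0, 2.0 2.0)');
```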
@ -1,7 +1,7 @@
|
|||
---
|
||||
sidebar_label: Database
|
||||
title: Database
|
||||
description: "create and drop database, show or change database parameters"
|
||||
sidebar_label: Database
|
||||
description: This document describes how to create and perform operations on databases.
|
||||
---
|
||||
|
||||
## Create a Database
|
||||
|
@ -27,20 +27,21 @@ database_option: {
|
|||
| PRECISION {'ms' | 'us' | 'ns'}
|
||||
| REPLICA value
|
||||
| RETENTIONS ingestion_duration:keep_duration ...
|
||||
| STRICT {'off' | 'on'}
|
||||
| WAL_LEVEL {1 | 2}
|
||||
| VGROUPS value
|
||||
| SINGLE_STABLE {0 | 1}
|
||||
| STT_TRIGGER value
|
||||
| TABLE_PREFIX value
|
||||
| TABLE_SUFFIX value
|
||||
| TSDB_PAGESIZE value
|
||||
| WAL_RETENTION_PERIOD value
|
||||
| WAL_ROLL_PERIOD value
|
||||
| WAL_RETENTION_SIZE value
|
||||
| WAL_SEGMENT_SIZE value
|
||||
}
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
- BUFFER: specifies the size (in MB) of the write buffer for each vnode. Enter a value between 3 and 16384. The default value is 96.
|
||||
- BUFFER: specifies the size (in MB) of the write buffer for each vnode. Enter a value between 3 and 16384. The default value is 256.
|
||||
- CACHEMODEL: specifies how the latest data in subtables is stored in the cache. The default value is none.
|
||||
- none: The latest data is not cached.
|
||||
- last_row: The last row of each subtable is cached. This option significantly improves the performance of the LAST_ROW function.
|
||||
|
@ -55,15 +56,12 @@ database_option: {
|
|||
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
|
||||
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
|
||||
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
|
||||
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default.
|
||||
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. TDengine Enterprise supports the [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) feature, so multiple KEEP values (comma separated, up to 3 values, satisfying keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d) are supported; TDengine OSS does not support Tiered Storage (if multiple KEEP values are configured, they do not take effect and only the maximum KEEP value is used).
|
||||
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
|
||||
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
|
||||
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
|
||||
- REPLICA: specifies the number of replicas that are made of the database. Enter 1 or 3. The default value is 1. The value of the REPLICA parameter cannot exceed the number of dnodes in the cluster.
|
||||
- RETENTIONS: specifies the retention period for data aggregated at various intervals. For example, RETENTIONS 15s:7d,1m:21d,15m:50d indicates that data aggregated every 15 seconds is retained for 7 days, data aggregated every 1 minute is retained for 21 days, and data aggregated every 15 minutes is retained for 50 days. You must enter three aggregation intervals and corresponding retention periods.
|
||||
- STRICT: specifies whether strong data consistency is enabled. The default value is off.
|
||||
- on: Strong consistency is enabled and implemented through the Raft consensus algorithm. In this mode, an operation is considered successful once it is confirmed by half of the nodes in the cluster.
|
||||
- off: Strong consistency is disabled. In this mode, an operation is considered successful when it is initiated by the local node.
|
||||
- WAL_LEVEL: specifies whether fsync is enabled. The default value is 1.
|
||||
- 1: WAL is enabled but fsync is disabled.
|
||||
- 2: WAL and fsync are both enabled.
|
||||
|
@ -71,11 +69,12 @@ database_option: {
|
|||
- SINGLE_STABLE: specifies whether the database can contain more than one supertable.
|
||||
- 0: The database can contain multiple supertables.
|
||||
- 1: The database can contain only one supertable.
|
||||
- WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. Enter a time in seconds. The default value of single copy is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted. The default value of multiple copy is 4 days.
|
||||
- WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. Enter a size in KB. The default value of single copy is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted. The default value of multiple copy is -1.
|
||||
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. The default value of single copy is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk. The default values of multiple copy is 1 day.
|
||||
- WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. After the current WAL file reaches this size, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk.
|
||||
|
||||
- STT_TRIGGER: specifies the number of file merges triggered by flushed files. The default is 8, ranging from 1 to 16. For high-frequency scenarios with few tables, it is recommended to use the default configuration or a smaller value for this parameter; For multi-table low-frequency scenarios, it is recommended to configure this parameter with a larger value.
|
||||
- TABLE_PREFIX: When positive, the number of leading characters of the table name to ignore when distributing a table to a vgroup; when negative, only that many leading characters are used for distribution. The default value is 0. For example, for the table name v30001, "0001" is used if TABLE_PREFIX is set to 2, but "v3" is used if TABLE_PREFIX is set to -2. This helps you control the distribution of tables.
- TABLE_SUFFIX: When positive, the number of trailing characters of the table name to ignore when distributing a table to a vgroup; when negative, only that many trailing characters are used for distribution. The default value is 0. For example, for the table name v30001, "v300" is used if TABLE_SUFFIX is set to 2, but "01" is used if TABLE_SUFFIX is set to -2. This helps you control the distribution of tables.
|
||||
- TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB.
|
||||
- WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 3600, which means the data in the latest 3600 seconds will be kept in the WAL for data subscription. Please adjust this parameter to a value appropriate for your data subscription.
- WAL_RETENTION_SIZE: specifies the maximum total size of WAL files that are kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files kept for consumption has no upper limit.
|
||||
### Example Statement
|
||||
|
||||
```sql
|
||||
|
@ -112,12 +111,34 @@ alter_database_options:
|
|||
alter_database_option: {
|
||||
CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
|
||||
| CACHESIZE value
|
||||
| BUFFER value
|
||||
| PAGES value
|
||||
| REPLICA value
|
||||
| STT_TRIGGER value
|
||||
| WAL_LEVEL value
|
||||
| WAL_FSYNC_PERIOD value
|
||||
| KEEP value
|
||||
| WAL_RETENTION_PERIOD value
|
||||
| WAL_RETENTION_SIZE value
|
||||
}
|
||||
```
|
||||
|
||||
### ALTER CACHESIZE
|
||||
|
||||
The command of changing database configuration parameters is easy to use, but it's hard to determine whether a parameter is proper or not. In this section we will describe how to determine whether cachesize is big enough.
|
||||
|
||||
1. How to check cachesize?
|
||||
|
||||
You can use `select * from information_schema.ins_databases;` to get the value of cachesize.
|
||||
|
||||
2. How to check cacheload?
|
||||
|
||||
You can use `show <db_name>.vgroups;` to check the value of cacheload.
|
||||
|
||||
3. Determine whether cachesize is big enough
|
||||
|
||||
If the value of `cacheload` is very close to the value of `cachesize`, then `cachesize` is probably too small. If the value of `cacheload` is much smaller than the value of `cachesize`, then `cachesize` is big enough. You can use this simple principle to make the decision. Depending on how much memory is available in your system, you can choose to double `cachesize` or increase it by a factor of five or even more.
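For example, a minimal sketch of the three checks above; `db_name` is a placeholder and the new CACHESIZE value is only an illustration:

```sql
-- 1. Check the configured cachesize of each database.
SELECT * FROM information_schema.ins_databases;

-- 2. Check the actual cacheload of each vgroup in the target database.
SHOW db_name.vgroups;

-- 3. If cacheload is close to cachesize, enlarge the cache (value in MB).
ALTER DATABASE db_name CACHESIZE 2;
```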
|
||||
|
||||
:::note
|
||||
Other parameters cannot be modified after the database has been created.
|
||||
|
||||
|
@ -142,7 +163,7 @@ The preceding SQL statement can be used in migration scenarios. This command can
|
|||
### View Database Configuration
|
||||
|
||||
```sql
|
||||
SHOW DATABASES \G;
|
||||
SELECT * FROM INFORMATION_SCHEMA.INS_DATABASES WHERE NAME='DBNAME' \G;
|
||||
```
|
||||
|
||||
The preceding SQL statement shows the value of each parameter for the specified database. One value is displayed per line.
|
||||
|
@ -154,3 +175,27 @@ TRIM DATABASE db_name;
|
|||
```
|
||||
|
||||
The preceding SQL statement deletes data that has expired and orders the remaining data in accordance with the storage configuration.
|
||||
|
||||
## Flush Data
|
||||
|
||||
```sql
|
||||
FLUSH DATABASE db_name;
|
||||
```
|
||||
|
||||
Flushes data from memory onto disk. Executing this command before shutting down a node avoids data recovery after restart and speeds up the startup process.
|
||||
|
||||
## Redistribute Vgroup
|
||||
|
||||
```sql
|
||||
REDISTRIBUTE VGROUP vgroup_no DNODE dnode_id1 [DNODE dnode_id2] [DNODE dnode_id3]
|
||||
```
|
||||
|
||||
Adjust the distribution of vnodes in the vgroup according to the given list of dnodes.
|
||||
|
||||
## Balance Vgroup
|
||||
|
||||
```sql
|
||||
BALANCE VGROUP
|
||||
```
|
||||
|
||||
Automatically adjusts the distribution of vnodes in all vgroups of the cluster, which is equivalent to load balancing the data of the cluster at the vnode level.
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Table
|
||||
description: This document describes how to create and perform operations on standard tables and subtables.
|
||||
---
|
||||
|
||||
## Create Table
|
||||
|
@ -8,27 +9,27 @@ You create standard tables and subtables with the `CREATE TABLE` statement.
|
|||
|
||||
```sql
|
||||
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]
|
||||
|
||||
|
||||
CREATE TABLE create_subtable_clause
|
||||
|
||||
|
||||
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
|
||||
[TAGS (create_definition [, create_definition] ...)]
|
||||
[table_options]
|
||||
|
||||
|
||||
create_subtable_clause: {
|
||||
create_subtable_clause [create_subtable_clause] ...
|
||||
| [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
|
||||
}
|
||||
|
||||
|
||||
create_definition:
|
||||
col_name column_definition
|
||||
|
||||
|
||||
column_definition:
|
||||
type_name [comment 'string_value']
|
||||
|
||||
|
||||
table_options:
|
||||
table_option ...
|
||||
|
||||
|
||||
table_option: {
|
||||
COMMENT 'string_value'
|
||||
| WATERMARK duration[,duration]
|
||||
|
@ -44,9 +45,9 @@ table_option: {
|
|||
|
||||
1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
|
||||
2. The maximum length of the table name is 192 bytes.
|
||||
3. The maximum length of each row is 48k bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
|
||||
3. The maximum length of each row is 48 KB (64 KB since version 3.0.5.0). Please note that the extra 2 bytes used by each BINARY/NCHAR/GEOMETRY column are also counted.
|
||||
4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
|
||||
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
|
||||
5. The maximum length in bytes must be specified when using BINARY/NCHAR/GEOMETRY types.
|
||||
6. Escape character "\`" can be used to avoid the conflict between table names and reserved keywords, above rules will be bypassed when using escape character on table names, but the upper limit for the name length is still valid. The table names specified using escape character are case sensitive.
|
||||
For example \`aBc\` and \`abc\` are different table names but `abc` and `aBc` are same table names because they are both converted to `abc` internally.
|
||||
Only ASCII visible characters can be used with escape character.
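For instance, a small sketch of the escaping rules above; the table names and columns are illustrative:

```sql
-- Backquoted names keep their case, so `aBc` and `abc` are two different tables.
CREATE TABLE `aBc` (ts TIMESTAMP, v INT);
CREATE TABLE `abc` (ts TIMESTAMP, v INT);

-- Without backquotes the name is converted to lowercase, so this targets the table `abc`.
INSERT INTO abc VALUES (NOW, 1);
```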
|
||||
|
@ -57,7 +58,7 @@ table_option: {
|
|||
3. MAX_DELAY: specifies the maximum latency for pushing computation results. The default value is 15 minutes or the value of the INTERVAL parameter, whichever is smaller. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
|
||||
4. ROLLUP: specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database. You can specify only one function to roll up. The rollup takes effect on all columns except TS. Enter one of the following values: avg, sum, min, max, last, or first.
|
||||
5. SMA: specifies functions on which to enable small materialized aggregates (SMA). SMA is user-defined precomputation of aggregates based on data blocks. Enter one of the following values: max, min, or sum This parameter can be used with supertables and standard tables.
|
||||
6. TTL: specifies the time to live (TTL) for the table. If TTL is specified when creatinga table, after the time period for which the table has been existing is over TTL, TDengine will automatically delete the table. Please be noted that the system may not delete the table at the exact moment that the TTL expires but guarantee there is such a system and finally the table will be deleted. The unit of TTL is in days. The default value is 0, i.e. never expire.
|
||||
6. TTL: specifies the time to live (TTL) for the table. If TTL is specified when creating a table, TDengine will automatically delete the table once it has existed longer than the TTL period. Note that the system may not delete the table at the exact moment the TTL expires, but it guarantees that the table will eventually be deleted. The unit of TTL is days. The default value is 0, i.e. never expire.
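For example, a hedged sketch combining the COMMENT and TTL options described above; the table name, columns, and values are illustrative:

```sql
-- The table is dropped automatically some time after it has existed for 90 days.
CREATE TABLE sensor_tmp (ts TIMESTAMP, val DOUBLE) TTL 90 COMMENT 'temporary sensor data';
```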
|
||||
|
||||
## Create Subtables
|
||||
|
||||
|
@ -87,7 +88,7 @@ You can create multiple subtables in a single SQL statement provided that all su
|
|||
|
||||
```sql
|
||||
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||
|
||||
|
||||
alter_table_clause: {
|
||||
alter_table_options
|
||||
| ADD COLUMN col_name column_type
|
||||
|
@ -95,10 +96,10 @@ alter_table_clause: {
|
|||
| MODIFY COLUMN col_name column_type
|
||||
| RENAME COLUMN old_col_name new_col_name
|
||||
}
|
||||
|
||||
|
||||
alter_table_options:
|
||||
alter_table_option ...
|
||||
|
||||
|
||||
alter_table_option: {
|
||||
TTL value
|
||||
| COMMENT 'string_value'
|
||||
|
@ -141,15 +142,15 @@ ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
|
|||
|
||||
```sql
|
||||
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||
|
||||
|
||||
alter_table_clause: {
|
||||
alter_table_options
|
||||
| SET TAG tag_name = new_tag_value
|
||||
}
|
||||
|
||||
|
||||
alter_table_options:
|
||||
alter_table_option ...
|
||||
|
||||
|
||||
alter_table_option: {
|
||||
TTL value
|
||||
| COMMENT 'string_value'
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Supertable
|
||||
title: Supertable
|
||||
sidebar_label: Supertable
|
||||
description: This document describes how to create and perform operations on supertables.
|
||||
---
|
||||
|
||||
## Create a Supertable
|
||||
|
@ -12,12 +13,11 @@ create_definition:
|
|||
col_name column_definition
|
||||
|
||||
column_definition:
|
||||
type_name [COMMENT 'string_value']
|
||||
type_name
|
||||
```
|
||||
|
||||
**More explanations**
|
||||
- Each supertable can have a maximum of 4096 columns, including tags. The minimum number of columns is 3: a timestamp column used as the key, one tag column, and one data column.
|
||||
- When you create a supertable, you can add comments to columns and tags.
|
||||
- The TAGS keyword defines the tag columns for the supertable. The following restrictions apply to tag columns:
|
||||
- A tag column can use the TIMESTAMP data type, but the values in the column must be fixed numbers. Timestamps including formulae, such as "now + 10s", cannot be stored in a tag column.
|
||||
- The name of a tag column cannot be the same as the name of any other column.
|
||||
|
@ -33,7 +33,7 @@ column_definition:
|
|||
SHOW STABLES [LIKE tb_name_wildcard];
|
||||
```
|
||||
|
||||
The preceding SQL statement shows all supertables in the current TDengine database, including the name, creation time, number of columns, number of tags, and number of subtabels for each supertable.
|
||||
The preceding SQL statement shows all supertables in the current TDengine database.
|
||||
|
||||
### View the CREATE Statement for a Supertable
|
||||
|
||||
|
@ -51,6 +51,11 @@ DESCRIBE [db_name.]stb_name;
|
|||
|
||||
### View tag information for all child tables in the supertable
|
||||
|
||||
```
|
||||
SHOW TABLE TAGS FROM table_name [FROM db_name];
|
||||
SHOW TABLE TAGS FROM [db_name.]table_name;
|
||||
```
|
||||
|
||||
```
|
||||
taos> SHOW TABLE TAGS FROM st1;
|
||||
tbname | id | loc |
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Insert
|
||||
title: Insert
|
||||
sidebar_label: Insert
|
||||
description: This document describes how to insert data into TDengine.
|
||||
---
|
||||
|
||||
## Syntax
|
||||
|
@ -27,7 +28,7 @@ INSERT INTO tb_name [(field1_name, ...)] subquery
|
|||
2. The precision of a timestamp depends on its format. The precision configured for the database affects only timestamps that are inserted as long integers (UNIX time). Timestamps inserted as date and time strings are not affected. As an example, the timestamp 2021-07-13 16:16:48 is equivalent to 1626164208 in UNIX time. This UNIX time is modified to 1626164208000 for databases with millisecond precision, 1626164208000000 for databases with microsecond precision, and 1626164208000000000 for databases with nanosecond precision.
|
||||
|
||||
3. If you want to insert multiple rows simultaneously, do not use the NOW function in the timestamp. Using the NOW function in this situation will cause multiple rows to have the same timestamp and prevent them from being stored correctly. This is because the NOW function obtains the current time on the client, and multiple instances of NOW in a single statement will return the same time.
|
||||
The earliest timestamp that you can use when inserting data is equal to the current time on the server minus the value of the KEEP parameter. The latest timestamp that you can use when inserting data is equal to the current time on the server plus the value of the DURATION parameter. You can configure the KEEP and DURATION parameters when you create a database. The default values are 3650 days for the KEEP parameter and 10 days for the DURATION parameter.
|
||||
The earliest timestamp that you can use when inserting data is equal to the current time on the server minus the value of the KEEP parameter (You can configure the KEEP parameter when you create a database and the default value is 3650 days). The latest timestamp you can use when inserting data depends on the PRECISION parameter (You can configure the PRECISION parameter when you create a database, ms means milliseconds, us means microseconds, ns means nanoseconds, and the default value is milliseconds). If the timestamp precision is milliseconds or microseconds, the latest timestamp is the Unix epoch (January 1st, 1970 at 00:00:00.000 UTC) plus 1000 years, that is, January 1st, 2970 at 00:00:00.000 UTC; If the timestamp precision is nanoseconds, the latest timestamp is the Unix epoch plus 292 years, that is, January 1st, 2262 at 00:00:00.000000000 UTC.
|
||||
|
||||
**Syntax**
|
||||
|
||||
|
@ -81,7 +82,7 @@ One or multiple rows can be inserted into multiple tables in a single SQL statem
|
|||
|
||||
```sql
|
||||
INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
|
||||
d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
|
||||
d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
|
||||
```
|
||||
|
||||
## Automatically Create Table When Inserting
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Select
|
||||
title: Select
|
||||
sidebar_label: Select
|
||||
description: This document describes how to query data in TDengine.
|
||||
---
|
||||
|
||||
## Syntax
|
||||
|
@ -12,6 +13,7 @@ SELECT [DISTINCT] select_list
|
|||
from_clause
|
||||
[WHERE condition]
|
||||
[partition_by_clause]
|
||||
[interp_clause]
|
||||
[window_clause]
|
||||
[group_by_clause]
|
||||
[order_by_clasue]
|
||||
|
@ -52,8 +54,11 @@ window_clause: {
|
|||
| STATE_WINDOW(col)
|
||||
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
|
||||
|
||||
interp_clause:
|
||||
RANGE(ts_val [, ts_val]) EVERY(every_val) FILL(fill_mod_and_val)
|
||||
|
||||
partition_by_clause:
|
||||
PARTITION BY expr [, expr] ...
|
||||
PARTITION BY expr [, expr] ...
|
||||
|
||||
group_by_clause:
|
||||
GROUP BY expr [, expr] ... HAVING condition
|
||||
|
@ -243,19 +248,19 @@ You can also use the NULLS keyword to specify the position of null values. Ascen
|
|||
|
||||
The LIMIT keyword controls the number of results that are displayed. You can also use the OFFSET keyword to specify the result to display first. `LIMIT` and `OFFSET` are executed after `ORDER BY` in the query execution. You can include an offset in a LIMIT clause. For example, LIMIT 5 OFFSET 2 can also be written LIMIT 2, 5. Both of these clauses display the third through the seventh results.
|
||||
|
||||
In a statement that includes a PARTITON BY clause, the LIMIT keyword is performed on each partition, not on the entire set of results.
|
||||
In a statement that includes a PARTITION BY/GROUP BY clause, the LIMIT keyword is performed on each partition/group, not on the entire set of results.
|
||||
|
||||
## SLIMIT
|
||||
|
||||
The SLIMIT keyword is used with a PARTITION BY clause to control the number of partitions that are displayed. You can include an offset in a SLIMIT clause. For example, SLIMIT 5 OFFSET 2 can also be written LIMIT 2, 5. Both of these clauses display the third through the seventh partitions.
|
||||
The SLIMIT keyword is used with a PARTITION BY/GROUP BY clause to control the number of partitions/groups that are displayed. You can include an offset in a SLIMIT clause. For example, SLIMIT 5 OFFSET 2 can also be written as SLIMIT 2, 5. Both of these clauses display the third through the seventh partitions/groups.
|
||||
|
||||
Note: If you include an ORDER BY clause, only one partition can be displayed.
|
||||
Note: If you include an ORDER BY clause, only one partition/group can be displayed.
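A short sketch of the per-partition behaviour described above, assuming the smart-meters schema used elsewhere in this documentation:

```sql
-- At most 5 rows per subtable (partition), skipping the first 2 rows of each.
SELECT * FROM meters PARTITION BY tbname LIMIT 2, 5;

-- At most 3 partitions (subtables) in the result.
SELECT tbname, AVG(current) FROM meters PARTITION BY tbname SLIMIT 3;
```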
|
||||
|
||||
## Special Query
|
||||
|
||||
Some special query functions can be invoked without `FROM` sub-clause.
|
||||
|
||||
## Obtain Current Database
|
||||
### Obtain Current Database
|
||||
|
||||
The following SQL statement returns the current database. If a database has not been specified on login or with the `USE` command, a null value is returned.
|
||||
|
||||
|
@ -270,7 +275,7 @@ SELECT CLIENT_VERSION();
|
|||
SELECT SERVER_VERSION();
|
||||
```
|
||||
|
||||
## Obtain Server Status
|
||||
### Obtain Server Status
|
||||
|
||||
The following SQL statement returns the status of the TDengine server. An integer indicates that the server is running normally. An error code indicates that an error has occurred. This statement can also detect whether a connection pool or third-party tool is connected to TDengine properly. By using this statement, you can ensure that connections in a pool are not lost due to an incorrect heartbeat detection statement.
|
||||
|
||||
|
@ -301,7 +306,7 @@ SELECT TIMEZONE();
|
|||
### Syntax
|
||||
|
||||
```txt
|
||||
WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
|
||||
WHERE (column|tbname) match/MATCH/nmatch/NMATCH _regex_
|
||||
```
|
||||
|
||||
### Specification
|
||||
|
@ -314,11 +319,45 @@ Regular expression filtering is supported only on table names (TBNAME), BINARY t
|
|||
|
||||
A regular expression string cannot exceed 128 bytes. You can configure this value by modifying the maxRegexStringLen parameter on the TDengine Client. The modified value takes effect when the client is restarted.
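For example, a sketch assuming the `meters` supertable with an NCHAR tag `location`, as used elsewhere in this documentation:

```sql
-- Match subtables whose names look like 'd' followed by digits.
SELECT COUNT(*) FROM meters WHERE tbname MATCH '^d[0-9]+$';

-- Exclude rows whose location tag contains 'California'.
SELECT * FROM meters WHERE location NMATCH '.*California.*';
```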
|
||||
|
||||
## CASE Expressions
|
||||
|
||||
### Syntax
|
||||
|
||||
```txt
|
||||
CASE value WHEN compare_value THEN result [WHEN compare_value THEN result ...] [ELSE result] END
|
||||
CASE WHEN condition THEN result [WHEN condition THEN result ...] [ELSE result] END
|
||||
```
|
||||
|
||||
### Description
|
||||
CASE expressions let you use IF ... THEN ... ELSE logic in SQL statements without having to invoke procedures.
|
||||
|
||||
The first CASE syntax returns the `result` for the first `value`=`compare_value` comparison that is true.
|
||||
|
||||
The second syntax returns the `result` for the first `condition` that is true.
|
||||
|
||||
If no comparison or condition is true, the result after ELSE is returned, or NULL if there is no ELSE part.
|
||||
|
||||
The return type of the CASE expression is the result type of the first WHEN THEN part, and the result types of the other WHEN THEN parts and the ELSE part must be convertible to it; otherwise TDengine will report an error.
|
||||
|
||||
### Examples
|
||||
|
||||
A device has three status codes to display its status. The statements are as follows:
|
||||
|
||||
```sql
|
||||
SELECT CASE dev_status WHEN 1 THEN 'Running' WHEN 2 THEN 'Warning' WHEN 3 THEN 'Downtime' ELSE 'Unknown' END FROM dev_table;
|
||||
```
|
||||
|
||||
The following statement calculates the average voltage of the smart meters. When the voltage is less than 200 or greater than 250, the reading is considered erroneous and the value is corrected to 220:
|
||||
|
||||
```sql
|
||||
SELECT AVG(CASE WHEN voltage < 200 or voltage > 250 THEN 220 ELSE voltage END) FROM meters;
|
||||
```
|
||||
|
||||
## JOIN
|
||||
|
||||
TDengine supports natural joins between supertables, between standard tables, and between subqueries. The difference between natural joins and inner joins is that natural joins require that the fields being joined in the supertables or standard tables must have the same name. Data or tag columns must be joined with the equivalent column in another table.
|
||||
TDengine supports inner joins based on the timestamp primary key, that is, the `JOIN` condition must contain the timestamp primary key. As long as this requirement is met, joins can be performed between normal tables, subtables, supertables, and subqueries at will, and there is no limit on the number of tables.
|
||||
|
||||
For standard tables, only the timestamp (primary key) can be used in join operations. For example:
|
||||
For standard tables:
|
||||
|
||||
```sql
|
||||
SELECT *
@ -326,7 +365,7 @@ FROM temp_tb_1 t1, pressure_tb_1 t2
|
|||
WHERE t1.ts = t2.ts
|
||||
```
|
||||
|
||||
For supertables, tags as well as timestamps can be used in join operations. For example:
|
||||
For supertables:
|
||||
|
||||
```sql
|
||||
SELECT *
|
||||
|
@ -334,21 +373,16 @@ FROM temp_stable t1, temp_stable t2
|
|||
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
||||
```
|
||||
|
||||
For sub-table and super table:
|
||||
|
||||
```sql
|
||||
SELECT *
|
||||
FROM temp_ctable t1, temp_stable t2
|
||||
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
||||
```
|
||||
|
||||
Similarly, join operations can be performed on the result sets of multiple subqueries.
|
||||
|
||||
:::note
|
||||
|
||||
The following restrictions apply to JOIN statements:
|
||||
|
||||
- The number of tables or supertables in a single join operation cannot exceed 10.
|
||||
- `FILL` cannot be used in a JOIN statement.
|
||||
- Arithmetic operations cannot be performed on the result sets of join operation.
|
||||
- `GROUP BY` is not allowed on a segment of the tables that participate in a join operation.
|
||||
- `OR` cannot be used in the conditions for join operation
|
||||
- Join operation can be performed only on tags or timestamps. You cannot perform a join operation on data columns.
|
||||
|
||||
:::
|
||||
|
||||
## Nested Query
|
||||
|
||||
Nested query is also called sub query. This means that in a single SQL statement the result of inner query can be used as the data source of the outer query.
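As a simple sketch, assuming the smart-meters schema used elsewhere in this documentation:

```sql
-- The inner query computes a per-subtable average; the outer query takes the overall maximum.
SELECT MAX(avg_current)
FROM (SELECT AVG(current) AS avg_current FROM meters GROUP BY tbname);
```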
@ -0,0 +1,51 @@
|
|||
---
|
||||
sidebar_label: Tag Index
|
||||
title: Tag Index
|
||||
description: Use Tag Index to Improve Query Performance
|
||||
---
|
||||
|
||||
## Introduction
|
||||
|
||||
Prior to TDengine 3.0.3.0, an index is created by default only on the first tag of each supertable, and indexes cannot be created dynamically on other tags. From version 3.0.3.0, you can dynamically create an index on any tag of any type. The index created automatically by TDengine is still valid. Query performance can benefit from indexes if they are used properly.
|
||||
|
||||
## Syntax
|
||||
|
||||
1. The syntax of creating an index
|
||||
|
||||
```sql
|
||||
CREATE INDEX index_name ON tbl_name (tagColName)
|
||||
```
|
||||
|
||||
In the above statement, `index_name` is the name of the index, `tbl_name` is the name of the supertable, and `tagColName` is the name of the tag on which the index is being created. `tagColName` can be of any type supported by TDengine.
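For example, a minimal sketch; the supertable `meters` and the tag `location` are borrowed from the smart-meters example and are assumptions here:

```sql
-- Create an index on the 'location' tag of the supertable 'meters'.
CREATE INDEX idx_meters_location ON meters (location);
```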
|
||||
|
||||
2. The syntax of dropping an index
|
||||
|
||||
```sql
|
||||
DROP INDEX index_name
|
||||
```
|
||||
|
||||
In the above statement, `index_name` is the name of an existing index. If the index doesn't exist, the command fails but has no other impact on the system.
|
||||
|
||||
3. The syntax of showing the indexes in the system
|
||||
|
||||
```sql
|
||||
SELECT * FROM information_schema.INS_INDEXES
|
||||
```
|
||||
|
||||
You can also add filter conditions to limit the results.
|
||||
|
||||
## Detailed Specification
|
||||
|
||||
1. Indexes can improve query performance significantly if they are used properly. The operators supported by tag indexes include `=`, `>`, `>=`, `<`, and `<=`. If you use these operators with tags, indexes can improve query performance significantly; for operators outside this scope, indexes don't help. More operators will be added in future versions.

2. Only one index can be created on each tag; an error is reported if you try to create more than one index on the same tag.

3. An index can be created on only a single tag at a time; you are not allowed to create an index on multiple tags together.

4. The name of each index must be unique across the whole system, regardless of the type of the index, e.g. tag index or SMA index.

5. There is no limit on the number of indexes, but each index adds some burden on the metadata subsystem. Too many indexes may decrease the efficiency of reading or writing metadata and thus degrade system performance, so it's better not to add unnecessary indexes.

6. You cannot create an index on a normal table or a child table.

7. If a tag column has very few unique values, it's better not to create an index on it, because the benefit would be very small.
@ -1,7 +1,7 @@
|
|||
---
|
||||
sidebar_label: Delete Data
|
||||
description: "Delete data from table or Stable"
|
||||
title: Delete Data
|
||||
sidebar_label: Delete Data
|
||||
description: This document describes how to delete data from TDengine.
|
||||
---
|
||||
|
||||
TDengine provides the functionality of deleting data from a table or STable according to specified time range, it can be used to cleanup abnormal data generated due to device failure.
@ -1,12 +1,13 @@
|
|||
---
|
||||
sidebar_label: Functions
|
||||
title: Functions
|
||||
sidebar_label: Functions
|
||||
description: This document describes the standard SQL functions available in TDengine.
|
||||
toc_max_heading_level: 4
|
||||
---
|
||||
|
||||
## Single Row Functions
|
||||
## Scalar Functions
|
||||
|
||||
Single row functions return a result for each row.
|
||||
Scalar functions return one result for each row.
|
||||
|
||||
### Mathematical Functions
|
||||
|
||||
|
@ -433,7 +434,7 @@ TO_ISO8601(expr [, timezone])
|
|||
|
||||
**More explanations**:
|
||||
|
||||
- You can specify a time zone in the following format: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]。 For example, TO_ISO8601(1, "+00:00").
|
||||
- You can specify a time zone in the following format: [z/Z, +/-hhmm, +/-hh, +/-hh:mm]. For example, TO_ISO8601(1, "+00:00").
|
||||
- If the input is a UNIX timestamp, the precision of the returned value is determined by the digits of the input timestamp
|
||||
- If the input is a column of TIMESTAMP type, the precision of the returned value is same as the precision set for the current data base in use
|
||||
|
||||
|
@ -458,12 +459,17 @@ TO_JSON(str_literal)
|
|||
#### TO_UNIXTIMESTAMP
|
||||
|
||||
```sql
|
||||
TO_UNIXTIMESTAMP(expr)
|
||||
TO_UNIXTIMESTAMP(expr [, return_timestamp])
|
||||
|
||||
return_timestamp: {
|
||||
0
|
||||
| 1
|
||||
}
|
||||
```
|
||||
|
||||
**Description**: UNIX timestamp converted from a string of date/time format
|
||||
|
||||
**Return value type**: BIGINT
|
||||
**Return value type**: BIGINT, TIMESTAMP
|
||||
|
||||
**Applicable column types**: VARCHAR and NCHAR
|
||||
|
||||
|
@ -475,6 +481,7 @@ TO_UNIXTIMESTAMP(expr)
|
|||
|
||||
- The input string must be compatible with ISO8601/RFC3339 standard, NULL will be returned if the string can't be converted
|
||||
- The precision of the returned timestamp is same as the precision set for the current data base in use
|
||||
- return_timestamp indicates whether the returned value type is TIMESTAMP or not. If this parameter is set to 1, the function returns TIMESTAMP; otherwise it returns BIGINT. If the parameter is omitted, the default return value type is BIGINT.
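A brief sketch of the two return modes; this assumes scalar functions may be called without a FROM clause, otherwise add a table:

```sql
-- Default: returns a BIGINT UNIX timestamp.
SELECT TO_UNIXTIMESTAMP('2023-01-01T00:00:00+08:00');

-- With return_timestamp = 1: returns a TIMESTAMP value instead.
SELECT TO_UNIXTIMESTAMP('2023-01-01T00:00:00+08:00', 1);
```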
|
||||
|
||||
|
||||
### Time and Date Functions
|
||||
|
@ -532,7 +539,12 @@ TIMEDIFF(expr1, expr2 [, time_unit])
|
|||
#### TIMETRUNCATE
|
||||
|
||||
```sql
|
||||
TIMETRUNCATE(expr, time_unit)
|
||||
TIMETRUNCATE(expr, time_unit [, ignore_timezone])
|
||||
|
||||
ignore_timezone: {
|
||||
0
|
||||
| 1
|
||||
}
|
||||
```
|
||||
|
||||
**Description**: Truncate the input timestamp with unit specified by `time_unit`
|
||||
|
@ -548,7 +560,10 @@ TIMETRUNCATE(expr, time_unit)
|
|||
1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), or 1w (weeks)
|
||||
- The precision of the returned timestamp is same as the precision set for the current data base in use
|
||||
- If the input data is not formatted as a timestamp, the returned value is null.
|
||||
|
||||
- If `1d` is used as `time_unit` to truncate the timestamp, `ignore_timezone` option can be set to indicate if the returned result is affected by client timezone or not.
|
||||
For example, if client timezone is set to UTC+0800, TIMETRUNCATE('2020-01-01 23:00:00', 1d, 0) will return '2020-01-01 08:00:00'.
|
||||
Otherwise, TIMETRUNCATE('2020-01-01 23:00:00', 1d, 1) will return '2020-01-01 00:00:00'.
|
||||
If `ignore_timezone` option is omitted, the default value is set to 1.
|
||||
|
||||
#### TIMEZONE
|
||||
|
||||
|
@ -611,7 +626,7 @@ algo_type: {
|
|||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
**Explanations**:
|
||||
- _p_ is in range [0,100], when _p_ is 0, the result is same as using function MIN; when _p_ is 100, the result is same as function MAX.
|
||||
- `algo_type` can only be input as `default` or `t-digest` Enter `default` to use a histogram-based algorithm. Enter `t-digest` to use the t-digest algorithm to calculate the approximation of the quantile. `default` is used by default.
|
||||
- The approximation result of `t-digest` algorithm is sensitive to input data order. For example, when querying STable with different input data order there might be minor differences in calculated results.
|
||||
|
@ -657,15 +672,15 @@ If you input a specific column, the number of non-null values in the column is r
|
|||
ELAPSED(ts_primary_key [, time_unit])
|
||||
```
|
||||
|
||||
**Description**:`elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calcualted time length within each time window. If it's used without `INTERVAL` caluse, the returned result is the calculated time length within the specified time range. Please be noted that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
|
||||
**Description**: `elapsed` function can be used to calculate the continuous time length in which there is valid data. If it's used with `INTERVAL` clause, the returned result is the calculated time length within each time window. If it's used without `INTERVAL` clause, the returned result is the calculated time length within the specified time range. Please be noted that the return value of `elapsed` is the number of `time_unit` in the calculated time length.
|
||||
|
||||
**Return value type**: Double if the input value is not NULL;
|
||||
|
||||
**Return value type**: TIMESTAMP
|
||||
|
||||
**Applicable tables**: table, STable, outter in nested query
|
||||
**Applicable tables**: table, STable, outer in nested query
|
||||
|
||||
**Explanations**:
|
||||
**Explanations**:
|
||||
- `ts_primary_key` parameter can only be the first column of a table, i.e. timestamp primary key.
|
||||
- The minimum value of `time_unit` is the time precision of the database. If `time_unit` is not specified, the time precision of the database is used as the default time unit. Time unit specified by `time_unit` can be:
|
||||
1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), or 1w (weeks)
|
||||
|
@ -743,9 +758,9 @@ SUM(expr)
|
|||
HYPERLOGLOG(expr)
|
||||
```
|
||||
|
||||
**Description**:
|
||||
**Description**:
|
||||
The cardinal number of a specific column is returned by using hyperloglog algorithm. The benefit of using hyperloglog algorithm is that the memory usage is under control when the data volume is huge.
|
||||
However, when the data volume is very small, the result may be not accurate, it's recommented to use `select count(data) from (select unique(col) as data from table)` in this case.
|
||||
However, when the data volume is very small, the result may be not accurate, it's recommended to use `select count(data) from (select unique(col) as data from table)` in this case.
|
||||
|
||||
**Return value type**: Integer
|
||||
|
||||
|
@ -757,10 +772,10 @@ HYPERLOGLOG(expr)
|
|||
### HISTOGRAM
|
||||
|
||||
```sql
|
||||
HISTOGRAM(expr,bin_type, bin_description, normalized)
|
||||
HISTOGRAM(expr, bin_type, bin_description, normalized)
|
||||
```
|
||||
|
||||
**Description**:Returns count of data points in user-specified ranges.
|
||||
**Description**: Returns count of data points in user-specified ranges.
|
||||
|
||||
**Return value type** If normalized is set to 1, a DOUBLE is returned; otherwise a BIGINT is returned
|
||||
|
||||
|
@ -768,18 +783,18 @@ HISTOGRAM(expr,bin_type, bin_description, normalized)
|
|||
|
||||
**Applicable table types**: table, STable
|
||||
|
||||
**Explanations**:
|
||||
- bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin"。
|
||||
- bin_description: parameter to describe how to generate buckets,can be in the following JSON formats for each bin_type respectively:
|
||||
**Explanations**:
|
||||
- bin_type: parameter to indicate the bucket type, valid inputs are: "user_input", "linear_bin", "log_bin".
|
||||
- bin_description: parameter to describe how to generate buckets; it can be in the following JSON formats for each bin_type respectively:
|
||||
- "user_input": "[1, 3, 5, 7]":
|
||||
User specified bin values.
|
||||
|
||||
- "linear_bin": "{"start": 0.0, "width": 5.0, "count": 5, "infinity": true}"
|
||||
"start" - bin starting point. "width" - bin offset. "count" - number of bins generated. "infinity" - whether to add(-inf, inf)as start/end point in generated set of bins.
|
||||
"start" - bin starting point. "width" - bin offset. "count" - number of bins generated. "infinity" - whether to add (-inf, inf) as start/end point in generated set of bins.
|
||||
The above "linear_bin" descriptor generates a set of bins: [-inf, 0.0, 5.0, 10.0, 15.0, 20.0, +inf].
|
||||
|
||||
- "log_bin": "{"start":1.0, "factor": 2.0, "count": 5, "infinity": true}"
|
||||
"start" - bin starting point. "factor" - exponential factor of bin offset. "count" - number of bins generated. "infinity" - whether to add(-inf, inf)as start/end point in generated range of bins.
|
||||
"start" - bin starting point. "factor" - exponential factor of bin offset. "count" - number of bins generated. "infinity" - whether to add (-inf, inf) as start/end point in generated range of bins.
|
||||
The above "log_bin" descriptor generates a set of bins: [-inf, 1.0, 2.0, 4.0, 8.0, 16.0, +inf].
|
||||
- normalized: setting to 1/0 to turn on/off result normalization. Valid values are 0 or 1.
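For instance, a sketch using the voltage column of the assumed smart-meters schema; the bucket layout is illustrative:

```sql
-- Five linear buckets of width 5 starting at 215, plus (-inf, +inf) edge buckets, raw counts.
SELECT HISTOGRAM(voltage, 'linear_bin', '{"start": 215.0, "width": 5.0, "count": 5, "infinity": true}', 0)
FROM meters;
```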
|
||||
|
||||
|
@ -787,19 +802,23 @@ HISTOGRAM(expr,bin_type, bin_description, normalized)
|
|||
### PERCENTILE
|
||||
|
||||
```sql
|
||||
PERCENTILE(expr, p)
|
||||
PERCENTILE(expr, p [, p1] ...)
|
||||
```
|
||||
|
||||
**Description**: The value whose rank in a specific column matches the specified percentage. If such a value matching the specified percentage doesn't exist in the column, an interpolation value will be returned.
|
||||
|
||||
**Return value type**: DOUBLE
|
||||
**Return value type**: This function takes a minimum of 2 and a maximum of 11 parameters, and can return at most 10 percentiles at a time. If 2 parameters are given, a single percentile is returned and the value type is DOUBLE.
If more than 2 parameters are given, the return value type is a VARCHAR string formatted as a JSON array containing all of the return values.
|
||||
|
||||
**Applicable column types**: Numeric
|
||||
|
||||
**Applicable table types**: table only
|
||||
|
||||
**More explanations**: _p_ is in range [0,100], when _p_ is 0, the result is same as using function MIN; when _p_ is 100, the result is same as function MAX.
|
||||
**More explanations**:
|
||||
|
||||
- _p_ is in range [0,100], when _p_ is 0, the result is same as using function MIN; when _p_ is 100, the result is same as function MAX.
|
||||
- When calculating multiple percentiles of a specific column, a single PERCENTILE function with multiple parameters is advised, as this can largely reduce the query response time.
|
||||
For example, using SELECT percentile(col, 90, 95, 99) FROM table will perform better than SELECT percentile(col, 90), percentile(col, 95), percentile(col, 99) from table.
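As runnable statements, assuming a subtable `d1001` with a `voltage` column (PERCENTILE applies to tables only):

```sql
-- Preferred: one pass computing three percentiles at once.
SELECT PERCENTILE(voltage, 90, 95, 99) FROM d1001;

-- Works, but typically slower: three separate PERCENTILE calls.
SELECT PERCENTILE(voltage, 90), PERCENTILE(voltage, 95), PERCENTILE(voltage, 99) FROM d1001;
```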
|
||||
|
||||
## Selection Functions
|
||||
|
||||
|
@ -848,10 +867,16 @@ FIRST(expr)
|
|||
### INTERP
|
||||
|
||||
```sql
|
||||
INTERP(expr)
|
||||
INTERP(expr [, ignore_null_values])
|
||||
|
||||
ignore_null_values: {
|
||||
0
|
||||
| 1
|
||||
}
|
||||
```
|
||||
|
||||
**Description**: The value that matches the specified timestamp range is returned, if existing; or an interpolation value is returned.
|
||||
**Description**: The value that matches the specified timestamp range is returned, if existing; or an interpolation value is returned. The value of `ignore_null_values` can be 0 or 1, 1 means null values are ignored. The default value of this parameter is 0.
|
||||
|
||||
|
||||
**Return value type**: Same as the column being operated upon
|
||||
|
||||
|
@ -864,11 +889,22 @@ INTERP(expr)
|
|||
- `INTERP` is used to get the value that matches the specified time slice from a column. If no such value exists an interpolation value will be returned based on `FILL` parameter.
|
||||
- The input data of `INTERP` is the value of the specified column and a `where` clause can be used to filter the original data. If no `where` condition is specified then all original data is the input.
|
||||
- `INTERP` must be used along with `RANGE`, `EVERY`, `FILL` keywords.
|
||||
- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1<=timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
|
||||
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY`. Starting from timestamp1, one interpolation is performed for every time interval specified `EVERY` parameter. The parameter `EVERY` must be an integer, with no quotes, with a time unit of: b(nanosecond), u(microsecond), a(millisecond)), s(second), m(minute), h(hour), d(day), or w(week). For example, `EVERY(500a)` will interpolate every 500 milliseconds.
|
||||
- Interpolation is performed based on `FILL` parameter.
|
||||
- `INTERP` can only be used to interpolate in single timeline. So it must be used with `partition by tbname` when it's used on a STable.
|
||||
- Pseudo column `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points(support after version 3.0.1.4).
|
||||
- The output time range of `INTERP` is specified by `RANGE(timestamp1,timestamp2)` parameter, with timestamp1 <= timestamp2. timestamp1 is the starting point of the output time range. timestamp2 is the ending point of the output time range.
|
||||
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY(time_unit)`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `time_unit` parameter. The parameter `time_unit` must be an integer, with no quotes, with a time unit of: a (millisecond), s (second), m (minute), h (hour), d (day), or w (week). For example, `EVERY(500a)` will interpolate every 500 milliseconds.
|
||||
- Interpolation is performed based on the `FILL` parameter. For more information about the FILL clause, see [FILL Clause](../distinguished/#fill-clause).
|
||||
- When only one timestamp value is specified in `RANGE` clause, `INTERP` is used to generate interpolation at this point in time. In this case, `EVERY` clause can be omitted. For example, SELECT INTERP(col) FROM tb RANGE('2023-01-01 00:00:00') FILL(linear).
|
||||
- `INTERP` can be applied to a supertable by interpolating the data of all its child tables sorted by the primary key. It can also be used with `partition by tbname` when applied to a supertable to generate interpolation on each single timeline.
|
||||
- Pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported since version 3.0.2.0).
|
||||
- Pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether the results are original records or data points generated by the interpolation algorithm (supported since version 3.0.3.0).
|
||||
|
||||
**Example**
|
||||
|
||||
- We use the smart meters example used in this documentation to illustrate how to use the INTERP function.
|
||||
- We want to downsample every 1 hour and use a linear fill for missing values. Note the order in which the "partition by" clause and the "range", "every" and "fill" parameters are used.
|
||||
|
||||
```sql
|
||||
SELECT _irowts,INTERP(current) FROM test.meters PARTITION BY TBNAME RANGE('2017-07-22 00:00:00','2017-07-24 12:25:00') EVERY(1h) FILL(LINEAR)
|
||||
```
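As a further sketch, the single-point form of `RANGE` can be combined with the pseudocolumns described above; the timestamp below is illustrative, and `_isfilled` requires version 3.0.3.0 or later:

```sql
-- interpolate a single point in time; EVERY can be omitted in this case
SELECT _irowts, _isfilled, INTERP(current)
FROM test.meters
PARTITION BY TBNAME
RANGE('2017-07-22 12:00:00')
FILL(LINEAR);
```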
|
||||
|
||||
### LAST
|
||||
|
||||
|
@ -963,19 +999,14 @@ SAMPLE(expr, k)
|
|||
|
||||
**Description**: _k_ sampling values of a specific column. The applicable range of _k_ is [1,1000].
|
||||
|
||||
**Return value type**: Same as the column being operated plus the associated timestamp
|
||||
**Return value type**: Same as the column being operated upon
|
||||
|
||||
**Applicable data types**: Any data type except for tags of STable
|
||||
**Applicable data types**: Any data type
|
||||
|
||||
**Applicable nested query**: Inner query and Outer query
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**More explanations**:
|
||||
|
||||
- This function cannot be used in expression calculation.
|
||||
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
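A minimal usage sketch, assuming the smart meters supertable used elsewhere in this documentation:

```sql
-- 5 random sample values of current from each subtable of the meters supertable
SELECT SAMPLE(current, 5) FROM meters PARTITION BY tbname;
```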
|
||||
|
||||
|
||||
### TAIL
|
||||
|
||||
|
@ -1020,11 +1051,11 @@ TOP(expr, k)
|
|||
UNIQUE(expr)
|
||||
```
|
||||
|
||||
**Description**: The values that occur the first time in the specified column. The effect is similar to `distinct` keyword, but it can also be used to match tags or timestamp. The first occurrence of a timestamp or tag is used.
|
||||
**Description**: The values that occur the first time in the specified column. The effect is similar to `distinct` keyword.
|
||||
|
||||
**Return value type**: Same as the data type of the column being operated upon
|
||||
|
||||
**Applicable column types**: Any data types except for timestamp
|
||||
**Applicable column types**: Any data types
|
||||
|
||||
**Applicable table types**: table, STable
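A short sketch, assuming a hypothetical subtable `d1001` with a `voltage` column:

```sql
-- first occurrence of each distinct voltage value
SELECT UNIQUE(voltage) FROM d1001;
```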
|
||||
|
||||
|
@ -1053,7 +1084,6 @@ CSUM(expr)
|
|||
|
||||
- Arithmetic operation can't be performed on the result of `csum` function
|
||||
- Can only be used with aggregate functions. This function can be used with supertables and standard tables.
|
||||
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
|
||||
|
||||
|
||||
### DERIVATIVE
|
||||
|
@ -1077,8 +1107,7 @@ ignore_negative: {
|
|||
|
||||
**More explanation**:
|
||||
|
||||
- It can be used together with `PARTITION BY tbname` against a STable.
|
||||
- It can be used together with a selected column. For example: select \_rowts, DERIVATIVE() from。
|
||||
- It can be used together with a selected column. For example: select \_rowts, DERIVATIVE() from.
|
||||
|
||||
### DIFF
|
||||
|
||||
|
@ -1102,7 +1131,7 @@ ignore_negative: {
|
|||
**More explanation**:
|
||||
|
||||
- The number of result rows is the number of rows subtracted by one, no output for the first row
|
||||
- It can be used together with a selected column. For example: select \_rowts, DIFF() from。
|
||||
- It can be used together with a selected column. For example: select \_rowts, DIFF() from.
|
||||
|
||||
|
||||
### IRATE
|
||||
|
@ -1140,7 +1169,6 @@ MAVG(expr, k)
|
|||
|
||||
- Arithmetic operation can't be performed on the result of `MAVG`.
|
||||
- Can only be used with data columns, can't be used with tags.
- Can't be used with aggregate functions.
|
||||
- Must be used with `PARTITION BY tbname` when it's used on a STable to force the result on each single timeline
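A brief usage sketch, assuming the meters supertable from this documentation:

```sql
-- moving average of current over a 10-row window, computed per subtable
SELECT MAVG(current, 10) FROM meters PARTITION BY tbname;
```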
|
||||
|
||||
|
||||
### STATECOUNT
|
||||
|
@ -1154,7 +1182,7 @@ STATECOUNT(expr, oper, val)
|
|||
**Applicable parameter values**:
|
||||
|
||||
- oper: Can be one of `'LT'` (lower than), `'GT'` (greater than), `'LE'` (lower than or equal to), `'GE'` (greater than or equal to), `'NE'` (not equal to), or `'EQ'` (equal to). The value is case insensitive and must be in quotes.
|
||||
- val : Numeric types
|
||||
- val: Numeric types
|
||||
|
||||
**Return value type**: Integer
|
||||
|
||||
|
@ -1166,7 +1194,6 @@ STATECOUNT(expr, oper, val)
|
|||
|
||||
**More explanations**:
|
||||
|
||||
- Must be used together with `PARTITION BY tbname` when it's used on a STable to force the result into each single timeline
|
||||
- Can't be used with window operation, like interval/state_window/session_window
|
||||
|
||||
|
||||
|
@ -1181,7 +1208,7 @@ STATEDURATION(expr, oper, val, unit)
|
|||
**Applicable parameter values**:
|
||||
|
||||
- oper: Can be one of `'LT'` (lower than), `'GT'` (greater than), `'LE'` (lower than or equal to), `'GE'` (greater than or equal to), `'NE'` (not equal to), or `'EQ'` (equal to). The value is case insensitive and must be in quotes.
|
||||
- val : Numeric types
|
||||
- val: Numeric types
|
||||
- unit: The unit of time interval. Enter one of the following options: 1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), or 1w (weeks). If you do not enter a unit of time, the precision of the current database is used by default.
|
||||
|
||||
**Return value type**: Integer
|
||||
|
@ -1194,7 +1221,6 @@ STATEDURATION(expr, oper, val, unit)
|
|||
|
||||
**More explanations**:
|
||||
|
||||
- Must be used together with `PARTITION BY tbname` when it's used on a STable to force the result into each single timeline
|
||||
- Can't be used with window operation, like interval/state_window/session_window
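A sketch of how the parameters fit together, assuming a hypothetical subtable `d1001` with a `voltage` column; the threshold and unit are illustrative:

```sql
-- duration (in seconds, because of the 1s unit) for which voltage has continuously stayed above 205
SELECT STATEDURATION(voltage, 'GT', 205, 1s) FROM d1001;
```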
|
||||
|
||||
|
||||
|
@ -1212,7 +1238,6 @@ TWA(expr)
|
|||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
- Must be used together with `PARTITION BY tbname` to force the result into each single timeline.
|
||||
|
||||
|
||||
## System Information Functions
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Time-Series Extensions
|
||||
title: Time-Series Extensions
|
||||
sidebar_label: Time-Series Extensions
|
||||
description: This document describes the extended functions specific to time-series data processing available in TDengine.
|
||||
---
|
||||
|
||||
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.
|
||||
|
@ -20,7 +21,7 @@ part_list can be any scalar expression, such as a column, constant, scalar funct
|
|||
A PARTITION BY clause is processed as follows:
|
||||
|
||||
- The PARTITION BY clause must occur after the WHERE clause
|
||||
- The PARTITION BY caluse partitions the data according to the specified dimentions, then perform computation on each partition. The performed computation is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
|
||||
- The PARTITION BY clause partitions the data according to the specified dimensions, then performs computation on each partition. The computation performed is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
|
||||
- The PARTITION BY clause can be used together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value:
|
||||
|
||||
```sql
|
||||
|
@ -31,15 +32,15 @@ The most common usage of PARTITION BY is partitioning the data in subtables by t
|
|||
|
||||
## Windowed Queries
|
||||
|
||||
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every seconds, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window. The query syntax is as follows:
|
||||
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are four kinds of windows: time window, status window, session window, and event window. There are two kinds of time windows: sliding window and flip time/tumbling window. The syntax of the window clause is as follows:
|
||||
|
||||
```sql
|
||||
SELECT select_list FROM tb_name
|
||||
[WHERE where_condition]
|
||||
[SESSION(ts_col, tol_val)]
|
||||
[STATE_WINDOW(col)]
|
||||
[INTERVAL(interval [, offset]) [SLIDING sliding]]
|
||||
[FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
|
||||
window_clause: {
|
||||
SESSION(ts_col, tol_val)
|
||||
| STATE_WINDOW(col)
|
||||
| INTERVAL(interval [, offset]) [SLIDING sliding] [FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
|
||||
| EVENT_WINDOW START WITH start_trigger_condition END WITH end_trigger_condition
|
||||
}
|
||||
```
|
||||
|
||||
The following restrictions apply:
|
||||
|
@ -68,11 +69,22 @@ These pseudocolumns occur after the aggregation clause.
|
|||
`FILL` clause is used to specify how to fill when there is data missing in any window, including:
|
||||
|
||||
1. NONE: No fill (the default fill mode)
|
||||
2. VALUE:Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)` Note: The value filled depends on the data type. For example, if you run FILL(VALUE 1.23) on an integer column, the value 1 is filled.
|
||||
3. PREV:Fill with the previous non-NULL value, `FILL(PREV)`
|
||||
4. NULL:Fill with NULL, `FILL(NULL)`
|
||||
5. LINEAR:Fill with the closest non-NULL value, `FILL(LINEAR)`
|
||||
6. NEXT:Fill with the next non-NULL value, `FILL(NEXT)`
|
||||
2. VALUE: Fill with a fixed value, which must be specified together, for example `FILL(VALUE, 1.23)`. Note: the value filled depends on the data type. For example, if you run FILL(VALUE, 1.23) on an integer column, the value 1 is filled.
|
||||
3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
|
||||
4. NULL: Fill with NULL, `FILL(NULL)`
|
||||
5. LINEAR: Fill with the closest non-NULL value, `FILL(LINEAR)`
|
||||
6. NEXT: Fill with the next non-NULL value, `FILL(NEXT)`
|
||||
|
||||
In the filling modes above, except for `NONE`, the `fill` clause is ignored if there is no data in the defined time range, i.e. no data is filled and the query result is empty. This behavior is reasonable for the `PREV`, `NEXT`, and `LINEAR` modes, because filling can't be performed without any data. For the `NULL` and `VALUE` modes, however, filling can be performed even without data, and whether to fill depends on the user's application. To support this forced filling behavior without breaking the behavior of the existing filling modes, TDengine added two new filling modes in version 3.0.3.0.
|
||||
|
||||
1. NULL_F: Fill `NULL` by force
|
||||
2. VALUE_F: Fill `VALUE` by force
|
||||
|
||||
The detailed behaviors of `NULL`, `NULL_F`, `VALUE`, and `VALUE_F` are described below:
|
||||
|
||||
- When used with `INTERVAL`: `NULL_F` and `VALUE_F` fill by force; `NULL` and `VALUE` don't fill by force. The behavior of each filling mode is exactly the same as what its name suggests.
|
||||
- When used with `INTERVAL` in stream processing: `NULL_F` and `NULL` are the same, i.e. they don't fill by force; `VALUE_F` and `VALUE` are the same, i.e. they don't fill by force. In other words, there is no forced filling in stream processing.
|
||||
- When used with `INTERP`: `NULL` and `NULL_F` are the same, i.e. they fill by force; `VALUE` and `VALUE_F` are the same, i.e. they fill by force. In other words, `INTERP` always fills by force.
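A hedged sketch of the difference, assuming the meters supertable and a time range that happens to contain no data (table name, columns, and timestamps are illustrative):

```sql
-- FILL(NULL) does not fill by force: the result set is empty for an empty range
SELECT _wstart, AVG(current) FROM meters
  WHERE ts >= '2017-07-30 00:00:00' AND ts < '2017-07-30 01:00:00'
  INTERVAL(10m) FILL(NULL);

-- FILL(NULL_F) fills by force: one NULL row is returned for every window in the range
SELECT _wstart, AVG(current) FROM meters
  WHERE ts >= '2017-07-30 00:00:00' AND ts < '2017-07-30 01:00:00'
  INTERVAL(10m) FILL(NULL_F);
```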
|
||||
|
||||
:::info
|
||||
|
||||
|
@ -86,7 +98,7 @@ These pseudocolumns occur after the aggregation clause.
|
|||
|
||||
There are two kinds of time windows: sliding window and flip time/tumbling window.
|
||||
|
||||
The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure blow, [t0s, t0e] ,[t1s , t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
|
||||
The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
|
||||
|
||||

|
||||
|
||||
|
@ -104,13 +116,13 @@ SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
|
|||
|
||||
When using time windows, note the following:
|
||||
|
||||
- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset cannot be smaler than the interval. You can use SLIDING to specify the length of time that the window moves forward.
|
||||
- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset must be smaller than the interval. You can use SLIDING to specify the length of time that the window moves forward.
|
||||
Please note that the `timezone` parameter should be configured to the same value in the `taos.cfg` configuration file on both the client side and the server side.
|
||||
- The result set is in ascending order of timestamp when you aggregate by time window.
|
||||
|
||||
### State Window
|
||||
|
||||
In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two state windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12].
|
||||
In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two state windows according to status, [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12].
|
||||
|
||||

|
||||
|
||||
|
@ -126,9 +138,15 @@ Only care about the information of the status window when the status is 2. For e
|
|||
SELECT * FROM (SELECT COUNT(*) AS cnt, FIRST(ts) AS fst, status FROM temp_tb_1 STATE_WINDOW(status)) t WHERE status = 2;
|
||||
```
|
||||
|
||||
TDengine also supports using CASE expressions as the state value. This allows the start of a state to be triggered by meeting one condition and the end of that state by meeting another condition. For example, if the normal voltage range of a smart meter is 205V to 235V, you can judge whether the circuit is normal by monitoring the voltage.
|
||||
|
||||
```sql
|
||||
SELECT tbname, _wstart, CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END status FROM meters PARTITION BY tbname STATE_WINDOW(CASE WHEN voltage >= 205 and voltage <= 235 THEN 1 ELSE 0 END);
|
||||
```
|
||||
|
||||
### Session Window
|
||||
|
||||
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30] because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
|
||||
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
|
||||
|
||||

|
||||
|
||||
|
@ -139,9 +157,29 @@ If the time interval between two continuous rows are within the time interval sp
|
|||
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
|
||||
```
|
||||
|
||||
### Event Window
|
||||
|
||||
An event window is determined by a window start condition and a window close condition. The window starts when `start_trigger_condition` evaluates to true and closes when `end_trigger_condition` evaluates to true. `start_trigger_condition` and `end_trigger_condition` can be any conditional expressions supported by TDengine and can include multiple columns.
|
||||
|
||||
An event window may contain only a single row if that row meets both the `start_trigger_condition` and the `end_trigger_condition`.
|
||||
|
||||
A window is treated as invalid or non-existent if its `end_trigger_condition` can never be met; a window that can't be closed produces no output.
|
||||
|
||||
If an event window query is performed on a supertable, TDengine consolidates the data of all child tables into a single timeline and then performs the event window query on it.
|
||||
|
||||
To perform an event window query on the result set of a subquery, the subquery's result set must be ordered by timestamp and include the timestamp column.
|
||||
|
||||
For example, the diagram below illustrates the event windows generated by the query below:
|
||||
|
||||
```sql
|
||||
select _wstart, _wend, count(*) from t event_window start with c1 > 0 end with c2 < 10
|
||||
```
|
||||
|
||||

|
||||
|
||||
### Examples
|
||||
|
||||
A table of intelligent meters can be created by the SQL statement below:
|
||||
A table of intelligent meters can be created by the SQL statement below:
|
||||
|
||||
```
|
||||
CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Data Subscription
|
||||
title: Data Subscription
|
||||
sidebar_label: Data Subscription
|
||||
description: This document describes the SQL statements related to the data subscription component of TDengine.
|
||||
---
|
||||
|
||||
The information in this document is related to the TDengine data subscription feature.
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Stream Processing
|
||||
title: Stream Processing
|
||||
sidebar_label: Stream Processing
|
||||
description: This document describes the SQL statements related to the stream processing component of TDengine.
|
||||
---
|
||||
|
||||
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. Stream processing components like Kafka, Flink, and Spark are often deployed alongside a time-series database to handle these operations, increasing system complexity and maintenance costs.
|
||||
|
@ -10,10 +11,13 @@ Because stream processing is built in to TDengine, you are no longer reliant on
|
|||
## Create a Stream
|
||||
|
||||
```sql
|
||||
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
|
||||
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name SUBTABLE(expression) AS subquery
|
||||
stream_options: {
|
||||
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
||||
WATERMARK time
|
||||
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
||||
WATERMARK time
|
||||
IGNORE EXPIRED [0|1]
|
||||
DELETE_MARK time
|
||||
FILL_HISTORY [0|1]
|
||||
}
|
||||
|
||||
```
|
||||
|
@ -30,6 +34,8 @@ subquery: SELECT [DISTINCT] select_list
|
|||
|
||||
Session windows, state windows, and sliding windows are supported. When you configure a session or state window for a supertable, you must use PARTITION BY TBNAME.
|
||||
|
||||
The SUBTABLE clause defines the naming rules for auto-created subtables; see the Partitions of Stream section below for details.
|
||||
|
||||
```sql
|
||||
window_clause: {
|
||||
SESSION(ts_col, tol_val)
|
||||
|
@ -47,6 +53,47 @@ CREATE STREAM avg_vol_s INTO avg_vol AS
|
|||
SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
|
||||
```
|
||||
|
||||
## Partitions of Stream
|
||||
|
||||
A stream can process data in multiple partitions. Partition rules are defined by the PARTITION BY clause in stream processing. Each partition has its own timeline and windows, is processed separately, and is written into a different subtable of the target supertable.
|
||||
|
||||
If a stream is created without a PARTITION BY clause, all data is written into one subtable.
|
||||
|
||||
If a stream is created with a PARTITION BY clause but without a SUBTABLE clause, the subtable for each partition is given a random name.
|
||||
|
||||
If a stream is created with both a PARTITION BY clause and a SUBTABLE clause, the subtable name for each partition is calculated according to the SUBTABLE clause. For example:
|
||||
|
||||
```sql
|
||||
CREATE STREAM avg_vol_s INTO avg_vol SUBTABLE(CONCAT('new-', tname)) AS SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname tname INTERVAL(1m);
|
||||
```
|
||||
|
||||
In the PARTITION BY clause, 'tbname', which represents the subtable name of the source supertable, is given the alias 'tname'. The alias 'tname' is then used in the SUBTABLE clause, so each auto-created subtable is named by concatenating 'new-' with the source subtable name. Other expressions are also allowed in the SUBTABLE clause, but the output type must be varchar.
|
||||
|
||||
If the output length exceeds the TDengine limit of 192 bytes, the name is truncated. If the generated name is already used by another table, creating the new subtable and writing to it will fail.
|
||||
|
||||
## Filling history data
|
||||
|
||||
Normally a stream does not process data that was already written, or is being written, into the source table at the time the stream is created. Adding FILL_HISTORY 1 as a stream option when creating the stream allows it to process data written both before and while the stream is being created. For example:
|
||||
|
||||
```sql
|
||||
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 interval(10s)
|
||||
```
|
||||
|
||||
By combining the fill_history option with a where clause, a stream can process data in a specific time range. For example, only process data after a past point in time (in this case, 2020-01-30).
|
||||
|
||||
```sql
|
||||
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 where ts > '2020-01-30' interval(10s)
|
||||
```
|
||||
|
||||
As another example, the stream below processes only data starting from some past time and ending at some future time.
|
||||
|
||||
```sql
|
||||
create stream if not exists s1 fill_history 1 into st1 as select count(*) from t1 where ts > '2020-01-30' and ts < '2023-01-01' interval(10s)
|
||||
```
|
||||
|
||||
If a stream is completely outdated and you no longer want it to monitor or process data, it can be manually dropped; the data it has already written is still kept.
|
||||
|
||||
|
||||
## Delete a Stream
|
||||
|
||||
```sql
|
||||
|
@ -65,7 +112,7 @@ SHOW STREAMS;
|
|||
|
||||
When you create a stream, you can use the TRIGGER parameter to specify triggering conditions for it.
|
||||
|
||||
For non-windowed processing, triggering occurs in real time. For windowed processing, there are three methods of triggering:
|
||||
For non-windowed processing, triggering occurs in real time. For windowed processing, there are three methods of triggering; the default is AT_ONCE:
|
||||
|
||||
1. AT_ONCE: triggers on write
|
||||
|
||||
|
@ -97,3 +144,27 @@ The data in expired windows is tagged as expired. TDengine stream processing pro
|
|||
2. Recalculate the data. In this method, all data in the window is reobtained from the database and recalculated. The latest results are then returned.
|
||||
|
||||
In both of these methods, configuring the watermark is essential for obtaining accurate results (if expired data is dropped) and avoiding repeated triggers that affect system performance (if expired data is recalculated).
|
||||
|
||||
## Supported functions
|
||||
|
||||
All [scalar functions](../function/#scalar-functions) are available in stream processing. All [Aggregate functions](../function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the following:
|
||||
- [leastsquares](../function/#leastsquares)
|
||||
- [percentile](../function/#percentile)
|
||||
- [top](../function/#top)
|
||||
- [bottom](../function/#bottom)
|
||||
- [elapsed](../function/#elapsed)
|
||||
- [interp](../function/#interp)
|
||||
- [derivative](../function/#derivative)
|
||||
- [irate](../function/#irate)
|
||||
- [twa](../function/#twa)
|
||||
- [histogram](../function/#histogram)
|
||||
- [diff](../function/#diff)
|
||||
- [statecount](../function/#statecount)
|
||||
- [stateduration](../function/#stateduration)
|
||||
- [csum](../function/#csum)
|
||||
- [mavg](../function/#mavg)
|
||||
- [sample](../function/#sample)
|
||||
- [tail](../function/#tail)
|
||||
- [unique](../function/#unique)
|
||||
- [mode](../function/#mode)
|
||||
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Operators
|
||||
title: Operators
|
||||
sidebar_label: Operators
|
||||
description: This document describes the SQL operators available in TDengine.
|
||||
---
|
||||
|
||||
## Arithmetic Operators
|
||||
|
@ -38,7 +39,7 @@ TDengine supports the `UNION` and `UNION ALL` operations. UNION ALL collects all
|
|||
| 3 | \>, < | All types except BLOB, MEDIUMBLOB, and JSON | Greater than and less than |
|
||||
| 4 | \>=, <= | All types except BLOB, MEDIUMBLOB, and JSON | Greater than or equal to and less than or equal to |
|
||||
| 5 | IS [NOT] NULL | All types | Indicates whether the value is null |
|
||||
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, and JSON | Closed interval comparison |
|
||||
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, JSON and GEOMETRY | Closed interval comparison |
|
||||
| 7 | IN | All types except BLOB, MEDIUMBLOB, and JSON; the primary key (timestamp) is also not supported | Equal to any value in the list |
|
||||
| 8 | LIKE | BINARY, NCHAR, and VARCHAR | Wildcard match |
|
||||
| 9 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: JSON Type
|
||||
title: JSON Type
|
||||
sidebar_label: JSON Type
|
||||
description: This document describes the JSON data type in TDengine.
|
||||
---
|
||||
|
||||
|
||||
|
@ -54,7 +55,7 @@ title: JSON Type
|
|||
|
||||
4. Tag Operations
|
||||
|
||||
The value of a JSON tag can be altered. Please note that the full JSON will be overriden when doing this.
|
||||
The value of a JSON tag can be altered. Please note that the full JSON will be overridden when doing this.
|
||||
|
||||
The name of a JSON tag can be altered.
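For instance, a sketch of overwriting the full JSON value of a tag (the table name `jt1` and tag name `info` are hypothetical):

```sql
-- the entire JSON value is replaced, not merged
ALTER TABLE jt1 SET TAG info = '{"k1": "v1", "k2": 7}';
```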
|
||||
|
||||
|
@ -66,7 +67,7 @@ title: JSON Type
|
|||
|
||||
- The maximum length of keys in JSON is 256 bytes, and key must be printable ASCII characters. The maximum total length of a JSON is 4,096 bytes.
|
||||
|
||||
- JSON format:
|
||||
- JSON format:
|
||||
|
||||
- The input string for JSON can be empty, i.e. "", "\t", or NULL, but it can't be non-NULL string, bool or array.
|
||||
- An object can be {}, in which case the entire JSON is empty. A key can be "", in which case it is ignored.
|
||||
|
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Escape Characters
|
||||
description: This document describes the usage of escape characters in TDengine.
|
||||
---
|
||||
|
||||
## Escape Characters
|
||||
|
@ -19,7 +20,7 @@ title: Escape Characters
|
|||
|
||||
1. If there are escape characters in identifiers (database name, table name, column name)
|
||||
- Identifier without ``: An error is returned because an identifier must consist of digits, ASCII characters, or underscores and can't start with a digit
|
||||
- Identifier quoted with ``: Original content is kept, no escaping
|
||||
- Identifier quoted with ``: Original content is kept, no escaping
|
||||
2. If there are escape characters in values
|
||||
- The escape characters will be escaped as the above table. If the escape character doesn't match any supported one, the escape character "\" will be ignored.
|
||||
- "%" and "\_" are used as wildcards in `like`. `\%` and `\_` should be used to represent literal "%" and "\_" in `like`,. If `\%` and `\_` are used out of `like` context, the evaluation result is "`\%`"and "`\_`", instead of "%" and "\_".
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Name and Size Limits
|
||||
title: Name and Size Limits
|
||||
sidebar_label: Name and Size Limits
|
||||
description: This document describes the name and size limits in TDengine.
|
||||
---
|
||||
|
||||
## Naming Rules
|
||||
|
@ -23,9 +24,9 @@ The following characters cannot occur in a password: single quotation marks ('),
|
|||
|
||||
## General Limits
|
||||
|
||||
- Maximum length of database name is 32 bytes
|
||||
- Maximum length of database name is 64 bytes
|
||||
- Maximum length of table name is 192 bytes, excluding the database name prefix and the separator.
|
||||
- Maximum length of each data row is 48K bytes. Note that the upper limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
|
||||
- Maximum length of each data row is 48K (64K since version 3.0.5.0) bytes. Note that the upper limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
|
||||
- The maximum length of a column name is 64 bytes.
|
||||
- Maximum number of columns is 4096. There must be at least 2 columns, and the first column must be timestamp.
|
||||
- The maximum length of a tag name is 64 bytes
|
||||
|
@ -35,7 +36,7 @@ The following characters cannot occur in a password: single quotation marks ('),
|
|||
- Maximum numbers of databases, STables, tables are dependent only on the system resources.
|
||||
- The number of replicas can only be 1 or 3.
|
||||
- The maximum length of a username is 23 bytes.
|
||||
- The maximum length of a password is 15 bytes.
|
||||
- The maximum length of a password is 31 bytes.
|
||||
- The maximum number of rows depends on system resources.
|
||||
- The maximum number of vnodes in a database is 1024.
|
||||
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Reserved Keywords
|
||||
title: Reserved Keywords
|
||||
sidebar_label: Reserved Keywords
|
||||
description: This document describes the reserved keywords in TDengine that cannot be used in object names.
|
||||
---
|
||||
|
||||
## Keyword List
|
||||
|
@ -17,6 +18,7 @@ The following list shows all reserved keywords:
|
|||
- ADD
|
||||
- AFTER
|
||||
- AGGREGATE
|
||||
- ALIVE
|
||||
- ALL
|
||||
- ALTER
|
||||
- ANALYZE
|
||||
|
@ -332,8 +334,6 @@ The following list shows all reserved keywords:
|
|||
- WAL_LEVEL
|
||||
- WAL_RETENTION_PERIOD
|
||||
- WAL_RETENTION_SIZE
|
||||
- WAL_ROLL_PERIOD
|
||||
- WAL_SEGMENT_SIZE
|
||||
- WATERMARK
|
||||
- WHERE
|
||||
- WINDOW_CLOSE
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Cluster
|
||||
title: Cluster
|
||||
sidebar_label: Cluster
|
||||
description: This document describes the SQL statements related to cluster management in TDengine.
|
||||
---
|
||||
|
||||
The physical entities that form TDengine clusters are known as data nodes (dnodes). Each dnode is a process running on the operating system of the physical machine. Dnodes can contain virtual nodes (vnodes), which store time-series data. Virtual nodes are formed into vgroups, which have 1 or 3 vnodes depending on the replica setting. If you want to enable replication on your cluster, it must contain at least three nodes. Dnodes can also contain management nodes (mnodes). Each cluster has up to three mnodes. Finally, dnodes can contain query nodes (qnodes), which compute time-series data, thus separating compute from storage. A single dnode can contain a vnode, qnode, and mnode.
|
||||
|
|
|
@ -1,6 +1,7 @@
|
|||
---
|
||||
sidebar_label: Metadata
|
||||
title: Information_Schema Database
|
||||
sidebar_label: Metadata
|
||||
description: This document describes how to use the INFORMATION_SCHEMA database in TDengine.
|
||||
---
|
||||
|
||||
TDengine includes a built-in database named `INFORMATION_SCHEMA` to provide access to database metadata, system information, and status information. This information includes database names, table names, and currently running SQL statements. All information related to TDengine maintenance is stored in this database. It contains several read-only tables. These tables are more accurately described as views, and they do not correspond to specific files. You can query these tables but cannot write data to them. The INFORMATION_SCHEMA database is intended to provide a unified method for SHOW commands to access data. However, using SELECT ... FROM INFORMATION_SCHEMA.tablename offers several advantages over SHOW commands:
|
||||
|
@ -27,47 +28,47 @@ This document introduces the tables of INFORMATION_SCHEMA and their structure.
|
|||
|
||||
Provides information about dnodes. Similar to SHOW DNODES.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------ | ------------------------- |
|
||||
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
|
||||
| 3 | status | BINARY(10) | Current status |
|
||||
| 4 | note | BINARY(256) | Reason for going offline or other information |
|
||||
| 5 | id | SMALLINT | Dnode ID |
|
||||
| 6 | endpoint | BINARY(134) | Dnode endpoint |
|
||||
| 7 | create | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
|
||||
| 3 | status | BINARY(10) | Current status |
|
||||
| 4 | note | BINARY(256) | Reason for going offline or other information |
|
||||
| 5 | id | SMALLINT | Dnode ID |
|
||||
| 6 | endpoint | BINARY(134) | Dnode endpoint |
|
||||
| 7 | create | TIMESTAMP | Creation time |
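For example, because `vnodes` is a keyword, it must be escaped with backticks when selected by name (a minimal sketch):

```sql
SELECT `vnodes`, support_vnodes, status, endpoint FROM information_schema.ins_dnodes;
```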
|
||||
|
||||
## INS_MNODES
|
||||
|
||||
Provides information about mnodes. Similar to SHOW MNODES.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | ------------------ |
|
||||
| 1 | id | SMALLINT | Mnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Mnode endpoint |
|
||||
| 3 | role | BINARY(10) | Current role |
|
||||
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ------------------------------------------ |
|
||||
| 1 | id | SMALLINT | Mnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Mnode endpoint |
|
||||
| 3 | role | BINARY(10) | Current role |
|
||||
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_QNODES
|
||||
|
||||
Provides information about qnodes. Similar to SHOW QNODES.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | ------------ |
|
||||
| 1 | id | SMALLINT | Qnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Qnode endpoint |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | --------------- |
|
||||
| 1 | id | SMALLINT | Qnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Qnode endpoint |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_CLUSTER
|
||||
|
||||
Provides information about the cluster.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | ---------- |
|
||||
| 1 | id | BIGINT | Cluster ID |
|
||||
| 2 | name | BINARY(134) | Cluster name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | --------------- |
|
||||
| 1 | id | BIGINT | Cluster ID |
|
||||
| 2 | name | BINARY(134) | Cluster name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_DATABASES
|
||||
|
||||
|
@ -80,7 +81,7 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
|
|||
| 3 | ntables | INT | Number of standard tables and subtables (not including supertables) |
|
||||
| 4 | vgroups | INT | Number of vgroups. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 6 | replica | INT | Number of replicas. It should be noted that `replica` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 7 | strict | BINARY(3) | Strong consistency. It should be noted that `strict` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 7 | strict | BINARY(4) | Obsolete |
|
||||
| 8 | duration | INT | Duration for storage of single files. It should be noted that `duration` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | keep | INT | Data retention period. It should be noted that `keep` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 10 | buffer | INT | Write cache size per vnode, in MB. It should be noted that `buffer` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
@ -99,183 +100,200 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
|
|||
| 23 | wal_fsync_period | INT | Interval at which WAL is written to disk. It should be noted that `wal_fsync_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 24 | wal_retention_period | INT | WAL retention period. It should be noted that `wal_retention_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 25 | wal_retention_size | INT | Maximum WAL size. It should be noted that `wal_retention_size` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 26 | wal_roll_period | INT | WAL rotation period. It should be noted that `wal_roll_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 27 | wal_segment_size | BIGINT | WAL file size. It should be noted that `wal_segment_size` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 28 | stt_trigger | SMALLINT | The threshold for number of files to trigger file merging. It should be noted that `stt_trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 29 | table_prefix | SMALLINT | The prefix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_prefix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 30 | table_suffix | SMALLINT | The suffix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_suffix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 31 | tsdb_pagesize | INT | The page size for internal storage engine, its unit is KB. It should be noted that `tsdb_pagesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 26 | stt_trigger | SMALLINT | The threshold for number of files to trigger file merging. It should be noted that `stt_trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 27 | table_prefix | SMALLINT | The prefix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_prefix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 28 | table_suffix | SMALLINT | The suffix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_suffix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 29 | tsdb_pagesize | INT | The page size for internal storage engine, its unit is KB. It should be noted that `tsdb_pagesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
||||
## INS_FUNCTIONS
|
||||
|
||||
Provides information about user-defined functions.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | -------------- |
|
||||
| 1 | name | BINARY(64) | Function name |
|
||||
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | output_type | BINARY(31) | Output data type |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
| 6 | code_len | INT | Length of the source code |
|
||||
| 7 | bufsize | INT | Buffer size |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | name | BINARY(64) | Function name |
|
||||
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | output_type | BINARY(31) | Output data type |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
| 6 | code_len | INT | Length of the source code |
|
||||
| 7 | bufsize | INT | Buffer size |
|
||||
| 8 | func_language | BINARY(31) | UDF programming language |
|
||||
| 9 | func_body | BINARY(16384) | UDF function body |
|
||||
| 10 | func_version | INT | UDF function version, starting from 0 and increased by 1 each time the UDF is updated |
|
||||
|
||||
## INS_INDEXES
|
||||
|
||||
Provides information about user-created indices. Similar to SHOW INDEX.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- |
|
||||
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
|
||||
| 2 | table_name | BINARY(192) | Table containing the specified index |
|
||||
| 3 | index_name | BINARY(192) | Index name |
|
||||
| 4 | db_name | BINARY(64) | Index column |
|
||||
| 5 | index_type | BINARY(10) | SMA or FULLTEXT index |
|
||||
| 6 | index_extensions | BINARY(256) | Other information For SMA indices, this shows a list of functions. For FULLTEXT indices, this is null. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :--------------: | ------------- | --------------------------------------------------------------------- |
|
||||
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
|
||||
| 2 | table_name | BINARY(192) | Table containing the specified index |
|
||||
| 3 | index_name | BINARY(192) | Index name |
|
||||
| 4 | db_name | BINARY(64) | Index column |
|
||||
| 5 | index_type | BINARY(10) | SMA or tag index |
|
||||
| 6 | index_extensions | BINARY(256) | Other information. For SMA/tag indices, this shows a list of functions |
|
||||
|
||||
## INS_STABLES
|
||||
|
||||
Provides information about supertables.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------ | ------------------------ |
|
||||
| 1 | stable_name | BINARY(192) | Supertable name |
|
||||
| 2 | db_name | BINARY(64) | All databases in the supertable |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 6 | last_update | TIMESTAMP | Last updated time |
|
||||
| 7 | table_comment | BINARY(1024) | Table description |
|
||||
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | stable_name | BINARY(192) | Supertable name |
|
||||
| 2 | db_name | BINARY(64) | All databases in the supertable |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 6 | last_update | TIMESTAMP | Last updated time |
|
||||
| 7 | table_comment | BINARY(1024) | Table description |
|
||||
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
||||
## INS_TABLES
|
||||
|
||||
Provides information about standard tables and subtables.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------ | ---------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | stable_name | BINARY(192) | Supertable name |
|
||||
| 6 | uid | BIGINT | Table ID |
|
||||
| 7 | vgroup_id | INT | Vgroup ID |
|
||||
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | table_comment | BINARY(1024) | Table description |
|
||||
| 10 | type | BINARY(20) | Table type |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | stable_name | BINARY(192) | Supertable name |
|
||||
| 6 | uid | BIGINT | Table ID |
|
||||
| 7 | vgroup_id | INT | Vgroup ID |
|
||||
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | table_comment | BINARY(1024) | Table description |
|
||||
| 10 | type | BINARY(20) | Table type |
|
||||
|
||||
## INS_TAGS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ---------------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | stable_name | BINARY(192) | Supertable name |
|
||||
| 4 | tag_name | BINARY(64) | Tag name |
|
||||
| 5 | tag_type | BINARY(64) | Tag type |
|
||||
| 6 | tag_value | BINARY(16384) | Tag value |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | --------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | stable_name | BINARY(192) | Supertable name |
|
||||
| 4 | tag_name | BINARY(64) | Tag name |
|
||||
| 5 | tag_type | BINARY(64) | Tag type |
|
||||
| 6 | tag_value | BINARY(16384) | Tag value |
|
||||
|
||||
## INS_COLUMNS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | ---------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | table_type | BINARY(21) | Table type |
|
||||
| 4 | col_name | BINARY(64) | Column name |
|
||||
| 5 | col_type | BINARY(32) | Column type |
|
||||
| 6 | col_length | INT | Column length |
|
||||
| 7 | col_precision | INT | Column precision |
|
||||
| 8 | col_scale | INT | Column scale |
|
||||
| 9 | col_nullable | INT | Column nullable |
|
||||
|
||||
## INS_USERS
|
||||
|
||||
Provides information about TDengine users.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | -------- |
|
||||
| 1 | user_name | BINARY(23) | User name |
|
||||
| 2 | privilege | BINARY(256) | User permissions |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ---------------- |
|
||||
| 1 | user_name | BINARY(23) | User name |
|
||||
| 2 | privilege | BINARY(256) | User permissions |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_GRANTS

Provides information about TDengine Enterprise Edition permissions.

| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | -------------------------------------------------- |
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | streams | BINARY(10) | Streams included in license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 5 | users | BINARY(10) | Users included in license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | accounts | BINARY(10) | Accounts included in license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 7 | storage | BINARY(21) | Storage space included in license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 8 | connections | BINARY(21) | Client connections included in license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | databases | BINARY(11) | Databases included in license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
| 11 | querytime | BINARY(9) | Total query time specified in license |
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
| 13 | expired | BINARY(5) | Whether the license has expired |
| 14 | expire_time | BINARY(19) | When the trial period expires |

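As the table notes, several of these column names collide with TDengine keywords, so they must be quoted with backticks when queried; a query of this shape illustrates the escaping.

```sql
-- `storage` and `dnodes` are TDengine keywords, so they are escaped with backticks
SELECT version, `storage`, `dnodes`, expire_time
FROM information_schema.ins_grants;
```
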
## INS_VGROUPS

Provides information about vgroups.

| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | ------------------------------------------------------ |
| 1 | vgroup_id | INT | Vgroup ID |
| 2 | db_name | BINARY(32) | Database name |
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | status | BINARY(10) | Vgroup status |
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
| 7 | v2_dnode | INT | Dnode ID of second vgroup member |
| 8 | v2_status | BINARY(10) | Status of second vgroup member |
| 9 | v3_dnode | INT | Dnode ID of third vgroup member |
| 10 | v3_status | BINARY(10) | Status of third vgroup member |
| 11 | nfiles | INT | Number of data and metadata files in the vgroup |
| 12 | file_size | INT | Size of the data and metadata files in the vgroup |
| 13 | tsma | TINYINT | Whether time-range-wise SMA is enabled. 1 means enabled; 0 means disabled. |

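For example, a query like the one below shows how many tables each vgroup of a given database hosts; the database name `power` is illustrative, and `tables` is escaped because it is a keyword.

```sql
-- Number of tables hosted by each vgroup of database `power`
SELECT vgroup_id, `tables`, status
FROM information_schema.ins_vgroups
WHERE db_name = 'power';
```
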
## INS_CONFIGS

Provides system configuration information.

| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | --------------- |
| 1 | name | BINARY(32) | Parameter |
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |

## INS_DNODE_VARIABLES

Provides dnode configuration information.

| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | --------------- |
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Parameter |
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |

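A query along these lines retrieves the configuration of one dnode; the dnode ID `1` is illustrative, and `value` is escaped because it is a keyword.

```sql
-- Configuration parameters reported by dnode 1
SELECT name, `value`
FROM information_schema.ins_dnode_variables
WHERE dnode_id = 1;
```
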
## INS_TOPICS

| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | -------------------------------------- |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |

||||
## INS_SUBSCRIPTIONS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------ | ------------------------ |
|
||||
| 1 | topic_name | BINARY(204) | Subscribed topic |
|
||||
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
|
||||
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
|
||||
| 4 | consumer_id | BIGINT | Consumer ID |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------- | --------------------------- |
|
||||
| 1 | topic_name | BINARY(204) | Subscribed topic |
|
||||
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
|
||||
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
|
||||
| 4 | consumer_id | BIGINT | Consumer ID |
|
||||
| 5 | offset | BINARY(64) | Consumption progress |
|
||||
| 6 | rows | BIGINT | Number of consumption items |
|
||||
|
||||
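For example, a query of this shape shows the consumption progress of each vgroup for one consumer group; the group name `group1` is illustrative, and `offset` and `rows` are quoted with backticks in case they collide with keywords on your version.

```sql
-- Consumption progress per vgroup for consumer group `group1`
SELECT topic_name, vgroup_id, `offset`, `rows`
FROM information_schema.ins_subscriptions
WHERE consumer_group = 'group1';
```
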
## INS_STREAMS

| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------- | ---------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |

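A query like the following lists the streams together with their keyword-named columns, which therefore need backtick escaping.

```sql
-- `watermark` and `trigger` are TDengine keywords, so they are escaped with backticks
SELECT stream_name, status, `watermark`, `trigger`
FROM information_schema.ins_streams;
```
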
@ -1,6 +1,7 @@
---
title: Performance_Schema Database
sidebar_label: Statistics
description: This document describes how to use the PERFORMANCE_SCHEMA database in TDengine.
---

TDengine includes a built-in database named `PERFORMANCE_SCHEMA` to provide access to database performance statistics. This document introduces the tables of PERFORMANCE_SCHEMA and their structure.

@ -68,7 +69,7 @@ Provides information about SQL queries currently running. Similar to SHOW QUERIE
| 1 | consumer_id | BIGINT | Consumer ID |
| 2 | consumer_group | BINARY(192) | Consumer group |
| 3 | client_id | BINARY(192) | Client ID (user-defined) |
| 4 | status | BINARY(20) | Consumer status. The possible statuses are: ready (the consumer is in a normal state), lost (the connection between the consumer and the mnode is broken), rebalance (redistribution of the vgroups that belong to the consumer is in progress), and unknown (the consumer is in an invalid state). |
| 5 | topics | BINARY(204) | Subscribed topic. Returns one row for each topic. |
| 6 | up_time | TIMESTAMP | Time of first connection to TDengine Server |
| 7 | subscribe_time | TIMESTAMP | Time of first subscription |

@ -1,9 +1,10 @@
---
title: SHOW Statement for Metadata
sidebar_label: SHOW Statement
description: This document describes how to use the SHOW statement in TDengine.
---

The `SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `select` to query the tables in database `INFORMATION_SCHEMA`.

## SHOW APPS

@ -35,7 +36,7 @@ Shows information about connections to the system.
SHOW CONSUMERS;
```

Shows information about all consumers in the system.

## SHOW CREATE DATABASE

@ -85,10 +86,10 @@ SHOW FUNCTIONS;

Shows all user-defined functions in the system.

## SHOW LICENCES

```sql
SHOW LICENCES;
SHOW GRANTS;
```

@ -100,6 +101,7 @@ Note: TDengine Enterprise Edition only.

```sql
SHOW INDEXES FROM tbl_name [FROM db_name];
SHOW INDEXES FROM [db_name.]tbl_name;
```

Shows indices that have been created.

@ -128,6 +130,14 @@ SHOW QNODES;

Shows information about qnodes in the system.

## SHOW QUERIES

```sql
SHOW QUERIES;
```

Shows the queries in progress in the system.

## SHOW SCORES

```sql
@ -178,10 +188,146 @@ SHOW TABLE DISTRIBUTED table_name;

Shows how table data is distributed.

Example: the following command displays the block distribution of table `d0` in detailed format.

```sql
show table distributed d0\G;
```

<details>
<summary> Show Example </summary>
<pre><code>
*************************** 1.row ***************************
_block_dist: Total_Blocks=[5] Total_Size=[93.65 KB] Average_size=[18.73 KB] Compression_Ratio=[23.98 %]

Total_Blocks: Table `d0` contains a total of 5 blocks

Total_Size: The total size of all the data blocks in table `d0` is 93.65 KB

Average_size: The average size of each block is 18.73 KB

Compression_Ratio: The data compression rate is 23.98%

*************************** 2.row ***************************
_block_dist: Total_Rows=[20000] Inmem_Rows=[0] MinRows=[3616] MaxRows=[4096] Average_Rows=[4000]

Total_Rows: Table `d0` contains 20,000 rows

Inmem_Rows: The number of rows still in memory, i.e. not yet committed to disk, is 0, i.e. there are no such rows

MinRows: The minimum number of rows in a block is 3,616

MaxRows: The maximum number of rows in a block is 4,096

Average_Rows: The average number of rows in a block is 4,000

*************************** 3.row ***************************
_block_dist: Total_Tables=[1] Total_Files=[2]

Total_Tables: The number of child tables, 1 in this example

Total_Files: The number of files storing the table's data, 2 in this example

*************************** 4.row ***************************

_block_dist: --------------------------------------------------------------------------------

*************************** 5.row ***************************

_block_dist: 0100 |

*************************** 6.row ***************************

_block_dist: 0299 |

*************************** 7.row ***************************

_block_dist: 0498 |

*************************** 8.row ***************************

_block_dist: 0697 |

*************************** 9.row ***************************

_block_dist: 0896 |

*************************** 10.row ***************************

_block_dist: 1095 |

*************************** 11.row ***************************

_block_dist: 1294 |

*************************** 12.row ***************************

_block_dist: 1493 |

*************************** 13.row ***************************

_block_dist: 1692 |

*************************** 14.row ***************************

_block_dist: 1891 |

*************************** 15.row ***************************

_block_dist: 2090 |

*************************** 16.row ***************************

_block_dist: 2289 |

*************************** 17.row ***************************

_block_dist: 2488 |

*************************** 18.row ***************************

_block_dist: 2687 |

*************************** 19.row ***************************

_block_dist: 2886 |

*************************** 20.row ***************************

_block_dist: 3085 |

*************************** 21.row ***************************

_block_dist: 3284 |

*************************** 22.row ***************************

_block_dist: 3483 ||||||||||||||||| 1 (20.00%)

*************************** 23.row ***************************

_block_dist: 3682 |

*************************** 24.row ***************************

_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)

Query OK, 24 row(s) in set (0.002444s)

</code></pre>
</details>

The above output shows the block distribution according to the number of rows in each block. From this example we can see:
- `_block_dist: 3483 ||||||||||||||||| 1 (20.00%)` means there is one block whose row count is between 3,483 and 3,681.
- `_block_dist: 3881 ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 4 (80.00%)` means there are 4 blocks whose row count is between 3,881 and 4,096.
- The number of blocks whose row count falls in any other range is zero.

Note that only the information about the data blocks in the data file is displayed here; the information about the data in the stt file is not displayed.

## SHOW TAGS

```sql
SHOW TAGS FROM child_table_name [FROM db_name];
SHOW TAGS FROM [db_name.]child_table_name;
```

Shows all tag information in a subtable.

@ -217,7 +363,7 @@ SHOW VARIABLES;
SHOW DNODE dnode_id VARIABLES;
```

Shows the working configuration of the parameters that must be the same on each node. You can also specify a dnode to show the working configuration for that node.

## SHOW VGROUPS

@ -225,12 +371,12 @@ Shows the working configuration of the parameters that must be the same on each
SHOW [db_name.]VGROUPS;
```

Shows information about all vgroups in the current database.

## SHOW VNODES

```sql
SHOW VNODES {dnode_id | dnode_endpoint};
```

Shows information about all vnodes in the system or about the vnodes for a specified dnode.

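As a brief illustration of these two statements, with an illustrative database name and dnode ID:

```sql
-- Vgroups of database `power`
SHOW power.VGROUPS;
-- Vnodes hosted on the dnode whose ID is 1
SHOW VNODES 1;
```
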
@ -1,7 +1,7 @@
---
title: User and Access Control
sidebar_label: Access Control
description: This document describes how to manage users and permissions in TDengine.
---

This document describes how to manage permissions in TDengine.

@ -16,7 +16,7 @@ This statement creates a user account.

The maximum length of user_name is 23 bytes.

The maximum length of password is 31 bytes. The password can include letters, digits, and special characters excluding single quotation marks, double quotation marks, backticks, backslashes, and spaces. The password cannot be empty.

`SYSINFO` indicates whether the user is allowed to view system information. `1` means allowed, `0` means not allowed. System information includes server configuration, dnode, vnode, storage. The default value is `1`.

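As a small sketch of these rules, the statement below creates a user who is not allowed to view system information; the user name and password are illustrative only.

```sql
-- Create a user with a short compliant password and SYSINFO disabled
CREATE USER test_user PASS 'Ab1#xyz' SYSINFO 0;
```
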
@ -1,22 +1,24 @@
---
title: User-Defined Functions (UDF)
sidebar_label: User-Defined Functions
description: This document describes the SQL statements related to user-defined functions (UDF) in TDengine.
---

You can create user-defined functions and import them into TDengine.

## Create UDF

The SQL command to load a UDF DLL into TDengine must be executed on the host where the generated UDF DLL resides. This operation cannot be done through the REST interface or web console. Once created, any client of the current TDengine deployment can use these UDF functions in its SQL commands. UDFs are stored in the management node of TDengine and remain available after TDengine is restarted.

When creating a UDF, its type, i.e. scalar function or aggregate function, must be specified. If the specified type is wrong, SQL statements using the function will fail with errors. The input data type and output data type must be consistent with the UDF definition.

- Create Scalar Function
```sql
CREATE [OR REPLACE] FUNCTION function_name AS library_path OUTPUTTYPE output_type [LANGUAGE 'C|Python'];
```

- OR REPLACE: if the UDF exists, the UDF properties are modified
- function_name: The scalar function name to be used in the SQL statement
- LANGUAGE 'C|Python': the programming language of the UDF. Now C or Python (v3.7+) is supported. If this clause is omitted, C is assumed as the programming language.
- library_path: For the C programming language, the absolute path of the DLL file including the name of the shared object file (.so). For the Python programming language, the absolute path of the Python UDF script. The path must be quoted with single or double quotes.
- output_type: The data type of the results of the UDF.

For example, the following SQL statement can be used to create a UDF from `libbitand.so`.

|
|||
```sql
|
||||
CREATE FUNCTION bit_and AS "/home/taos/udf_example/libbitand.so" OUTPUTTYPE INT;
|
||||
```
|
||||
For Example, the following SQL statement can be used to modify the existing function `bit_and`. The OUTPUT type is changed to BIGINT and the programming language is changed to Python.
|
||||
|
||||
```sql
|
||||
CREATE OR REPLACE FUNCTION bit_and AS "/home/taos/udf_example/bit_and.py" OUTPUTTYPE BIGINT LANGUAGE 'Python';
|
||||
```
|
||||
|
||||
- Create Aggregate Function
|
||||
```sql
|
||||
CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ BUFSIZE buffer_size ];
|
||||
```
|
||||
|
||||
- function_name: The aggregate function name to be used in SQL statement which must be consistent with the udfNormalFunc name and is also the name of the compiled DLL (.so file).
|
||||
- library_path: The absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
|
||||
- OR REPLACE: if the UDF exists, the UDF properties are modified
|
||||
- function_name: The aggregate function name to be used in the SQL statement
|
||||
- LANGUAGE 'C|Python': the programming language of the UDF. Now C or Python is supported. If this clause is omitted, C is assumed as the programming language.
|
||||
- library_path: For C programming language, The absolute path of the DLL file including the name of the shared object file (.so). For Python programming language, the absolute path of the Python UDF script. The path must be quoted with single or double quotes.
|
||||
- output_type: The output data type, the value is the literal string of the supported TDengine data type.
|
||||
- buffer_size: The size of the intermediate buffer in bytes. This parameter is optional.
|
||||
|
||||
|
@ -40,7 +48,12 @@ CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [
```sql
CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 8;
```

For example, the following SQL statement modifies the buffer size of the existing UDF `l2norm` to 64.

```sql
CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 64;
```

For more information about user-defined functions, see [User-Defined Functions](/develop/udf).

## Manage UDF

@ -60,9 +73,9 @@ SHOW FUNCTIONS;

## Call UDF

The function name specified when creating a UDF can be used directly in SQL statements, just like built-in functions. For example:
```sql
SELECT bit_and(c1,c2) FROM table;
```

The above SQL statement invokes the function `bit_and` for columns c1 and c2 on the table. You can use query keywords like WHERE with user-defined functions.

@ -1,14 +1,15 @@
---
title: Indexing
sidebar_label: Indexing
description: This document describes the SQL statements related to indexing in TDengine.
---

TDengine supports SMA and tag indexing.

## Create an Index

```sql
CREATE INDEX index_name ON tb_name (col_name [, col_name] ...)

CREATE SMA INDEX index_name ON tb_name index_option
@ -27,9 +28,23 @@ Performs pre-aggregation on the specified column over the time window defined by
- WATERMARK: Enter a value between 0ms and 900000ms. The most precise unit supported is milliseconds. The default value is 5 seconds. This option can be used only on supertables.
- MAX_DELAY: Enter a value between 1ms and 900000ms. The most precise unit supported is milliseconds. The default value is the value of interval provided that it does not exceed 900000ms. This option can be used only on supertables. Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance.

```sql
DROP DATABASE IF EXISTS d0;
CREATE DATABASE d0;
USE d0;
CREATE TABLE IF NOT EXISTS st1 (ts timestamp, c1 int, c2 float, c3 double) TAGS (t1 int unsigned);
CREATE TABLE ct1 USING st1 TAGS(1000);
CREATE TABLE ct2 USING st1 TAGS(2000);
INSERT INTO ct1 VALUES(now+0s, 10, 2.0, 3.0);
INSERT INTO ct1 VALUES(now+1s, 11, 2.1, 3.1)(now+2s, 12, 2.2, 3.2)(now+3s, 13, 2.3, 3.3);
CREATE SMA INDEX sma_index_name1 ON st1 FUNCTION(max(c1),max(c2),min(c1)) INTERVAL(5m,10s) SLIDING(5m) WATERMARK 5s MAX_DELAY 1m;
-- query from SMA Index
ALTER LOCAL 'querySmaOptimize' '1';
SELECT max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m);
SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m);
-- query from raw data
ALTER LOCAL 'querySmaOptimize' '0';
```

### FULLTEXT Indexing

Creates a text index for the specified column. FULLTEXT indexing improves performance for queries with text filtering. The index_option syntax is not supported for FULLTEXT indexing. FULLTEXT indexing is supported for JSON tag columns only. Multiple columns cannot be indexed together. However, separate indices can be created for each column.

## Delete an Index

@ -40,8 +55,8 @@ DROP INDEX index_name;
## View Indices

```sql
SHOW INDEXES FROM tbl_name [FROM db_name];
SHOW INDEXES FROM [db_name.]tbl_name;
```

Shows indices that have been created for the specified database or table.

@ -1,6 +1,7 @@
---
title: Error Recovery
sidebar_label: Error Recovery
description: This document describes the SQL statements related to error recovery in TDengine.
---

In a complex environment, connections and query tasks may encounter errors or fail to return in a reasonable time. If this occurs, you can terminate the connection or task.

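As a minimal sketch of what such termination looks like, the statements below use the placeholder identifiers `conn_id` and `kill_id`; the real values must be taken from the output of `SHOW CONNECTIONS` and `SHOW QUERIES` on your own system.

```sql
-- Terminate a problematic connection (conn_id comes from SHOW CONNECTIONS)
KILL CONNECTION conn_id;
-- Terminate a long-running query (kill_id comes from SHOW QUERIES)
KILL QUERY kill_id;
```
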
@ -1,7 +1,7 @@
---
title: Changes in TDengine 3.0
sidebar_label: Changes in TDengine 3.0
description: This document describes how TDengine SQL has changed in version 3.0 compared with previous versions.
---

## Basic SQL Elements

@ -18,6 +18,7 @@ description: "This document explains how TDengine SQL has changed in version 3.0
| 8 | Mixed operations | Enhanced | Mixing scalar and vector operations in queries has been enhanced and is supported in all SELECT clauses.
| 9 | Tag operations | Added | Tag columns can be used in queries and clauses like data columns.
| 10 | Timeline clauses and time functions in supertables | Enhanced | When PARTITION BY is not used, data in supertables is merged into a single timeline.
| 11 | GEOMETRY | Added | Geometry

## SQL Syntax

@ -27,13 +28,13 @@ The following data types can be used in the schema for standard tables.
|
|||
| - | :------- | :-------- | :------- |
|
||||
| 1 | ALTER ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
|
||||
| 2 | ALTER ALL DNODES | Added | Modifies the configuration of all dnodes.
|
||||
| 3 | ALTER DATABASE | Modified | Deprecated<ul><li>QUORUM: Specified the required number of confirmations. STRICT is now used to specify strong or weak consistency. The STRICT parameter cannot be modified. </li><li>BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>COMP: Cannot be modified. <br/>Added</li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. <br/>Modified</li><li>REPLICA: Cannot be modified. </li><li>KEEP: Now supports units. </li></ul>
|
||||
| 3 | ALTER DATABASE | Modified | Deprecated<ul><li>QUORUM: Specified the required number of confirmations. TDengine 3.0 provides strict consistency by default and doesn't allow to change to weak consistency. </li><li>BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>COMP: Cannot be modified. <br/>Added</li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. </li><li>WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. <br/>Modified</li><li>REPLICA: Cannot be modified. </li><li>KEEP: Now supports units. </li></ul>
|
||||
| 4 | ALTER STABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a supertable. </li></ul>
|
||||
| 5 | ALTER TABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a standard table. </li><li>TTL: Specifies the time-to-live for a standard table. </li></ul>
|
||||
| 6 | ALTER USER | Modified | Deprecated<ul><li>PRIVILEGE: Specified user permissions. Replaced by GRANT and REVOKE. <br/>Added</li><li>ENABLE: Enables or disables a user. </li><li>SYSINFO: Specifies whether a user can query system information. </li></ul>
|
||||
| 7 | COMPACT VNODES | Not supported | Compacted the data on a vnode. Not supported.
|
||||
| 8 | CREATE ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
|
||||
| 9 | CREATE DATABASE | Modified | Deprecated<ul><li>BLOCKS: Specified the number of blocks for each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHE: Specified the size of the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>DAYS: The length of time to store in a single file. Replaced by DURATION. </li><li>FSYNC: Specified the fsync interval when WAL was set to 2. Replaced by WAL_FSYNC_PERIOD. </li><li>QUORUM: Specified the number of confirmations required. STRICT is now used to specify strong or weak consistency. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>WAL: Specified the WAL level. Replaced by WAL_LEVEL. <br/>Added</li><li>BUFFER: Specifies the size of the write cache pool for each vnode. </li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>DURATION: Replaces DAYS. Now supports units. </li><li>PAGES: Specifies the number of pages in the metadata storage engine cache on each vnode. </li><li>PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. </li><li>RETENTIONS: Specifies the aggregation interval and retention period </li><li>STRICT: Specifies whether strong data consistency is enabled. </li><li>SINGLE_STABLE: Specifies whether a database can contain multiple supertables. </li><li>VGROUPS: Specifies the initial number of vgroups when a database is created. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. </li><li>WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_ROLL_PERIOD: Specifies the WAL rotation period. </li><li>WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. <br/>Modified</li><li>KEEP: Now supports units. </li></ul>
|
||||
| 9 | CREATE DATABASE | Modified | Deprecated<ul><li>BLOCKS: Specified the number of blocks for each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHE: Specified the size of the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>DAYS: The length of time to store in a single file. Replaced by DURATION. </li><li>FSYNC: Specified the fsync interval when WAL was set to 2. Replaced by WAL_FSYNC_PERIOD. </li><li>QUORUM: Specified the number of confirmations required. STRICT is now used to specify strong or weak consistency. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>WAL: Specified the WAL level. Replaced by WAL_LEVEL. <br/>Added</li><li>BUFFER: Specifies the size of the write cache pool for each vnode. </li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>DURATION: Replaces DAYS. Now supports units. </li><li>PAGES: Specifies the number of pages in the metadata storage engine cache on each vnode. </li><li>PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. </li><li>RETENTIONS: Specifies the aggregation interval and retention period </li><li>STRICT: Specifies whether strong data consistency is enabled. </li><li>SINGLE_STABLE: Specifies whether a database can contain multiple supertables. </li><li>VGROUPS: Specifies the initial number of vgroups when a database is created. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. </li><li>WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. <br/>Modified</li><li>KEEP: Now supports units. </li></ul>
|
||||
| 10 | CREATE DNODE | Modified | Now supports specifying hostname and port separately<ul><li>CREATE DNODE dnode_host_name PORT port_val</li></ul>
|
||||
| 11 | CREATE INDEX | Added | Creates an SMA index.
|
||||
| 12 | CREATE MNODE | Added | Creates an mnode.
|
||||
|
@ -54,7 +55,6 @@ The following data types can be used in the schema for standard tables.
| 27 | GRANT | Added | Grants permissions to a user.
| 28 | KILL TRANSACTION | Added | Terminates an mnode transaction.
| 29 | KILL STREAM | Deprecated | Terminated a continuous query. The continuous query feature has been replaced with the stream processing feature.
| 30 | MERGE VGROUP | Added | Merges vgroups.
| 31 | REVOKE | Added | Revokes permissions from a user.
| 32 | SELECT | Modified | <ul><li>SELECT does not use the implicit results column. Output columns must be specified in the SELECT clause. </li><li>DISTINCT support is enhanced. In previous versions, DISTINCT only worked on the tag column and could not be used with JOIN or GROUP BY. </li><li>JOIN support is enhanced. The following are now supported after JOIN: a WHERE clause with OR, operations on multiple tables, and GROUP BY on multiple tables. </li><li>Subqueries after FROM are enhanced. Levels of nesting are no longer restricted. Subqueries can be used with UNION ALL. Other syntax restrictions are eliminated. </li><li>All scalar functions can be used after WHERE. </li><li>GROUP BY is enhanced. You can group by any scalar expression or combination thereof. </li><li>SESSION can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. </li><li>STATE_WINDOW can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. </li><li>ORDER BY is enhanced. It is no longer required to use ORDER BY and GROUP BY together. There is no longer a restriction on the number of order expressions. NULLS FIRST and NULLS LAST syntax has been added. Any expression that conforms to the ORDER BY semantics can be used. </li><li>Added PARTITION BY syntax. PARTITION BY replaces GROUP BY tags. </li></ul>
| 33 | SHOW ACCOUNTS | Deprecated | This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."

@ -76,8 +76,9 @@ The following data types can be used in the schema for standard tables.
| 49 | SHOW TRANSACTIONS | Added | Shows all running transactions in the system.
| 50 | SHOW DNODE VARIABLES | Added | Shows the configuration of the specified dnode.
| 51 | SHOW VNODES | Not supported | Shows information about vnodes in the system. Not supported.
| 52 | SPLIT VGROUP | Added | Splits a vgroup into two vgroups.
| 53 | TRIM DATABASE | Added | Deletes data that has expired and orders the remaining data in accordance with the storage configuration.
| 54 | REDISTRIBUTE VGROUP | Added | Adjusts the distribution of vnodes in a vgroup.
| 55 | BALANCE VGROUP | Added | Automatically adjusts the distribution of vnodes in vgroups.

## SQL Functions

@ -1,6 +1,6 @@
---
title: TDengine SQL
description: This document describes the syntax and functions supported by TDengine SQL.
---

This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL. TDengine SQL has been enhanced in version 3.0, and the query engine has been rearchitected. For information about how TDengine SQL has changed, see [Changes in TDengine 3.0](../taos-sql/changes).

@ -13,7 +13,7 @@ Syntax Specifications used in this chapter:
- Information that you input is given in lowercase.
- \[ \] means optional input, excluding [] itself.
- | means one of a few options, excluding | itself.
- ... means the item prior to it can be repeated multiple times.

To better demonstrate the syntax, usage and rules of TDengine SQL, hereinafter it's assumed that there is a data set from electric meters. Each meter collects 3 data measurements: current, voltage, and phase. The data model is shown below:

@ -1,6 +1,6 @@
---
title: Install and Uninstall
description: This document describes how to install, upgrade, and uninstall TDengine.
---

import Tabs from "@theme/Tabs";

@ -15,14 +15,14 @@ About details of installing TDenine, please refer to [Installation Guide](../../
## Uninstall

<Tabs>
<TabItem label="Uninstall by apt-get" value="aptremove">

A TDengine installation performed with apt-get can be uninstalled as below:

```bash
$ sudo apt-get remove tdengine
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  tdengine

@ -35,7 +35,7 @@ TDengine is removed successfully!

```

If you have installed taos-tools, please uninstall it before uninstalling TDengine. The command is as follows:

```
$ sudo apt remove taostools

@ -111,8 +111,20 @@ taos tools is uninstalled successfully!
```

</TabItem>

<TabItem label="Windows uninstall" value="windows">
Run C:\TDengine\unins000.exe to uninstall TDengine on a Windows system.
</TabItem>

<TabItem label="Mac uninstall" value="mac">

TDengine can be uninstalled as below:

```
$ rmtaos
TDengine is removed successfully!
```

</TabItem>
</Tabs>

@ -150,13 +162,13 @@ There are two aspects in upgrade operation: upgrade installation package and upg

To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.

Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections; only if the first 2 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below (a sample of the flush step is shown after this list):
- Stop inserting data
- Make sure all data is persisted to disk; you can use the `flush database` command
- Stop the cluster of TDengine
- Uninstall the old version and install the new version
- Start the cluster of TDengine
- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss
- Run some simple data insertion statements to make sure the cluster works well
- Restore business services
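
A minimal sketch of the flush step, assuming a database named `power` (the database name is illustrative):

```sql
-- Persist all data in the write cache of database `power` to disk before stopping the cluster
FLUSH DATABASE power;
```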