Merge branch '3.0' into test3.0/lihui

commit b18b4956fc
@@ -88,4 +88,3 @@ Standard: Auto
TabWidth: 8
UseTab: Never
...

@@ -0,0 +1 @@
*.py linguist-detectable=false
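This attribute (presumably added to a .gitattributes file, which is not named in the diff) keeps Python files out of GitHub's language statistics. One way to confirm the rule is picked up is to query the attribute for a path; the path below is purely illustrative.

```bash
# Ask git which linguist attribute applies to a Python file.
# tests/example.py is a hypothetical path used only for illustration.
git check-attr linguist-detectable -- tests/example.py
# Expected output when the rule matches:
# tests/example.py: linguist-detectable: false
```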
@@ -0,0 +1,58 @@
# Contributing

We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and submit your code on GitHub. However, we ask developers to follow our guidelines so that they can contribute more effectively.

## Reporting bugs

- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. Please give a **detailed description** of the problem you encountered, ideally with the exact steps needed to reproduce it.

- Attachments containing the log files generated by the bug are welcome.

## Important code submission rules

- Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept the agreement, please stop submitting.

- Please solve an issue or add a feature that is registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).

- If no corresponding issue or feature is found in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.

- When submitting code to our repository, please create a **PR that includes the issue number**.

## Contribution guidelines

1. Please write in a friendly tone.

2. **Active voice** is generally better than passive voice. A sentence in the active voice highlights the person performing the action, whereas the passive voice highlights the recipient of the action.

3. Documentation writing advice

   - Spell the product name "TDengine" correctly: "TD" is in capital letters, and there is no space between "TD" and "engine" **(correct spelling: TDengine)**.

   - Leave only one space after a period or other punctuation mark.

4. Prefer **simple sentences** over complex ones.

## Gifts for contributors

Every developer who contributes to TDengine, whether through code contributions, bug fixes, feature requests, or documentation changes, will receive a **special contributor souvenir gift**!

<p align="left">
<img src="docs/assets/contributing-cup.jpg" alt="" width="200" />
<img src="docs/assets/contributing-notebook.jpg" alt="" width="200" />
<img src="docs/assets/contributing-shirt.jpg" alt="" width="200" />

The TDengine community is committed to helping more developers understand and use TDengine.

Please fill out the **Contributor Submission Form** to choose the gift you would like to receive.

- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)

## Contact us

If you have an issue that needs to be resolved or a question that needs to be answered, you can add our WeChat account: TDengineECO.
@@ -1,15 +1,64 @@
# Contributing

-We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs and even submit your code on GitHub. However, we would like developers to follow our guides to contribute for better corporation.
+We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and even submit your code on GitHub. However, we would like developers to follow the guidelines in this document to ensure effective cooperation.

-## Report bugs
+## Reporting a bug

-Any users can report bugs to us through the [github issue tracker](https://github.com/taosdata/TDengine/issues). We appreciate a detailed description of the problem you met. It is better to provide the detailed steps on reproducing the bug. Otherwise, an appendix with log files generated by the bug is welcome.
+- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. We would appreciate it if you could provide **a detailed description** of the problem you encountered, including steps to reproduce it.

-## Read the contributor license agreement
+- Attaching the log files caused by the bug is really appreciated.

-It is required to agree the Contributor Licence Agreement(CLA) before a user submitting his/her code patch. Follow the [TaosData CLA](https://www.taosdata.com/en/contributor/) link to read through the agreement.
+## Guidelines for committing code

-## Submit your code
+- You must agree to the **Contributor License Agreement (CLA) before submitting your code patch**. Follow the **[TAOSData CLA](https://cla-assistant.io/taosdata/TDengine)** link to read through and sign the agreement. If you do not accept the agreement, your contributions cannot be accepted.

-Before submitting your code, make sure to [read the contributor license agreement](#read-the-contributor-license-agreement) beforehand. If you don't accept the aggreement, please stop submitting. Your submission means you have accepted the agreement. Your submission should solve an issue or add a feature registered in the [github issue tracker](https://github.com/taosdata/TDengine/issues). If no corresponding issue or feature is found in the issue tracker, please create one. When submitting your code to our repository, please create a pull request with the issue number included.
+- Please solve an issue or add a feature registered in the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**.

+- If no corresponding issue or feature is found in the issue tracker, please **create one**.

+- When submitting your code to our repository, please create a pull request with the **issue number** included.
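For illustration only, a workflow that satisfies these rules might look like the following; the issue number 1234 and the branch name are hypothetical placeholders, not references to a real issue.

```bash
# Create a topic branch for the issue you are addressing (issue number is hypothetical).
git checkout -b fix/issue-1234

# Commit your change, mentioning the issue number.
git commit -m "fix: correct result-set handling (#1234)"

# If you use the GitHub CLI, you can open the pull request with the issue reference included.
gh pr create --title "fix: correct result-set handling (#1234)" --body "Fixes #1234"
```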
+## Guidelines for communicating

+1. Please be **nice and polite** in the description.

+2. **Active voice is better than passive voice in general**. A sentence in the active voice highlights who is performing the action, rather than the recipient of the action, as the passive voice does.

+3. Documentation writing advice

+   - Spell the product name "TDengine" correctly. "TD" is written in capital letters, and there is no space between "TD" and "engine" (**correct spelling: TDengine**).

+   - Please **capitalize the first letter** of every sentence.

+   - Leave **only one space** after periods or other punctuation marks.

+   - Use **American spelling**.

+   - When possible, **use second person** rather than first person (e.g. “You are recommended to use a reverse proxy such as Nginx.” rather than “We recommend using a reverse proxy such as Nginx.”).

+4. Use **simple sentences**, rather than complex sentences.

+## Gifts for the contributors

+As long as you contribute to TDengine, whether it's code contributions to fix bugs or feature requests, or documentation changes, **you are eligible for a very special Contributor Souvenir Gift!**

+**You can choose one of the following gifts:**

+<p align="left">
+<img src="docs/assets/contributing-cup.jpg" alt="" width="200" />
+<img src="docs/assets/contributing-notebook.jpg" alt="" width="200" />
+<img src="docs/assets/contributing-shirt.jpg" alt="" width="200" />

+The TDengine community is committed to making TDengine accepted and used by more developers.

+Just fill out the **Contributor Submission Form** to choose your desired gift.

+- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)

+## Contact us

+If you have any problems or questions that need help from us, please feel free to add our WeChat account: TDengineECO.
Jenkinsfile2 (62 lines changed)

@@ -1,6 +1,7 @@
import hudson.model.Result
import hudson.model.*;
import jenkins.model.CauseOfInterruption
+docs_only=0
node {
}

@@ -29,6 +30,49 @@ def abort_previous(){
if (buildNumber > 1) milestone(buildNumber - 1)
milestone(buildNumber)
}
+def check_docs() {
+  if (env.CHANGE_URL =~ /\/TDengine\//) {
+    sh '''
+    hostname
+    date
+    env
+    '''
+    sh '''
+    cd ${WKC}
+    git reset --hard
+    git clean -fxd
+    rm -rf examples/rust/
+    git remote prune origin
+    git fetch
+    '''
+    script {
+      sh '''
+      cd ${WKC}
+      git checkout ''' + env.CHANGE_TARGET + '''
+      '''
+    }
+    sh '''
+    cd ${WKC}
+    git remote prune origin
+    git pull >/dev/null
+    git fetch origin +refs/pull/${CHANGE_ID}/merge
+    git checkout -qf FETCH_HEAD
+    '''
+    def file_changed = sh (
+      script: '''
+      cd ${WKC}
+      git --no-pager diff --name-only FETCH_HEAD `git merge-base FETCH_HEAD ${CHANGE_TARGET}`|grep -v "^docs/en/"|grep -v "^docs/zh/" || :
+      ''',
+      returnStdout: true
+    ).trim()
+    if (file_changed == '') {
+      echo "docs PR"
+      docs_only=1
+    } else {
+      echo file_changed
+    }
+  }
+}
def pre_test(){
sh '''
hostname

@@ -43,6 +87,7 @@ def pre_test(){
cd ${WKC}
git reset --hard
git clean -fxd
+rm -rf examples/rust/
git remote prune origin
git fetch
'''
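The check_docs() step added above decides whether a pull request touches anything outside the documentation tree: it diffs the merge result against the merge base with the target branch and filters out paths under docs/en/ and docs/zh/; if nothing remains, docs_only is set to 1 and the test stages are skipped. A rough local equivalent, assuming the PR branch is checked out and assuming the target branch is named 3.0, is sketched below.

```bash
#!/usr/bin/env bash
# Sketch: reproduce the docs-only check from check_docs() locally.
# TARGET is an assumption; use whatever base branch the PR actually targets.
TARGET=3.0

# List changed files relative to the merge base, ignoring the documentation trees.
changed=$(git --no-pager diff --name-only HEAD "$(git merge-base HEAD "$TARGET")" \
  | grep -v "^docs/en/" | grep -v "^docs/zh/" || :)

if [ -z "$changed" ]; then
  echo "docs-only change: CI would skip the test stages"
else
  echo "non-docs files changed:"
  echo "$changed"
fi
```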
@@ -306,10 +351,27 @@ pipeline {
WKPY = '/var/lib/jenkins/workspace/taos-connector-python'
}
stages {
+  stage('check') {
+    when {
+      allOf {
+        not { expression { env.CHANGE_BRANCH =~ /docs\// }}
+        not { expression { env.CHANGE_URL =~ /\/TDinternal\// }}
+      }
+    }
+    parallel {
+      stage('check docs') {
+        agent{label " worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
+        steps {
+          check_docs()
+        }
+      }
+    }
+  }
stage('run test') {
when {
allOf {
not { expression { env.CHANGE_BRANCH =~ /docs\// }}
+expression { docs_only == 0 }
}
}
parallel {
README-CN.md (32 lines changed)

@@ -21,17 +21,17 @@
TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series Database, TSDB). TDengine can be widely used in IoT, Industrial Internet, Connected Vehicles, IT operations, finance, and other fields. In addition to the core time-series database features, TDengine also provides caching, data subscription, stream processing, and other capabilities, making it a minimalist time-series data processing platform that minimizes system design complexity and reduces development and operating costs. Compared with other time-series databases, the main advantages of TDengine are as follows:

-- High performance: Thanks to its innovative storage engine design, TDengine is more than 10 times faster than general-purpose databases for both data ingestion and queries, far outperforms other time-series databases, and needs less than 1/10 of the storage space of general-purpose databases.
+- **High performance**: Thanks to its innovative storage engine design, TDengine is more than 10 times faster than general-purpose databases for both data ingestion and queries, far outperforms other time-series databases, and needs less than 1/10 of the storage space of general-purpose databases.

-- Cloud native: With a native distributed design that takes full advantage of cloud platforms, TDengine provides horizontal scalability together with elasticity, resilience, and observability; it supports Kubernetes deployment and runs on public, private, and hybrid clouds.
+- **Cloud native**: With a native distributed design that takes full advantage of cloud platforms, TDengine provides horizontal scalability together with elasticity, resilience, and observability; it supports Kubernetes deployment and runs on public, private, and hybrid clouds.

-- Minimalist time-series data platform: TDengine has built-in message queuing, caching, stream processing, and other features, so applications no longer need to integrate software such as Kafka/Redis/HBase/Spark, which greatly reduces system complexity and lowers application development and operating costs.
+- **Minimalist time-series data platform**: TDengine has built-in message queuing, caching, stream processing, and other features, so applications no longer need to integrate software such as Kafka/Redis/HBase/Spark, which greatly reduces system complexity and lowers application development and operating costs.

-- Analytics: TDengine supports SQL and adds SQL extensions for time-series-specific analysis. Through super tables, separation of storage and compute, partitioning and sharding, pre-computation, user-defined functions, and other techniques, TDengine provides powerful analytics capabilities.
+- **Analytics**: TDengine supports SQL and adds SQL extensions for time-series-specific analysis. Through super tables, separation of storage and compute, partitioning and sharding, pre-computation, user-defined functions, and other techniques, TDengine provides powerful analytics capabilities.

-- Simple and easy to use: With no dependencies, installation and clustering take only seconds; REST interfaces and connectors for many languages are provided, with seamless integration with numerous third-party tools; a command-line program makes administration and ad-hoc queries easy; and a variety of operation and maintenance tools are available.
+- **Simple and easy to use**: With no dependencies, installation and clustering take only seconds; REST interfaces and connectors for many languages are provided, with seamless integration with numerous third-party tools; a command-line program makes administration and ad-hoc queries easy; and a variety of operation and maintenance tools are available.

-- Open-source core: All of TDengine's core code, including the cluster feature, is open source. As of August 1, 2022, there are more than 135.9k running instances worldwide, with 18.7k GitHub stars, 4.4k forks, and an active community.
+- **Open-source core**: All of TDengine's core code, including the cluster feature, is open source. As of August 1, 2022, there are more than 135.9k running instances worldwide, with 18.7k GitHub stars, 4.4k forks, and an active community.

# Documentation

@@ -210,14 +210,14 @@ cmake .. -G "NMake Makefiles"
nmake
```

-### On macOS
+<!-- ### On macOS

Install the Xcode command-line tools and cmake. On Catalina and Big Sur, XCode 11.4+ is required.

```bash
mkdir debug && cd debug
cmake .. && cmake --build .
-```
+``` -->

# Installation

@@ -303,14 +303,14 @@ Query OK, 2 row(s) in set (0.001700s)
TDengine provides rich application development interfaces, including C/C++, Java, Python, Go, Node.js, C#, and RESTful, to help users develop applications quickly:

-- [Java](https://docs.taosdata.com/reference/connector/java/)
+- [Java](https://docs.taosdata.com/connector/java/)
-- [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
+- [C/C++](https://docs.taosdata.com/connector/cpp/)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
+- [Python](https://docs.taosdata.com/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
+- [Go](https://docs.taosdata.com/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
+- [Node.js](https://docs.taosdata.com/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
+- [Rust](https://docs.taosdata.com/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
+- [C#](https://docs.taosdata.com/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
+- [RESTful API](https://docs.taosdata.com/connector/rest-api/)

# Become a community contributor
README.md (68 lines changed)

@@ -15,43 +15,33 @@
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[](https://bestpractices.coreinfrastructure.org/projects/4201)

English | [简体中文](README-CN.md) | We are hiring, check [here](https://tdengine.com/careers)

# What is TDengine?

-TDengine is an open source, high performance , cloud native time-series database (Time-Series Database, TSDB).
-TDengine can be optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT, IT operation and maintenance, finance and other fields. In addition to the core time series database functions, TDengine also provides functions such as caching, data subscription, and streaming computing. It is a minimalist time series data processing platform that minimizes the complexity of system design and reduces R&D and operating costs. Compared with other time series databases, the main advantages of TDengine are as follows:
-- High-Performance: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression.
-- Simplified Solution: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
-- Cloud Native: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
-- Ease of Use: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
-- Easy Data Analytics: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
-- Open Source: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub, an active developer community, and over 137k running instances worldwide.
+TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/what-is-a-time-series-database/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
+- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
+- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
+- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
+- **[Ease of Use](https://docs.tdengine.com/get-started/docker/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
+- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
+- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.

# Documentation

-For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.taosdata.com) ([TDengine 文档](https://docs.taosdata.com))
+For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))

# Building

-At the moment, TDengine server supports running on Linux, Windows systems.Any OS application can also choose the RESTful interface of taosAdapter to connect the taosd service . TDengine supports X64/ARM64 CPU , and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
+At the moment, TDengine server supports running on Linux and Windows systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.

-You can choose to install through source code according to your needs, [container](https://docs.taosdata.com/get-started/docker/), [installation package](https://docs.taosdata.com/get-started/package/) or [Kubenetes](https://docs.taosdata.com/deployment/k8s/) to install. This quick guide only applies to installing from source.
+You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.

TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were part of TDengine. By default, building TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled together with TDengine.

@@ -67,7 +57,6 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev
#### Install build dependencies for taosTools

To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.

```bash

@@ -91,7 +80,6 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
#### Install build dependencies for taosTools on CentOS

#### CentOS 7.9

```

@@ -109,14 +97,14 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), it leads to a cmake prompt that libsnappy is not found. But snappy still works well.

-If the powertools installation fails, you can try to use:
+If the PowerTools installation fails, you can try to use:

```
-sudo yum config-manager --set-enabled Powertools
+sudo yum config-manager --set-enabled powertools
```

### Setup golang environment

TDengine includes a few components like taosAdapter developed by Go language. Please refer to golang.org official documentation for golang environment setup.

Please use version 1.14+. For the user in China, we recommend using a proxy to accelerate package downloading.
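As a concrete illustration of the proxy recommendation above, a commonly used public Go module proxy can be configured as follows; the proxy URL is only an example, not a project requirement, and any GOPROXY-compatible mirror works.

```bash
# Point the Go toolchain at a module proxy (example mirror; adjust to your environment).
go env -w GOPROXY=https://goproxy.cn,direct
# Verify the setting.
go env GOPROXY
```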
@@ -134,7 +122,7 @@ cmake .. -DBUILD_HTTP=false
### Setup rust environment

-TDengine includes a few compoments developed by Rust language. Please refer to rust-lang.org official documentation for rust environment setup.
+TDengine includes a few components developed by Rust language. Please refer to rust-lang.org official documentation for rust environment setup.

## Get the source codes

@@ -145,7 +133,6 @@ git clone https://github.com/taosdata/TDengine.git
cd TDengine
```

You can modify the file ~/.gitconfig to use ssh protocol instead of https for better download speed. You will need to upload ssh public key to GitHub first. Please refer to GitHub official documentation for detail.

```
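A minimal sketch of the kind of rewrite rule meant here, applied with git config rather than by editing ~/.gitconfig directly; the effect is the same, and you should adjust it to your own setup.

```bash
# Rewrite HTTPS GitHub URLs to SSH for faster cloning (requires an SSH key uploaded to GitHub).
git config --global url."git@github.com:".insteadOf "https://github.com/"
# Confirm the rule was written to ~/.gitconfig.
git config --global --get 'url.git@github.com:.insteadof'
```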
@@ -155,14 +142,12 @@ You can modify the file ~/.gitconfig to use ssh protocol instead of https for be
## Special Note

[JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.

## Build TDengine

### On Linux platform

You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below:

```bash

@@ -178,7 +163,6 @@ cmake .. -DBUILD_TOOLS=true
make
```

You can use Jemalloc as memory allocator instead of glibc:

```

@@ -227,14 +211,14 @@ cmake .. -G "NMake Makefiles"
nmake
```

-### On macOS platform
+<!-- ### On macOS platform

Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.

```shell
mkdir debug && cd debug
cmake .. && cmake --build .
-```
+``` -->

# Installing

@@ -272,6 +256,7 @@ After building successfully, TDengine can be installed by:
nmake install
```

+<!--
## On macOS platform

After building successfully, TDengine can be installed by:

@@ -279,6 +264,7 @@ After building successfully, TDengine can be installed by:
```bash
sudo make install
```

+-->

## Quick Run

@@ -318,16 +304,16 @@ Query OK, 2 row(s) in set (0.001700s)
## Official Connectors

-TDengine provides abundant developing tools for users to develop on TDengine. include C/C++、Java、Python、Go、Node.js、C# 、RESTful ,Follow the links below to find your desired connectors and relevant documentation.
+TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.

-- [Java](https://docs.taosdata.com/reference/connector/java/)
+- [Java](https://docs.tdengine.com/reference/connector/java/)
-- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
+- [C/C++](https://docs.tdengine.com/reference/connector/cpp/)
-- [Python](https://docs.taosdata.com/reference/connector/python/)
+- [Python](https://docs.tdengine.com/reference/connector/python/)
-- [Go](https://docs.taosdata.com/reference/connector/go/)
+- [Go](https://docs.tdengine.com/reference/connector/go/)
-- [Node.js](https://docs.taosdata.com/reference/connector/node/)
+- [Node.js](https://docs.tdengine.com/reference/connector/node/)
-- [Rust](https://docs.taosdata.com/reference/connector/rust/)
+- [Rust](https://docs.tdengine.com/reference/connector/rust/)
-- [C#](https://docs.taosdata.com/reference/connector/csharp/)
+- [C#](https://docs.tdengine.com/reference/connector/csharp/)
-- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
+- [RESTful API](https://docs.tdengine.com/reference/rest-api/)

# Contribute to TDengine
@@ -2,8 +2,6 @@ cmake_minimum_required(VERSION 3.0)
set(CMAKE_VERBOSE_MAKEFILE OFF)

-SET(BUILD_SHARED_LIBS "OFF")

#set output directory
SET(LIBRARY_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/lib)
SET(EXECUTABLE_OUTPUT_PATH ${PROJECT_BINARY_DIR}/build/bin)

@@ -81,7 +79,7 @@ ENDIF ()
IF (TD_WINDOWS)
  MESSAGE("${Yellow} set compiler flag for Windows! ${ColourReset}")
-  SET(COMMON_FLAGS "/w /D_WIN32 /DWIN32 /Zi")
+  SET(COMMON_FLAGS "/w /D_WIN32 /DWIN32 /Zi /MTd")
  SET(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} /MANIFEST:NO")
  # IF (MSVC AND (MSVC_VERSION GREATER_EQUAL 1900))
  #   SET(COMMON_FLAGS "${COMMON_FLAGS} /Wv:18")

@@ -92,11 +90,20 @@ IF (TD_WINDOWS)
  IF (CMAKE_DEPFILE_FLAGS_CXX)
    SET(CMAKE_DEPFILE_FLAGS_CXX "")
  ENDIF ()
+  IF (CMAKE_C_FLAGS_DEBUG)
+    SET(CMAKE_C_FLAGS_DEBUG "" CACHE STRING "" FORCE)
+  ENDIF ()
+  IF (CMAKE_CXX_FLAGS_DEBUG)
+    SET(CMAKE_CXX_FLAGS_DEBUG "" CACHE STRING "" FORCE)
+  ENDIF ()

  SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${COMMON_FLAGS}")
  SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMMON_FLAGS}")

ELSE ()
+  IF (${TD_DARWIN})
+    set(CMAKE_MACOSX_RPATH 0)
+  ENDIF ()
  IF (${COVER} MATCHES "true")
    MESSAGE(STATUS "Test coverage mode, add extra flags")
    SET(GCC_COVERAGE_COMPILE_FLAGS "-fprofile-arcs -ftest-coverage")
@@ -1,38 +1,24 @@
-IF (EXISTS /var/lib/taos/dnode/dnodeCfg.json)
-  INSTALL(CODE "MESSAGE(\"The default data directory /var/lib/taos contains old data of tdengine 2.x, please clear it before installing!\")")
-ELSEIF (EXISTS C:/TDengine/data/dnode/dnodeCfg.json)
-  INSTALL(CODE "MESSAGE(\"The default data directory C:/TDengine/data contains old data of tdengine 2.x, please clear it before installing!\")")
-ELSEIF (TD_LINUX)
+SET(PREPARE_ENV_CMD "prepare_env_cmd")
+SET(PREPARE_ENV_TARGET "prepare_env_target")
+ADD_CUSTOM_COMMAND(OUTPUT ${PREPARE_ENV_CMD}
+  POST_BUILD
+  COMMAND echo "make test directory"
+  DEPENDS taosd
+  COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/cfg/
+  COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/log/
+  COMMAND ${CMAKE_COMMAND} -E make_directory ${TD_TESTS_OUTPUT_DIR}/data/
+  COMMAND ${CMAKE_COMMAND} -E echo dataDir ${TD_TESTS_OUTPUT_DIR}/data > ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+  COMMAND ${CMAKE_COMMAND} -E echo logDir ${TD_TESTS_OUTPUT_DIR}/log >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+  COMMAND ${CMAKE_COMMAND} -E echo charset UTF-8 >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+  COMMAND ${CMAKE_COMMAND} -E echo monitor 0 >> ${TD_TESTS_OUTPUT_DIR}/cfg/taos.cfg
+  COMMENT "prepare taosd environment")
+ADD_CUSTOM_TARGET(${PREPARE_ENV_TARGET} ALL WORKING_DIRECTORY ${TD_EXECUTABLE_OUTPUT_PATH} DEPENDS ${PREPARE_ENV_CMD})
+IF (TD_LINUX)
  SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.sh")
  INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")
  INSTALL(CODE "execute_process(COMMAND bash ${TD_MAKE_INSTALL_SH} ${TD_SOURCE_DIR} ${PROJECT_BINARY_DIR} Linux ${TD_VER_NUMBER})")
ELSEIF (TD_WINDOWS)
-  SET(CMAKE_INSTALL_PREFIX C:/TDengine)
-  # INSTALL(DIRECTORY ${TD_SOURCE_DIR}/src/connector/go DESTINATION connector)
-  # INSTALL(DIRECTORY ${TD_SOURCE_DIR}/src/connector/nodejs DESTINATION connector)
-  # INSTALL(DIRECTORY ${TD_SOURCE_DIR}/src/connector/python DESTINATION connector)
-  # INSTALL(DIRECTORY ${TD_SOURCE_DIR}/src/connector/C\# DESTINATION connector)
-  # INSTALL(DIRECTORY ${TD_SOURCE_DIR}/examples DESTINATION .)
-  INSTALL(CODE "IF (NOT EXISTS ${CMAKE_INSTALL_PREFIX}/cfg/taos.cfg)
-    execute_process(COMMAND ${CMAKE_COMMAND} -E copy ${TD_SOURCE_DIR}/packaging/cfg/taos.cfg ${CMAKE_INSTALL_PREFIX}/cfg/taos.cfg)
-  ENDIF ()")
-  INSTALL(FILES ${TD_SOURCE_DIR}/include/client/taos.h DESTINATION include)
-  INSTALL(FILES ${TD_SOURCE_DIR}/include/util/taoserror.h DESTINATION include)
-  INSTALL(FILES ${TD_SOURCE_DIR}/include/libs/function/taosudf.h DESTINATION include)
-  INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos.lib DESTINATION driver)
-  INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos_static.lib DESTINATION driver)
-  INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos.dll DESTINATION driver)
-  INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taos.exe DESTINATION .)
-  INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taosd.exe DESTINATION .)
-  INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/udfd.exe DESTINATION .)
-  IF (BUILD_TOOLS)
-    INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taosBenchmark.exe DESTINATION .)
-  ENDIF ()
-  IF (TD_MVN_INSTALLED)
-    INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.38-dist.jar DESTINATION connector/jdbc)
-  ENDIF ()
  SET(TD_MAKE_INSTALL_SH "${TD_SOURCE_DIR}/packaging/tools/make_install.bat")
  INSTALL(CODE "MESSAGE(\"make install script: ${TD_MAKE_INSTALL_SH}\")")
  INSTALL(CODE "execute_process(COMMAND ${TD_MAKE_INSTALL_SH} :needAdmin ${TD_SOURCE_DIR} ${PROJECT_BINARY_DIR} Windows ${TD_VER_NUMBER})")
@@ -90,6 +90,12 @@ ELSE ()
ENDIF ()
ENDIF ()

+option(
+  BUILD_SHARED_LIBS
+  ""
+  OFF
+)

option(
  RUST_BINDINGS
  "If build with rust-bindings"
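The new option() call registers BUILD_SHARED_LIBS as a cache variable defaulting to OFF, replacing the hard-coded SET removed from the top-level CMakeLists. As with any CMake cache option it can be overridden at configure time; whether the rest of the build actually supports shared libraries is a separate question, so treat the override below as an illustration only.

```bash
# Configure with the default (static) setting ...
cmake ..
# ... or explicitly override the cache option if shared libraries are wanted.
cmake .. -DBUILD_SHARED_LIBS=ON
```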
@@ -46,7 +46,7 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
MESSAGE("Current system processor is ${CMAKE_SYSTEM_PROCESSOR}.")
IF (${CMAKE_SYSTEM_PROCESSOR} MATCHES "arm64" OR ${CMAKE_SYSTEM_PROCESSOR} MATCHES "x86_64")
-  MESSAGE("Current system arch is arm64")
+  MESSAGE("Current system arch is 64")
  SET(TD_DARWIN_64 TRUE)
  ADD_DEFINITIONS("-D_TD_DARWIN_64")
ENDIF ()
@@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
  SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
-  SET(TD_VER_NUMBER "3.0.0.0")
+  SET(TD_VER_NUMBER "3.0.0.1")
ENDIF ()

IF (DEFINED VERCOMPATIBLE)
@@ -2,7 +2,7 @@
# libuv
ExternalProject_Add(libuv
  GIT_REPOSITORY https://github.com/libuv/libuv.git
-  GIT_TAG v1.42.0
+  GIT_TAG v1.44.2
  SOURCE_DIR "${TD_CONTRIB_DIR}/libuv"
  BINARY_DIR "${TD_CONTRIB_DIR}/libuv"
  CONFIGURE_COMMAND ""
@@ -1,12 +0,0 @@
-# rust-bindings
-ExternalProject_Add(rust-bindings
-  GIT_REPOSITORY https://github.com/songtianyi/tdengine-rust-bindings.git
-  GIT_TAG 7ed7a97
-  SOURCE_DIR "${TD_SOURCE_DIR}/examples/rust"
-  BINARY_DIR "${TD_SOURCE_DIR}/examples/rust"
-  CONFIGURE_COMMAND ""
-  BUILD_COMMAND ""
-  INSTALL_COMMAND ""
-  TEST_COMMAND ""
-)
@@ -2,7 +2,7 @@
# taosadapter
ExternalProject_Add(taosadapter
  GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
-  GIT_TAG 3d21433
+  GIT_TAG abed566
  SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
  BINARY_DIR ""
  #BUILD_IN_SOURCE TRUE
@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
  GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-  GIT_TAG 53a0103
+  GIT_TAG 9cb965f
  SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
  BINARY_DIR ""
  #BUILD_IN_SOURCE TRUE
@@ -105,11 +105,6 @@ if(${BUILD_WITH_SQLITE})
  cat("${TD_SUPPORT_DIR}/sqlite_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif(${BUILD_WITH_SQLITE})

-# rust-bindings
-if(${RUST_BINDINGS})
-  cat("${TD_SUPPORT_DIR}/rust-bindings_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
-endif(${RUST_BINDINGS})

# lucene
if(${BUILD_WITH_LUCENE})
  cat("${TD_SUPPORT_DIR}/lucene_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})

@@ -141,24 +136,6 @@ execute_process(COMMAND "${CMAKE_COMMAND}" -G "${CMAKE_GENERATOR}" .
execute_process(COMMAND "${CMAKE_COMMAND}" --build .
  WORKING_DIRECTORY "${TD_CONTRIB_DIR}/deps-download")

-# clear submodule
-execute_process(COMMAND git submodule deinit -f tools/taos-tools
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")
-execute_process(COMMAND git rm --cached tools/taos-tools
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")
-execute_process(COMMAND git submodule deinit -f tools/taosadapter
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")
-execute_process(COMMAND git rm --cached tools/taosadapter
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")
-execute_process(COMMAND git submodule deinit -f tools/taosws-rs
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")
-execute_process(COMMAND git rm --cached tools/taosws-rs
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")
-execute_process(COMMAND git submodule deinit -f examples/rust
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")
-execute_process(COMMAND git rm --cached examples/rust
-  WORKING_DIRECTORY "${TD_SOURCE_DIR}")

# ================================================================================================
# Build
# ================================================================================================

@@ -273,7 +250,7 @@ endif(${BUILD_WITH_NURAFT})
# pthread
if(${BUILD_PTHREAD})
-  set(CMAKE_BUILD_TYPE release)
+  set(CMAKE_BUILD_TYPE debug)
  add_definitions(-DPTW32_STATIC_LIB)
  add_subdirectory(pthread EXCLUDE_FROM_ALL)
  set_target_properties(libpthreadVC3 PROPERTIES OUTPUT_NAME pthread)

@@ -354,9 +331,11 @@ endif(${BUILD_WITH_TRAFT})
# LIBUV
if(${BUILD_WITH_UV})
-  if (NOT ${CMAKE_SYSTEM_NAME} MATCHES "Windows")
-    MESSAGE("Windows need set no-sign-compare")
-    add_compile_options(-Wno-sign-compare)
+  if (TD_WINDOWS)
+    # There is no GetHostNameW function on win7.
+    file(READ "libuv/src/win/util.c" LIBUV_WIN_UTIL_CONTENT)
+    string(REPLACE "if (GetHostNameW(buf, UV_MAXHOSTNAMESIZE" "DWORD nSize = UV_MAXHOSTNAMESIZE;\n  if (GetComputerNameW(buf, &nSize" LIBUV_WIN_UTIL_CONTENT "${LIBUV_WIN_UTIL_CONTENT}")
+    file(WRITE "libuv/src/win/util.c" "${LIBUV_WIN_UTIL_CONTENT}")
  endif ()
  add_subdirectory(libuv EXCLUDE_FROM_ALL)
endif(${BUILD_WITH_UV})

Binary image file added (not shown): 344 KiB
Binary image file added (not shown): 266 KiB
Binary image file added (not shown): 258 KiB
@@ -4,24 +4,24 @@ sidebar_label: Documentation Home
slug: /
---

-TDengine is a [high-performance](https://tdengine.com/fast), [scalable](https://tdengine.com/scalable) time series database with [SQL support](https://tdengine.com/sql-support). This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design and other topics. It’s written mainly for architects, developers and system administrators.
+TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) time-series database optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic as well as novel concepts in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It’s written mainly for architects, developers, and system administrators.

To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.

-TDengine greatly improves the efficiency of data ingestion, querying and storage by exploiting the characteristics of time series data, introducing the novel concepts of "one table for one data collection point" and "super table", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read [“Concepts”](./concept) thoroughly.
+TDengine greatly improves the efficiency of data ingestion, querying, and storage by exploiting the characteristics of time series data, introducing the novel concepts of "one table for one data collection point" and "super table", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read [Concepts](./concept) thoroughly.

-If you are a developer, please read the [“Developer Guide”](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.
+If you are a developer, please read the [Developer Guide](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.

-We live in the era of big data, and scale-up is unable to meet the growing needs of business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster please refer to ["cluster"](./cluster).
+We live in the era of big data, and scale-up is unable to meet the growing needs of the business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but it also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster, please refer to [Cluster Deployment](../deployment).

-TDengine uses ubiquitious SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll up, interpolation and time weighted average, among many others. The ["SQL Reference"](./taos-sql) chapter describes the SQL syntax in detail, and lists the various supported commands and functions.
+TDengine uses ubiquitous SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll-up, interpolation, and time-weighted average, among many others. The [SQL Reference](./taos-sql) chapter describes the SQL syntax in detail and lists the various supported commands and functions.

-If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer to, and thoroughly read the ["Administration"](./operation) section.
+If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer to, and thoroughly read, the [Administration](./operation) section.

-If you want to know more about TDengine tools, the REST API, and connectors for various programming languages, please see the ["Reference"](./reference) chapter.
+If you want to know more about TDengine tools, the REST API, and connectors for various programming languages, please see the [Reference](./reference) chapter.

-If you are very interested in the internal design of TDengine, please read the chapter ["Inside TDengine”](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.
+If you are very interested in the internal design of TDengine, please read the chapter [Inside TDengine](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.

-TDengine is an open source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation, or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.
+TDengine is an open-source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.

-Together, we make a difference.
+Together, we make a difference!
@ -3,7 +3,7 @@ title: Introduction
|
||||||
toc_max_heading_level: 2
|
toc_max_heading_level: 2
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
|
TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/stream), [data subscription](../develop/tmq) and other functionalities to reduce the system complexity and cost of development and operation.
|
||||||
|
|
||||||
This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
|
This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
|
||||||
|
|
||||||
|
@ -11,52 +11,69 @@ This section introduces the major features, competitive advantages, typical use-
|
||||||
|
|
||||||
The major features are listed below:
|
The major features are listed below:
|
||||||
|
|
||||||
1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line),[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](/develop/insert-data/opentsdb-json) among others.
|
1. Insert data
|
||||||
2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf),[Prometheus](/third-party/prometheus),[StatsD](/third-party/statsd),[collectd](/third-party/collectd),[icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
|
- Supports [using SQL to insert](../develop/insert-data/sql-writing).
|
||||||
3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
|
- Supports [schemaless writing](../reference/schemaless/) just like NoSQL databases. It also supports standard protocols like [InfluxDB Line](../develop/insert-data/influxdb-line), [OpenTSDB Telnet](../develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](../develop/insert-data/opentsdb-json) among others.
|
||||||
4. Support for [user defined functions](/develop/udf).
|
- Supports seamless integration with third-party tools like [Telegraf](../third-party/telegraf/), [Prometheus](../third-party/prometheus/), [collectd](../third-party/collectd/), [StatsD](../third-party/statsd/), [TCollector](../third-party/tcollector/), [EMQX](../third-party/emq-broker), [HiveMQ](../third-party/hive-mq-broker), and [Icinga2](../third-party/icinga2/), they can write data into TDengine with simple configuration and without a single line of code.
|
||||||
5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
|
2. Query data
|
||||||
6. Support for [continuous query](/develop/continuous-query).
|
- Supports standard [SQL](../taos-sql/), including nested query.
|
||||||
7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
|
- Supports [time series specific functions](../taos-sql/function/#time-series-extensions) and [time series specific queries](../taos-sql/distinguished), like downsampling, interpolation, cumulated sum, time weighted average, state window, session window and many others.
|
||||||
8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
|
- Supports [User Defined Functions (UDF)](../taos-sql/udf).
|
||||||
9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
|
3. [Caching](../develop/cache/): TDengine always saves the last data point in cache, so Redis is not needed for time-series data processing.
|
||||||
10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
|
4. [Stream Processing](../develop/stream/): TDengine supports not only continuous queries but also event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
|
||||||
11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
|
5. [Data Subscription](../develop/tmq/): Applications can subscribe to a table or a set of tables. The API is similar to Kafka's, but you can specify filter conditions; see the sketch after this list.
|
||||||
12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
|
6. Visualization
|
||||||
13. Provides a [REST API](/reference/rest-api/).
|
- Supports seamless integration with [Grafana](../third-party/grafana/) for visualization.
|
||||||
14. Supports seamless integration with [Grafana](/third-party/grafana) for visualization.
|
- Supports seamless integration with Google Data Studio.
|
||||||
15. Supports seamless integration with Google Data Studio.
|
7. Cluster
|
||||||
|
- Supports [cluster](../deployment/) with the capability of increasing processing power by adding more nodes.
|
||||||
|
- Supports [deployment on Kubernetes](../deployment/k8s/).
|
||||||
|
- Supports high availability via data replication.
|
||||||
|
8. Administration
|
||||||
|
- Provides [monitoring](../operation/monitor) on running instances of TDengine.
|
||||||
|
- Provides many ways to [import](../operation/import) and [export](../operation/export) data.
|
||||||
|
9. Tools
|
||||||
|
- Provides an interactive [Command-line Interface (CLI)](../reference/taos-shell) for management, maintenance and ad-hoc queries.
|
||||||
|
- Provides a tool [taosBenchmark](../reference/taosbenchmark/) for testing the performance of TDengine.
|
||||||
|
10. Programming
|
||||||
|
- Provides [connectors](../reference/connector/) for [C/C++](../reference/connector/cpp), [Java](../reference/connector/java), [Python](../reference/connector/python), [Go](../reference/connector/go), [Rust](../reference/connector/rust), [Node.js](../reference/connector/node) and other programming languages.
|
||||||
|
- Provides a [REST API](../reference/rest-api/).
|
||||||
|
|
||||||
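As a brief illustration of the data subscription item above, the sketch below creates a topic from a query; consumers can then subscribe to the topic much as they would a Kafka topic, with the `WHERE` clause acting as the filter condition. The table name and threshold are illustrative assumptions, not part of the documented example.

```sql
-- Sketch only: create a topic for data subscription (TMQ).
-- Assumes a super table `meters` with ts, current and voltage columns exists.
create topic topic_high_voltage as
  select ts, current, voltage from meters where voltage > 220;
```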
For more details on features, please read through the entire documentation.
|
For more details on features, please read through the entire documentation.
|
||||||
|
|
||||||
## Competitive Advantages
|
## Competitive Advantages
|
||||||
|
|
||||||
Time-series data is structured, not transactional, and is rarely deleted or updated. TDengine makes full use of [these characteristics of time series data](https://tdengine.com/2019/07/09/86.html) to build its own innovative storage engine and computing engine to differentiate itself from other time series databases, with the following advantages.
|
By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other time series databases, with the following advantages.
|
||||||
|
|
||||||
- **[High Performance](https://tdengine.com/fast)**: With an innovatively designed and purpose-built storage engine, TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage costs and compute costs.
|
- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database that solves the high-cardinality issue, supporting billions of data collection points while outperforming other time-series databases in data ingestion, querying, and data compression.
|
||||||
|
|
||||||
- **[Scalable](https://tdengine.com/scalable)**: TDengine provides out-of-box scalability and high-availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
|
- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
|
||||||
|
|
||||||
- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to better handle time-series. Keeping NoSQL developers in mind, TDengine also supports convenient and flexible, schemaless data ingestion.
|
- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment, and full observability, TDengine is a cloud-native time-series database that can be deployed on public, private, or hybrid clouds.
|
||||||
|
|
||||||
- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
|
- **[Ease of Use](https://tdengine.com/tdengine/easy-time-series-data-platform/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, a simplified solution, and seamless integrations with third-party tools. For data users, it gives easy data access.
|
||||||
|
|
||||||
- **Seamless Integration**: Without a single line of code, TDengine provide seamless, configurable integration with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, etc. More third-party tools are being integrated.
|
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
|
||||||
|
|
||||||
- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
|
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine’s core modules, including its cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub and has an active developer community, with over 140k running instances worldwide.
|
||||||
|
|
||||||
- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, and a REST API, there are zero learning costs.
|
With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced.
|
||||||
|
|
||||||
- **Interactive Console**: TDengine provides convenient console access to the database, through a CLI, to run ad hoc queries, maintain the database, or manage the cluster, without any programming.
|
1. With its superior performance, computing and storage resource requirements are reduced significantly.
|
||||||
|
2. With SQL support, it can be seamlessly integrated with many third-party tools, and learning and migration costs are reduced significantly.
|
||||||
With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced. 1: With its superior performance, the computing and storage resources are reduced significantly 2: With SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly 3: With its simple architecture and zero management, the operation and maintenance costs are reduced.
|
3. With its simplified solution and nearly zero management, the operation and maintenance costs are reduced significantly.
|
||||||
|
|
||||||
## Technical Ecosystem
|
## Technical Ecosystem
|
||||||
|
|
||||||
This is how TDengine would be situated, in a typical time-series data processing platform:
|
This is how TDengine would be situated in a typical time-series data processing platform:
|
||||||
|
|
||||||
|
<figure>
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
<center>Figure 1. TDengine Technical Ecosystem</center>
|
<center><figcaption>Figure 1. TDengine Technical Ecosystem</figcaption></center>
|
||||||
|
</figure>
|
||||||
|
|
||||||
On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
|
On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
|
||||||
|
|
||||||
|
@ -67,7 +84,7 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
|
||||||
### Characteristics and Requirements of Data Sources
|
### Characteristics and Requirements of Data Sources
|
||||||
|
|
||||||
| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
||||||
| -------------------------------------------------------- | ------------------ | ----------------------- | ------------------- | :----------------------------------------------------------- |
|
| ------------------------------------------------ | ------------------ | ----------------------- | ------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
|
||||||
| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry. |
|
| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry. |
|
||||||
| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
|
| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
|
||||||
| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
|
| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
|
||||||
|
@ -75,7 +92,7 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
|
||||||
### System Architecture Requirements
|
### System Architecture Requirements
|
||||||
|
|
||||||
| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
||||||
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
|
| ----------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. |
|
| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. |
|
||||||
| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
|
| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
|
||||||
| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
|
| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
|
||||||
|
@ -83,14 +100,14 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
|
||||||
### System Function Requirements
|
### System Function Requirements
|
||||||
|
|
||||||
| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
||||||
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
|
| -------------------------------------------- | ------------------ | ----------------------- | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||||
| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level. |
|
| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level. |
|
||||||
| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
|
| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
|
||||||
|
|
||||||
### System Performance Requirements
|
### System Performance Requirements
|
||||||
|
|
||||||
| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
||||||
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
|
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------- |
|
||||||
| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
|
| Very large total processing capacity | | | √ | TDengine’s cluster functions can easily improve processing capacity via multi-server coordination. |
|
||||||
| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
|
| Extremely high-speed data processing | | | √ | TDengine’s storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
|
||||||
| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
|
| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
|
||||||
|
@ -98,16 +115,15 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
|
||||||
### System Maintenance Requirements
|
### System Maintenance Requirements
|
||||||
|
|
||||||
| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
|
||||||
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
|
| --------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
|
||||||
| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
|
| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
|
||||||
| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the Taos shell for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs.|
|
| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the TDengine CLI for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs. |
|
||||||
| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine. |
|
| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine. |
|
||||||
|
|
||||||
## Comparison with other databases
|
## Comparison with other databases
|
||||||
|
|
||||||
- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/2022/02/23/4975.html)
|
- [Writing Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/23/4975.html)
|
||||||
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
|
- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/2022/02/24/5120.html)
|
||||||
- [TDengine vs InfluxDB、OpenTSDB、Cassandra、MySQL、ClickHouse](https://www.tdengine.com/downloads/TDengine_Testing_Report_en.pdf)
|
|
||||||
- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
|
- [TDengine vs OpenTSDB](https://tdengine.com/2019/09/12/710.html)
|
||||||
- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
|
- [TDengine vs Cassandra](https://tdengine.com/2019/09/12/708.html)
|
||||||
- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
|
- [TDengine vs InfluxDB](https://tdengine.com/2019/09/12/706.html)
|
||||||
|
|
|
@ -2,7 +2,7 @@
|
||||||
title: Concepts
|
title: Concepts
|
||||||
---
|
---
|
||||||
|
|
||||||
In order to explain the basic concepts and provide some sample code, the TDengine documentation smart meters as a typical time series use case. We assume the following: 1. Each smart meter collects three metrics i.e. current, voltage, and phase 2. There are multiple smart meters, and 3. Each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
|
In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time-series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; 3. Each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
|
||||||
|
|
||||||
<div className="center-table">
|
<div className="center-table">
|
||||||
<table>
|
<table>
|
||||||
|
@ -104,15 +104,15 @@ Each row contains the device ID, time stamp, collected metrics (current, voltage
|
||||||
|
|
||||||
## Metric
|
## Metric
|
||||||
|
|
||||||
Metric refers to the physical quantity collected by sensors, equipment or other types of data collection devices, such as current, voltage, temperature, pressure, GPS position, etc., which change with time, and the data type can be integer, float, Boolean, or strings. As time goes by, the amount of collected metric data stored increases.
|
Metric refers to a physical quantity collected by sensors, equipment, or other types of data collection devices, such as current, voltage, temperature, pressure, or GPS position, which changes over time and whose data type can be integer, float, Boolean, or string. As time goes by, the amount of collected metric data stored increases. In the smart meters example, current, voltage, and phase are the metrics.
|
||||||
|
|
||||||
## Label/Tag
|
## Label/Tag
|
||||||
|
|
||||||
Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time.
|
Label/Tag refers to the static properties of sensors, equipment or other types of data collection devices, which do not change with time, such as device model, color, fixed location of the device, etc. The data type can be any type. Although static, TDengine allows users to add, delete or update tag values at any time. Unlike the collected metric data, the amount of tag data stored does not change over time. In the meters example, `location` and `groupid` are the tags.
|
||||||
|
|
||||||
## Data Collection Point
|
## Data Collection Point
|
||||||
|
|
||||||
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
|
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points. In the smart meters example, d1001, d1002, d1003, and d1004 are the data collection points.
|
||||||
|
|
||||||
## Table
|
## Table
|
||||||
|
|
||||||
|
@ -129,17 +129,20 @@ If the metric data of multiple DCPs are traditionally written into a single tabl
|
||||||
|
|
||||||
TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
|
TDengine suggests using the DCP ID as the table name (like d1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, and phase above). Each metric has a corresponding column in the table, and the data type for a column can be int, float, string, and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index and does not build indexes on the stored metrics. Column-wise storage is used.
|
||||||
|
|
||||||
|
Complex devices, such as connected cars, may have multiple DCPs. In this case, multiple tables are created for a single device, one table per DCP.
|
||||||
|
|
||||||
## Super Table (STable)
|
## Super Table (STable)
|
||||||
|
|
||||||
The design of one table for one data collection point will require a huge number of tables, which is difficult to manage. Furthermore, applications often need to take aggregation operations among DCPs, thus aggregation operations will become complicated. To support aggregation over multiple tables efficiently, the STable(Super Table) concept is introduced by TDengine.
|
The design of one table for one data collection point requires a huge number of tables, which is difficult to manage. Furthermore, applications often need to perform aggregation operations among DCPs, so aggregation becomes complicated. To support efficient aggregation over multiple tables, TDengine introduces the STable (super table) concept.
|
||||||
|
|
||||||
STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
|
STable is a template for a type of data collection point. A STable contains a set of data collection points (tables) that have the same schema or data structure, but with different static attributes (tags). To describe a STable, in addition to defining the table structure of the metrics, it is also necessary to define the schema of its tags. The data type of tags can be int, float, string, and there can be multiple tags, which can be added, deleted, or modified afterward. If the whole system has N different types of data collection points, N STables need to be established.
|
||||||
|
|
||||||
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**.
|
In the design of TDengine, **a table is used to represent a specific data collection point, and STable is used to represent a set of data collection points of the same type**. In the smart meters example, we can create a super table named `meters`.
|
||||||
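As a minimal sketch, the `meters` super table for the smart meters example could be defined as follows; the column and tag types are illustrative assumptions.

```sql
-- Sketch only: super table for the smart meters example.
-- The first column must be a timestamp; tags hold the static attributes.
create stable meters (
  ts      timestamp,  -- collection time
  current float,      -- metric: current
  voltage int,        -- metric: voltage
  phase   float       -- metric: phase
) tags (
  location binary(64), -- static tag: location of the meter
  groupid  int         -- static tag: group ID
);
```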
|
|
||||||
## Subtable
|
## Subtable
|
||||||
|
|
||||||
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. ** The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is:
|
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called a subtable** in TDengine. The differences between a regular table and a subtable are:
|
||||||
|
|
||||||
1. Subtable is a table, all SQL commands applied on a regular table can be applied on subtable.
|
1. A subtable is a table; all SQL commands that can be applied to a regular table can be applied to a subtable.
|
||||||
2. Subtable is a table with extensions, it has static tags (labels), and these tags can be added, deleted, and updated after it is created. But a regular table does not have tags.
|
2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated after the subtable is created. A regular table does not have tags.
|
||||||
3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
|
3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
|
||||||
|
@ -151,13 +154,15 @@ The relationship between a STable and the subtables created based on this STable
|
||||||
2. The schema of metrics or labels cannot be adjusted through subtables, and it can only be changed via STable. Changes to the schema of a STable takes effect immediately for all associated subtables.
|
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
|
||||||
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
|
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
|
||||||
|
|
||||||
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which reduces the number of data sets to be scanned which in turn greatly improves the performance of data aggregation across multiple DCPs.
|
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine treats the data in all its subtables as a whole data set. TDengine first finds the subtables that meet the tag filter conditions, then scans the time-series data of these subtables to perform the aggregation. This reduces the number of data sets to be scanned, which in turn greatly improves the performance of data aggregation across multiple DCPs. In essence, querying a supertable is a very efficient aggregate query on multiple DCPs of the same type.
|
||||||
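For example, an aggregation across all meters in one location could look like the sketch below; the tag value and window size are illustrative assumptions.

```sql
-- Sketch only: aggregate over the super table. TDengine first filters subtables
-- by the location tag, then aggregates the time-series data of the matches.
select avg(current), max(voltage)
from meters
where location = 'California.SanFrancisco'
interval(1h);
```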
|
|
||||||
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
|
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP. In the smart meters example, we can create subtables like d1001, d1002, d1003, and d1004 under the super table `meters`, as sketched below.
|
||||||
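A sketch of creating one such subtable from the super table, with illustrative tag values:

```sql
-- Sketch only: create subtable d1001 using the meters super table as a template.
create table d1001 using meters tags ('California.SanFrancisco', 2);
```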
|
|
||||||
|
To better understand the data model of metrics, tags, super tables, and subtables, please refer to the diagram below, which demonstrates the data model of the smart meters example. 
|
||||||
|
|
||||||
## Database
|
## Database
|
||||||
|
|
||||||
A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. Different types of DCPs often have different data characteristics, including the frequency of data collection, data retention time, the number of replications, the size of data blocks, whether data is allowed to be updated, and so on. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
|
A database is a collection of tables. TDengine allows a running instance to have multiple databases, and each database can be configured with different storage policies. The [characteristics of time-series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/) from different data collection points may differ. These characteristics include collection frequency and retention policy, among others, and they determine how you create and configure a database. For example, the number of days to keep data, the number of replicas, the data block size, whether updates are allowed, and other configurable parameters are determined by the characteristics of your data and your business requirements. In order for TDengine to work with maximum efficiency in various scenarios, TDengine recommends that STables with different data characteristics be created in different databases.
|
||||||
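As a sketch, a database can be created with parameters that match the data characteristics; the database name and parameter values below are illustrative assumptions rather than recommendations.

```sql
-- Sketch only: create a database whose storage parameters reflect the data characteristics.
-- keep     : number of days to retain data
-- duration : number of days of data covered by each data file
-- replica  : number of replicas
create database power keep 365 duration 10 replica 1;
```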
|
|
||||||
In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
|
In a database, there can be one or more STables, but a STable belongs to only one database. All tables owned by a STable are stored in only one database.
|
||||||
|
|
||||||
|
|
Binary file not shown.
After Width: | Height: | Size: 33 KiB |
|
@ -1,103 +1,99 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Docker
|
sidebar_label: Docker
|
||||||
title: 通过 Docker 快速体验 TDengine
|
title: Quick Install on Docker
|
||||||
---
|
---
|
||||||
:::info
|
|
||||||
如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
|
|
||||||
:::
|
|
||||||
|
|
||||||
本节首先介绍如何通过 Docker 快速体验 TDengine,然后介绍如何在 Docker 环境下体验 TDengine 的写入和查询功能。
|
This document describes how to install TDengine in a Docker container and perform queries and inserts. To get started with TDengine in a non-containerized environment, see [Quick Install](../../get-started/package). If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
|
||||||
|
|
||||||
## 启动 TDengine
|
## Run TDengine
|
||||||
|
|
||||||
如果已经安装了 docker, 只需执行下面的命令。
|
If Docker is already installed on your computer, run the following command:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
docker run -d -p 6030:6030 -p 6041/6041 -p 6043-6049/6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
|
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
|
||||||
```
|
```
|
||||||
|
|
||||||
注意:TDengine 3.0 服务端仅使用 6030 TCP 端口。6041 为 taosAdapter 所使用提供 REST 服务端口。6043-6049 为 taosAdapter 提供第三方应用接入所使用端口,可根据需要选择是否打开。
|
Note that TDengine Server uses TCP port 6030. Port 6041 is used by taosAdapter for the REST API service. Ports 6043 through 6049 are used by taosAdapter for other connectors. You can open these ports as needed.
|
||||||
|
|
||||||
确定该容器已经启动并且在正常运行
|
Run the following command to ensure that your container is running:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
docker ps
|
docker ps
|
||||||
```
|
```
|
||||||
|
|
||||||
进入该容器并执行 bash
|
Enter the container and open the bash shell:
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
docker exec -it <container name> bash
|
docker exec -it <container name> bash
|
||||||
```
|
```
|
||||||
|
|
||||||
然后就可以执行相关的 Linux 命令操作和访问 TDengine
|
You can now access TDengine or run other Linux commands.
|
||||||
|
|
||||||
## 运行 TDengine CLI
|
Note: For information about installing Docker, see the [official documentation](https://docs.docker.com/get-docker/).
|
||||||
|
|
||||||
进入容器,执行 taos
|
## Insert Data into TDengine
|
||||||
|
|
||||||
```
|
You can use the `taosBenchmark` tool included with TDengine to write test data into your deployment.
|
||||||
$ taos
|
|
||||||
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
|
|
||||||
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
|
|
||||||
|
|
||||||
Server is Community Edition.
|
To do so, run the following command:
|
||||||
|
|
||||||
taos>
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
## 写入数据
|
|
||||||
|
|
||||||
可以使用 TDengine 的自带工具 taosBenchmark 快速体验 TDengine 的写入。
|
|
||||||
|
|
||||||
进入容器,启动 taosBenchmark:
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ taosBenchmark
|
$ taosBenchmark
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
该命令将在数据库 test 下面自动创建一张超级表 meters,该超级表下有 1 万张表,表名为 "d0" 到 "d9999",每张表有 1 万条记录,每条记录有 (ts, current, voltage, phase) 四个字段,时间戳从 "2017-07-14 10:40:00 000" 到 "2017-07-14 10:40:09 999",每张表带有标签 location 和 groupId,groupId 被设置为 1 到 10, location 被设置为 "San Francisco" 或者 "Los Angeles"等城市名称。
|
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
|
||||||
|
|
||||||
这条命令很快完成 1 亿条记录的插入。具体时间取决于硬件性能。
|
The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required depends on the hardware specifications of the local system.
|
||||||
|
|
||||||
taosBenchmark 命令本身带有很多选项,配置表的数目、记录条数等等,您可以设置不同参数进行体验,请执行 `taosBenchmark --help` 详细列出。taosBenchmark 详细使用方法请参照 [taosBenchmark 参考手册](../../reference/taosbenchmark)。
|
You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
|
||||||
|
|
||||||
## 体验查询
|
## Open the TDengine CLI
|
||||||
|
|
||||||
使用上述 taosBenchmark 插入数据后,可以在 TDengine CLI 输入查询命令,体验查询速度。。
|
In the container, run the following command to open the TDengine CLI:
|
||||||
|
|
||||||
查询超级表下记录总条数:
|
```
|
||||||
|
$ taos
|
||||||
|
|
||||||
|
taos>
|
||||||
|
|
||||||
```sql
|
|
||||||
taos> select count(*) from test.meters;
|
|
||||||
```
|
```
|
||||||
|
|
||||||
查询 1 亿条记录的平均值、最大值、最小值等:
|
## Query Data in TDengine
|
||||||
|
|
||||||
|
After using taosBenchmark to create your test deployment, you can run queries in the TDengine CLI to test its performance. For example:
|
||||||
|
|
||||||
|
From the TDengine CLI, query the number of rows in the `meters` supertable:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select avg(current), max(voltage), min(phase) from test.meters;
|
select count(*) from test.meters;
|
||||||
```
|
```
|
||||||
|
|
||||||
查询 location="San Francisco" 的记录总条数:
|
Query the average, maximum, and minimum values of all 100 million rows of data:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select count(*) from test.meters where location="San Francisco";
|
select avg(current), max(voltage), min(phase) from test.meters;
|
||||||
```
|
```
|
||||||
|
|
||||||
查询 groupId=10 的所有记录的平均值、最大值、最小值等:
|
Query the number of rows whose `location` tag is `San Francisco`:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
|
select count(*) from test.meters where location="San Francisco";
|
||||||
```
|
```
|
||||||
|
|
||||||
对表 d10 按 10s 进行平均值、最大值和最小值聚合统计:
|
Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
|
select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
|
||||||
```
|
```
|
||||||
|
|
||||||
## 其它
|
Query the average, maximum, and minimum values for table `d10` in 1 second intervals:
|
||||||
|
|
||||||
更多关于在 Docker 环境下使用 TDengine 的细节,请参考 [在 Docker 下使用 TDengine](../../reference/docker)
|
```sql
|
||||||
|
select first(ts), avg(current), max(voltage), min(phase) from test.d10 interval(1s);
|
||||||
|
```
|
||||||
|
In the query above, you are selecting the first timestamp (`ts`) in each interval. Another way of selecting this value is the `_wstart` pseudocolumn, which returns the start of the time window, as in the sketch below. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
|
||||||
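For reference, a sketch of the same query using `_wstart` instead of `first(ts)`:

```sql
-- Sketch only: return the start of each 1-second window via the _wstart pseudocolumn.
select _wstart, avg(current), max(voltage), min(phase) from test.d10 interval(1s);
```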
|
|
||||||
|
## Additional Information
|
||||||
|
|
||||||
|
For more information about deploying TDengine in a Docker environment, see [Using TDengine in Docker](../../reference/docker).
|
||||||
|
|
|
@ -1,38 +1,90 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 安装包
|
sidebar_label: Package
|
||||||
title: 使用安装包立即开始
|
title: Quick Install from Package
|
||||||
---
|
---
|
||||||
|
|
||||||
import Tabs from "@theme/Tabs";
|
import Tabs from "@theme/Tabs";
|
||||||
import TabItem from "@theme/TabItem";
|
import TabItem from "@theme/TabItem";
|
||||||
|
import PkgListV3 from "/components/PkgListV3";
|
||||||
|
|
||||||
:::info
|
For information about installing TDengine in Docker, see [Quick Install on Docker](../../get-started/docker). If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
|
||||||
如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
|
|
||||||
|
|
||||||
:::
|
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. Note that taosAdapter supports Linux only. In addition to connectors for multiple languages, TDengine also provides a [REST API](../../reference/rest-api) through [taosAdapter](../../reference/taosadapter).
|
||||||
|
|
||||||
TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 及衍生系统,rpm 支持 CentOS/RHEL/SUSE 及衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。
|
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download a lite package that includes only `taosd` and the C/C++ connector.
|
||||||
|
|
||||||
## 安装
|
The TDengine Community Edition is released as .deb and .rpm packages. The .deb package can be installed on Debian, Ubuntu, and derivative systems. The .rpm package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine via `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the .deb or .rpm package, download and install taosTools separately. TDengine can also be installed on 64-bit Windows servers.
|
||||||
|
|
||||||
|
## Installation
|
||||||
|
|
||||||
<Tabs>
|
<Tabs>
|
||||||
<TabItem value="apt-get" label="apt-get">
|
<TabItem label=".deb" value="debinst">
|
||||||
可以使用 apt-get 工具从官方仓库安装。
|
|
||||||
|
|
||||||
**安装包仓库**
|
1. Download the .deb installation package.
|
||||||
|
<PkgListV3 type={6}/>
|
||||||
|
2. In the directory where the package is located, use `dpkg` to install the package:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Enter the name of the package that you downloaded.
|
||||||
|
sudo dpkg -i TDengine-server-<version>-Linux-x64.deb
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label=".rpm" value="rpminst">
|
||||||
|
|
||||||
|
1. Download the .rpm installation package.
|
||||||
|
<PkgListV3 type={5}/>
|
||||||
|
2. In the directory where the package is located, use `rpm` to install the package:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Enter the name of the package that you downloaded.
|
||||||
|
sudo rpm -ivh TDengine-server-<version>-Linux-x64.rpm
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label=".tar.gz" value="tarinst">
|
||||||
|
|
||||||
|
1. Download the .tar.gz installation package.
|
||||||
|
<PkgListV3 type={0}/>
|
||||||
|
2. In the directory where the package is located, use `tar` to decompress the package:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Enter the name of the package that you downloaded.
|
||||||
|
tar -zxvf TDengine-server-<version>-Linux-x64.tar.gz
|
||||||
|
```
|
||||||
|
|
||||||
|
In the directory to which the package was decompressed, run `install.sh`:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sudo ./install.sh
|
||||||
|
```
|
||||||
|
|
||||||
|
:::info
|
||||||
|
You will be prompted to enter some configuration information while install.sh is running. To disable interactive mode, run `./install.sh -e no`. Run `./install.sh -h` to see detailed explanations of all parameters.
|
||||||
|
:::
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="apt-get" label="apt-get">
|
||||||
|
You can use `apt-get` to install TDengine from the official package repository.
|
||||||
|
|
||||||
|
**Configure the package repository**
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
|
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
|
||||||
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
|
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
|
||||||
```
|
```
|
||||||
|
|
||||||
如果安装 Beta 版需要安装包仓库
|
You can install beta versions by configuring the following repository:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
|
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
|
||||||
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
|
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
|
||||||
```
|
```
|
||||||
|
|
||||||
**使用 apt-get 命令安装**
|
**Install TDengine with `apt-get`**
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
sudo apt-get update
|
sudo apt-get update
|
||||||
|
@ -41,120 +93,116 @@ sudo apt-get install tdengine
|
||||||
```
|
```
|
||||||
|
|
||||||
:::tip
|
:::tip
|
||||||
apt-get 方式只适用于 Debian 或 Ubuntu 系统
|
This installation method is supported only for Debian and Ubuntu.
|
||||||
::::
|
:::
|
||||||
</TabItem>
|
</TabItem>
|
||||||
<TabItem label="Deb 安装" value="debinst">
|
<TabItem label="Windows" value="windows">
|
||||||
|
|
||||||
1、从官网下载获得 deb 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.deb;
|
Note: On the Windows platform, TDengine supports only Windows Server 2016/2019 and Windows 10/11.
|
||||||
2、进入到 TDengine-server-3.0.0.0-Linux-x64.deb 安装包所在目录,执行如下的安装命令:
|
|
||||||
|
|
||||||
```bash
|
1. Download the Windows installation package.
|
||||||
sudo dpkg -i TDengine-server-3.0.0.0-Linux-x64.deb
|
<PkgListV3 type={3}/>
|
||||||
```
|
2. Run the downloaded package to install TDengine.
|
||||||
|
|
||||||
</TabItem>
|
|
||||||
|
|
||||||
<TabItem label="RPM 安装" value="rpminst">
|
|
||||||
|
|
||||||
1、从官网下载获得 rpm 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.rpm;
|
|
||||||
2、进入到 TDengine-server-3.0.0.0-Linux-x64.rpm 安装包所在目录,执行如下的安装命令:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
sudo rpm -ivh TDengine-server-3.0.0.0-Linux-x64.rpm
|
|
||||||
```
|
|
||||||
|
|
||||||
</TabItem>
|
|
||||||
|
|
||||||
<TabItem label="tar.gz 安装" value="tarinst">
|
|
||||||
|
|
||||||
1、从官网下载获得 tar.gz 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.tar.gz;
|
|
||||||
2、进入到 TDengine-server-3.0.0.0-Linux-x64.tar.gz 安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
tar -zxvf TDengine-server-3.0.0.0-Linux-x64.tar.gz
|
|
||||||
```
|
|
||||||
|
|
||||||
解压后进入相应路径,执行
|
|
||||||
|
|
||||||
```bash
|
|
||||||
sudo ./install.sh
|
|
||||||
```
|
|
||||||
|
|
||||||
:::info
|
|
||||||
install.sh 安装脚本在执行过程中,会通过命令行交互界面询问一些配置信息。如果希望采取无交互安装方式,那么可以用 -e no 参数来执行 install.sh 脚本。运行 `./install.sh -h` 指令可以查看所有参数的详细说明信息。
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
|
:::info
|
||||||
|
For information about TDengine releases, see [Release History](../../releases).
|
||||||
|
:::
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
当安装第一个节点时,出现 Enter FQDN:提示的时候,不需要输入任何内容。只有当安装第二个或以后更多的节点时,才需要输入已有集群中任何一个可用节点的 FQDN,支持该新节点加入集群。当然也可以不输入,而是在新节点启动前,配置到新节点的配置文件中。
|
On the first node in your TDengine cluster, leave the `Enter FQDN:` prompt blank and press **Enter**. On subsequent nodes, you can enter the endpoint of the first dnode in the cluster. You can also configure this setting after you have finished installing TDengine.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## 启动
|
## Quick Launch
|
||||||
|
|
||||||
安装后,请使用 `systemctl` 命令来启动 TDengine 的服务进程。
|
<Tabs>
|
||||||
|
<TabItem label="Linux" value="linux">
|
||||||
|
|
||||||
|
After the installation is complete, run the following command to start the TDengine service:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
systemctl start taosd
|
systemctl start taosd
|
||||||
```
|
```
|
||||||
|
|
||||||
检查服务是否正常工作:
|
Run the following command to confirm that TDengine is running normally:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
systemctl status taosd
|
systemctl status taosd
|
||||||
```
|
```
|
||||||
|
|
||||||
如果服务进程处于活动状态,则 status 指令会显示如下的相关信息:
|
Output similar to the following indicates that TDengine is running normally:
|
||||||
|
|
||||||
```
|
```
|
||||||
Active: active (running)
|
Active: active (running)
|
||||||
```
|
```
|
||||||
|
|
||||||
如果后台服务进程处于停止状态,则 status 指令会显示如下的相关信息:
|
Output similar to the following indicates that TDengine has not started successfully:
|
||||||
|
|
||||||
```
|
```
|
||||||
Active: inactive (dead)
|
Active: inactive (dead)
|
||||||
```
|
```
|
||||||
|
|
||||||
如果 TDengine 服务正常工作,那么您可以通过 TDengine 的命令行程序 `taos` 来访问并体验 TDengine。
|
After confirming that TDengine is running, run the `taos` command to access the TDengine CLI.
|
||||||
|
|
||||||
systemctl 命令汇总:
|
The following `systemctl` commands can help you manage TDengine:
|
||||||
|
|
||||||
- 启动服务进程:`systemctl start taosd`
|
- Start TDengine Server: `systemctl start taosd`
|
||||||
|
|
||||||
- 停止服务进程:`systemctl stop taosd`
|
- Stop TDengine Server: `systemctl stop taosd`
|
||||||
|
|
||||||
- 重启服务进程:`systemctl restart taosd`
|
- Restart TDengine Server: `systemctl restart taosd`
|
||||||
|
|
||||||
- 查看服务状态:`systemctl status taosd`
|
- Check TDengine Server status: `systemctl status taosd`
|
||||||
|
|
||||||
:::info
|
:::info
|
||||||
|
|
||||||
- systemctl 命令需要 _root_ 权限来运行,如果您非 _root_ 用户,请在命令前添加 sudo 。
|
- The `systemctl` command requires _root_ privileges. If you are not logged in as the `root` user, use the `sudo` command.
|
||||||
- `systemctl stop taosd` 指令在执行后并不会马上停止 TDengine 服务,而是会等待系统中必要的落盘工作正常完成。在数据量很大的情况下,这可能会消耗较长时间。
|
- The `systemctl stop taosd` command does not instantly stop TDengine Server. The server is stopped only after all data in memory is flushed to disk. The time required depends on the cache size.
|
||||||
- 如果系统中不支持 `systemd`,也可以用手动运行 `/usr/local/taos/bin/taosd` 方式启动 TDengine 服务。
|
- If your system does not include `systemd`, you can run `/usr/local/taos/bin/taosd` to start TDengine manually.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## TDengine 命令行 (CLI)
|
</TabItem>
|
||||||
|
|
||||||
为便于检查 TDengine 的状态,执行数据库 (Database) 的各种即席(Ad Hoc)查询,TDengine 提供一命令行应用程序(以下简称为 TDengine CLI) taos。要进入 TDengine 命令行,您只要在安装有 TDengine 的 Linux 终端执行 `taos` 即可。
|
<TabItem label="Windows" value="windows">
|
||||||
|
|
||||||
|
After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengine Server.
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
</Tabs>
|
||||||
|
|
||||||
|
## Test Data Insert Performance
|
||||||
|
|
||||||
|
After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
taosBenchmark
|
||||||
|
```
|
||||||
|
|
||||||
|
This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
|
||||||
|
|
||||||
|
The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in less than a minute.
|
||||||
|
|
||||||
|
You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
|
||||||
|
|
||||||
|
## Command Line Interface
|
||||||
|
|
||||||
|
You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, run the following command:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
taos
|
taos
|
||||||
```
|
```
|
||||||
|
|
||||||
如果连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印错误消息出来(请参考 [FAQ](/train-faq/faq) 来解决终端连接服务端失败的问题)。 TDengine CLI 的提示符号如下:
|
The TDengine CLI displays a welcome message and version information to indicate that its connection to the TDengine service was successful. If an error message is displayed, see the [FAQ](/train-faq/faq) for troubleshooting information. At the following prompt, you can execute SQL commands.
|
||||||
|
|
||||||
```cmd
|
```cmd
|
||||||
taos>
|
taos>
|
||||||
```
|
```
|
||||||
|
|
||||||
在 TDengine CLI 中,用户可以通过 SQL 命令来创建/删除数据库、表等,并进行数据库(database)插入查询操作。在终端中运行的 SQL 语句需要以分号结束来运行。示例:
|
You can create and delete databases and tables and run all types of queries in the TDengine CLI. Each SQL command must end with a semicolon (;). For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
create database demo;
|
create database demo;
|
||||||
|
@ -170,52 +218,39 @@ select * from t;
|
||||||
Query OK, 2 row(s) in set (0.003128s)
|
Query OK, 2 row(s) in set (0.003128s)
|
||||||
```
|
```
|
||||||
|
|
||||||
除执行 SQL 语句外,系统管理员还可以从 TDengine CLI 进行检查系统运行状态、添加删除用户账号等操作。TDengine CLI 连同应用驱动也可以独立安装在 Linux 或 Windows 机器上运行,更多细节请参考 [这里](../../reference/taos-shell/)
|
You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either Linux or Windows machines. For more information, see [TDengine CLI](../../reference/taos-shell/).
|
||||||
|
|
||||||
## 使用 taosBenchmark 体验写入速度
|
## Test data query performance
|
||||||
|
|
||||||
启动 TDengine 的服务,在 Linux 终端执行 `taosBenchmark` (曾命名为 `taosdemo`):
|
After using taosBenchmark to create your test deployment, you can run queries in the TDengine CLI to test its performance:
|
||||||
|
|
||||||
```bash
|
From the TDengine CLI, query the number of rows in the `meters` supertable:
|
||||||
taosBenchmark
|
|
||||||
```
|
|
||||||
|
|
||||||
该命令将在数据库 test 下面自动创建一张超级表 meters,该超级表下有 1 万张表,表名为 "d0" 到 "d9999",每张表有 1 万条记录,每条记录有 (ts, current, voltage, phase) 四个字段,时间戳从 "2017-07-14 10:40:00 000" 到 "2017-07-14 10:40:09 999",每张表带有标签 location 和 groupId,groupId 被设置为 1 到 10, location 被设置为 "California.SanFrancisco" 或者 "California.LosAngeles"。
|
|
||||||
|
|
||||||
这条命令很快完成 1 亿条记录的插入。具体时间取决于硬件性能,即使在一台普通的 PC 服务器往往也仅需十几秒。
|
|
||||||
|
|
||||||
taosBenchmark 命令本身带有很多选项,配置表的数目、记录条数等等,您可以设置不同参数进行体验,请执行 `taosBenchmark --help` 详细列出。taosBenchmark 详细使用方法请参照 [如何使用 taosBenchmark 对 TDengine 进行性能测试](https://www.taosdata.com/2021/10/09/3111.html)。
|
|
||||||
|
|
||||||
## 使用 TDengine CLI 体验查询速度
|
|
||||||
|
|
||||||
使用上述 taosBenchmark 插入数据后,可以在 TDengine CLI 输入查询命令,体验查询速度。
|
|
||||||
|
|
||||||
查询超级表下记录总条数:
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select count(*) from test.meters;
|
select count(*) from test.meters;
|
||||||
```
|
```
|
||||||
|
|
||||||
查询 1 亿条记录的平均值、最大值、最小值等:
|
Query the average, maximum, and minimum values of all 100 million rows of data:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select avg(current), max(voltage), min(phase) from test.meters;
|
select avg(current), max(voltage), min(phase) from test.meters;
|
||||||
```
|
```
|
||||||
|
|
||||||
查询 location="California.SanFrancisco" 的记录总条数:
|
Query the number of rows whose `location` tag is `San Francisco`:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select count(*) from test.meters where location="California.SanFrancisco";
|
select count(*) from test.meters where location="San Francisco";
|
||||||
```
|
```
|
||||||
|
|
||||||
查询 groupId=10 的所有记录的平均值、最大值、最小值等:
|
Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
|
select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
|
||||||
```
|
```
|
||||||
|
|
||||||
对表 d10 按 10s 进行平均值、最大值和最小值聚合统计:
|
Query the average, maximum, and minimum values for table `d10` in 1 second intervals:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
|
select first(ts), avg(current), max(voltage), min(phase) from test.d10 interval(1s);
|
||||||
```
|
```
|
||||||
|
In the query above, you are selecting the first timestamp (ts) in each interval. Another way to select this value is the `_wstart` pseudo column, which returns the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
|
||||||
|
|
|
@ -1,19 +1,19 @@
|
||||||
可以使用 apt-get 工具从官方仓库安装。
|
You can use `apt-get` to install TDengine from the official package repository.
|
||||||
|
|
||||||
**安装包仓库**
|
**Configure the package repository**
|
||||||
|
|
||||||
```
|
```
|
||||||
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
|
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
|
||||||
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
|
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" | sudo tee /etc/apt/sources.list.d/tdengine-stable.list
|
||||||
```
|
```
|
||||||
|
|
||||||
如果安装 Beta 版需要安装包仓库
|
You can install beta versions by configuring the following package repository:
|
||||||
|
|
||||||
```
|
```
|
||||||
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
|
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
|
||||||
```
|
```
|
||||||
|
|
||||||
**使用 apt-get 命令安装**
|
**Install TDengine with `apt-get`**
|
||||||
|
|
||||||
```
|
```
|
||||||
sudo apt-get update
|
sudo apt-get update
|
||||||
|
@ -22,5 +22,5 @@ sudo apt-get install tdengine
|
||||||
```
|
```
|
||||||
|
|
||||||
:::tip
|
:::tip
|
||||||
apt-get 方式只适用于 Debian 或 Ubuntu 系统
|
This installation method is supported only for Debian and Ubuntu.
|
||||||
::::
|
:::
|
||||||
|
|
|
@ -1 +1 @@
|
||||||
label: 立即开始
|
label: Get Started
|
||||||
|
|
|
@ -1,17 +1,17 @@
|
||||||
import PkgList from "/components/PkgList";
|
import PkgList from "/components/PkgList";
|
||||||
|
|
||||||
TDengine 的安装非常简单,从下载到安装成功仅仅只要几秒钟。
|
TDengine is easy to download and install.
|
||||||
|
|
||||||
为方便使用,从 2.4.0.10 开始,标准的服务端安装包包含了 taos、taosd、taosAdapter、taosdump、taosBenchmark、TDinsight 安装脚本和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 lite 版本的安装包。
|
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download a lite package that includes only `taosd` and the C/C++ connector.
|
||||||
|
|
||||||
在安装包格式上,我们提供 tar.gz, rpm 和 deb 格式,为企业客户提供 tar.gz 格式安装包,以方便在特定操作系统上使用。需要注意的是,rpm 和 deb 包不含 taosdump、taosBenchmark 和 TDinsight 安装脚本,这些工具需要通过安装 taosTool 包获得。
|
You can download the TDengine installation package in .rpm, .deb, or .tar.gz format. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the .deb or .rpm package, download and install taosTools separately.
|
||||||
|
|
||||||
发布版本包括稳定版和 Beta 版,Beta 版含有更多新功能。正式上线或测试建议安装稳定版。您可以根据需要选择下载:
|
Between official releases, beta versions may be released that contain new features. Do not use beta versions for production or testing environments. Select the installation package appropriate for your system.
|
||||||
|
|
||||||
<PkgList type={0}/>
|
<PkgList type={0}/>
|
||||||
|
|
||||||
具体的安装方法,请参见[安装包的安装和卸载](/operation/pkg-install)。
|
For information about installing TDengine, see [Install and Uninstall](/operation/pkg-install).
|
||||||
|
|
||||||
下载其他组件、最新 Beta 版及之前版本的安装包,请点击[这里](https://www.taosdata.com/all-downloads)
|
For information about TDengine releases, see [All Downloads](https://tdengine.com/all-downloads)
|
||||||
|
|
||||||
查看 Release Notes, 请点击[这里](https://github.com/taosdata/TDengine/releases)
|
and [Release Notes](https://github.com/taosdata/TDengine/releases).
|
||||||
|
|
|
@ -1,11 +1,11 @@
|
||||||
---
|
---
|
||||||
title: 立即开始
|
title: Get Started
|
||||||
description: '快速设置 TDengine 环境并体验其高效写入和查询'
|
description: This article describes how to install TDengine and test its performance.
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 完整的软件包包括服务端(taosd)、用于与第三方系统对接并提供 RESTful 接口的 taosAdapter、应用驱动(taosc)、命令行程序 (CLI,taos) 和一些工具软件。TDengine 除了提供多种语言的连接器之外,还通过 [taosAdapter](/reference/taosadapter) 提供 [RESTful 接口](/reference/rest-api)。
|
The full package of TDengine includes the TDengine Server (`taosd`), TDengine Client (`taosc`), taosAdapter for connecting with third-party systems and providing a RESTful interface, a command-line interface, and some tools. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter).
|
||||||
|
|
||||||
本章主要介绍如何利用 Docker 或者安装包快速设置 TDengine 环境并体验其高效写入和查询。
|
You can install and run TDengine on Linux and Windows machines as well as Docker containers.
|
||||||
|
|
||||||
```mdx-code-block
|
```mdx-code-block
|
||||||
import DocCardList from '@theme/DocCardList';
|
import DocCardList from '@theme/DocCardList';
|
||||||
|
|
|
@ -0,0 +1,3 @@
|
||||||
|
```php title="native connection"
|
||||||
|
{{#include docs/examples/php/connect.php}}
|
||||||
|
```
|
|
@ -1,38 +1,39 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Connect
|
|
||||||
title: Connect
|
title: Connect
|
||||||
description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
|
description: "This document explains how to establish connections to TDengine and how to install and use TDengine connectors."
|
||||||
---
|
---
|
||||||
|
|
||||||
import Tabs from "@theme/Tabs";
|
import Tabs from "@theme/Tabs";
|
||||||
import TabItem from "@theme/TabItem";
|
import TabItem from "@theme/TabItem";
|
||||||
import ConnJava from "./\_connect_java.mdx";
|
import ConnJava from "./_connect_java.mdx";
|
||||||
import ConnGo from "./\_connect_go.mdx";
|
import ConnGo from "./_connect_go.mdx";
|
||||||
import ConnRust from "./\_connect_rust.mdx";
|
import ConnRust from "./_connect_rust.mdx";
|
||||||
import ConnNode from "./\_connect_node.mdx";
|
import ConnNode from "./_connect_node.mdx";
|
||||||
import ConnPythonNative from "./\_connect_python.mdx";
|
import ConnPythonNative from "./_connect_python.mdx";
|
||||||
import ConnCSNative from "./\_connect_cs.mdx";
|
import ConnCSNative from "./_connect_cs.mdx";
|
||||||
import ConnC from "./\_connect_c.mdx";
|
import ConnC from "./_connect_c.mdx";
|
||||||
import ConnR from "./\_connect_r.mdx";
|
import ConnR from "./_connect_r.mdx";
|
||||||
import InstallOnWindows from "../../14-reference/03-connector/\_linux_install.mdx";
|
import ConnPHP from "./_connect_php.mdx";
|
||||||
import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.mdx";
|
import InstallOnWindows from "../../14-reference/03-connector/_linux_install.mdx";
|
||||||
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
|
import InstallOnLinux from "../../14-reference/03-connector/_windows_install.mdx";
|
||||||
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
|
import VerifyLinux from "../../14-reference/03-connector/_verify_linux.mdx";
|
||||||
|
import VerifyWindows from "../../14-reference/03-connector/_verify_windows.mdx";
|
||||||
|
|
||||||
Any application programs running on any kind of platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. TDengine community also provides connectors in LUA and PHP languages. For details about the connectors, please refer to [Connectors](/reference/connector/).
|
Any application running on any platform can access TDengine through the REST API provided by TDengine. For information, see [REST API](/reference/rest-api/). Applications can also use the connectors for various programming languages, including C/C++, Java, Python, Go, Node.js, C#, and Rust, to access TDengine. These connectors support connecting to TDengine clusters using the native interface (taosc). Some connectors also support connecting over a REST interface. Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
|
||||||
|
|
||||||
## Establish Connection
|
## Establish Connection
|
||||||
|
|
||||||
There are two ways for a connector to establish connections to TDengine:
|
There are two ways for a connector to establish connections to TDengine:
|
||||||
|
|
||||||
1. Connection through the REST API provided by the taosAdapter component, this way is called "REST connection" hereinafter.
|
1. REST connection through the REST API provided by the taosAdapter component.
|
||||||
2. Connection through the TDengine client driver (taosc), this way is called "Native connection" hereinafter.
|
2. Native connection through the TDengine client driver (taosc).
|
||||||
|
|
||||||
|
For REST and native connections, connectors provide similar APIs for performing operations and running SQL statements on your databases. The main difference is the method of establishing the connection, which is not visible to users.
|
||||||
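The following Java sketch (not part of the official examples in this section) illustrates that the two connection types differ only in the JDBC URL; it assumes the `taos-jdbcdriver` dependency is on the classpath, that the TDengine client driver is installed for the native URL, and that TDengine is reachable on the default ports.

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class ConnTypeDemo {
    public static void main(String[] args) throws Exception {
        // Native connection: goes through the client driver (taosc) on port 6030.
        String nativeUrl = "jdbc:TAOS://localhost:6030/?user=root&password=taosdata";
        // REST connection: goes through taosAdapter on port 6041.
        String restUrl = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata";

        // Apart from the URL, the application code is identical for both types.
        try (Connection conn = DriverManager.getConnection(nativeUrl)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```

Swapping `nativeUrl` for `restUrl` in `getConnection` is the only change needed to switch connection types in this sketch.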
|
|
||||||
Key differences:
|
Key differences:
|
||||||
|
|
||||||
1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
|
|
||||||
2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
|
|
||||||
3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade.
|
3. The REST connection is more accessible with cross-platform support; however, it results in a 30% performance downgrade.
|
||||||
|
1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
|
||||||
|
|
||||||
## Install Client Driver taosc
|
## Install Client Driver taosc
|
||||||
|
|
||||||
|
@ -222,7 +223,7 @@ phpize && ./configure && make -j && make install
|
||||||
**Specify TDengine Location:**
|
**Specify TDengine Location:**
|
||||||
|
|
||||||
```shell
|
```shell
|
||||||
phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
|
phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
|
||||||
```
|
```
|
||||||
|
|
||||||
> `--with-tdengine-dir=` is followed by the TDengine installation location.
|
> `--with-tdengine-dir=` is followed by the TDengine installation location.
|
||||||
|
@ -243,7 +244,7 @@ Option Two: Specify the extension on CLI `php -d extension=tdengine test.php`
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
## Establish Connection
|
## Establish a connection
|
||||||
|
|
||||||
Prior to establishing connection, please make sure TDengine is already running and accessible. The following sample code assumes TDengine is running on the same host as the client program, with FQDN configured to "localhost" and serverPort configured to "6030".
|
Prior to establishing a connection, please make sure TDengine is already running and accessible. The following sample code assumes TDengine is running on the same host as the client program, with FQDN configured to "localhost" and serverPort configured to "6030".
|
||||||
|
|
||||||
|
@ -272,9 +273,12 @@ Prior to establishing connection, please make sure TDengine is already running a
|
||||||
<TabItem label="C" value="c">
|
<TabItem label="C" value="c">
|
||||||
<ConnC />
|
<ConnC />
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
<TabItem label="PHP" value="php">
|
||||||
|
<ConnPHP />
|
||||||
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
:::tip
|
:::tip
|
||||||
If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.taosdata.com/train-faq/faq).
|
If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.tdengine.com/train-faq/faq).
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -2,31 +2,36 @@
|
||||||
title: Data Model
|
title: Data Model
|
||||||
---
|
---
|
||||||
|
|
||||||
The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
|
The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the [STable](/concept/#super-table-stable) (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.
|
||||||
|
|
||||||
|
Note: before you read this chapter, please make sure you have already read through [Key Concepts](/concept/), since TDengine introduces new concepts like "one table for one [data collection point](/concept/#data-collection-point)" and "[super table](/concept/#super-table-stable)".
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
## Create Database
|
## Create Database
|
||||||
|
|
||||||
The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For e.g. days to keep, number of replicas, data block size, whether data updates are allowed and other configurable parameters would be determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, there are a lot of parameters that can be configured such as, the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in single data file and so on. Below is an example of the SQL statement to create a database.
|
The characteristics of time-series data from different data collection points may be different. Characteristics include collection frequency, retention policy and others which determine how you create and configure the database. For example, the days to keep data, the number of replicas, the data block size, whether data updates are allowed and other configurable parameters are determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, there are many parameters that can be configured, such as the days to keep data, the number of replicas, the size of the cache, time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file, and so on. An example is shown as follows:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 VGROUPS 100 WAL 1;
|
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 VGROUPS 100 WAL_LEVEL 1;
|
||||||
```
|
```
|
||||||
|
|
||||||
In the above SQL statement:
|
In the above SQL statement:
|
||||||
- a database named "power" will be created
|
- a database named "power" is created
|
||||||
- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
|
- the data in it is retained for 365 days, which means that data older than 365 days will be deleted automatically
|
||||||
- a new data file will be created every 10 days
|
- a new data file will be created every 10 days
|
||||||
- the size of memory cache for writing is 16 MB
|
- the size of the write cache pool on each vnode is 16 MB
|
||||||
- data will be firstly written to WAL without FSYNC
|
- the number of vgroups is 100
|
||||||
|
- WAL is enabled but fsync is disabled. For more details please refer to [Database](/taos-sql/database).
|
||||||
|
|
||||||
For more details please refer to [Database](/taos-sql/database).
|
After creating a database, the current database in use can be switched using SQL command `USE`. For example the SQL statement below switches the current database to `power`.
|
||||||
|
|
||||||
After creating a database, the current database in use can be switched using SQL command `USE`. For example the SQL statement below switches the current database to `power`. Without the current database specified, table name must be preceded with the corresponding database name.
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
USE power;
|
USE power;
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Without the current database specified, the table name must be preceded by the corresponding database name.
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
|
- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
|
||||||
|
@ -39,14 +44,9 @@ USE power;
|
||||||
In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/concept/#model_table1), the SQL statement below can be used to create the super table.
|
In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/concept/#model_table1), the SQL statement below can be used to create the super table.
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
|
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
|
||||||
```
|
```
|
||||||
|
|
||||||
:::note
|
|
||||||
If you are using versions prior to 2.0.15, the `STable` keyword needs to be replaced with `TABLE`.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
|
Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID, manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
|
||||||
|
|
||||||
For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for these kinds of devices.
|
For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for these kinds of devices.
|
||||||
|
@ -63,13 +63,8 @@ CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
|
||||||
|
|
||||||
In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
|
In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
|
||||||
|
|
||||||
In the TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
|
|
||||||
|
|
||||||
:::tip
|
|
||||||
It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as tag value.
|
It's suggested to use the globally unique ID of a data collection point as the table name. For example the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as tag value.
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
## Create Table Automatically
|
## Create Table Automatically
|
||||||
|
|
||||||
In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
|
In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
|
||||||
|
@ -84,8 +79,6 @@ For more details please refer to [Create Table Automatically](/taos-sql/insert#a
|
||||||
|
|
||||||
## Single Column vs Multiple Column
|
## Single Column vs Multiple Column
|
||||||
|
|
||||||
A multiple columns data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns.
|
A multiple columns data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns. However, there is another kind of design, i.e. single column data model in which a table is created for each metric. This means that a STable is required for each kind of metric. For example in a single column model, 3 STables would be required for current, voltage and phase.
|
||||||
|
|
||||||
However, there is another kind of design, i.e. single column data model in which a table is created for each metric. This means that a STable is required for each kind of metric. For example in a single column model, 3 STables would be required for current, voltage and phase.
|
|
||||||
|
|
||||||
It's recommended to use a multiple column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently and so the corresponding STable schema needs to be changed frequently too. In such cases, it's more convenient to use single column data model.
|
It's recommended to use a multiple column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently and so the corresponding STable schema needs to be changed frequently too. In such cases, it's more convenient to use single column data model.
|
||||||
|
|
|
@ -1,5 +1,4 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Insert Using SQL
|
|
||||||
title: Insert Using SQL
|
title: Insert Using SQL
|
||||||
---
|
---
|
||||||
|
|
||||||
|
@ -19,13 +18,14 @@ import CsSQL from "./_cs_sql.mdx";
|
||||||
import CsStmt from "./_cs_stmt.mdx";
|
import CsStmt from "./_cs_stmt.mdx";
|
||||||
import CSQL from "./_c_sql.mdx";
|
import CSQL from "./_c_sql.mdx";
|
||||||
import CStmt from "./_c_stmt.mdx";
|
import CStmt from "./_c_stmt.mdx";
|
||||||
|
import PhpSQL from "./_php_sql.mdx";
|
||||||
|
import PhpStmt from "./_php_stmt.mdx";
|
||||||
|
|
||||||
## Introduction
|
## Introduction
|
||||||
|
|
||||||
Application programs can execute `INSERT` statement through connectors to insert rows. The TDengine CLI can also be used to manually insert data.
|
Application programs can execute `INSERT` statement through connectors to insert rows. The TDengine CLI can also be used to manually insert data.
|
||||||
|
|
||||||
### Insert Single Row
|
### Insert Single Row
|
||||||
|
|
||||||
The below SQL statement is used to insert one row into table "d1001".
|
The below SQL statement is used to insert one row into table "d1001".
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
|
@ -42,7 +42,7 @@ INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3,
|
||||||
|
|
||||||
### Insert into Multiple Tables
|
### Insert into Multiple Tables
|
||||||
|
|
||||||
Data can be inserted into multiple tables in single SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
|
Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
|
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
|
||||||
|
@ -52,19 +52,19 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
|
||||||
|
|
||||||
:::info
|
:::info
|
||||||
|
|
||||||
- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 48 KB bytes and each SQL statement can't exceed 1 MB.
|
- Inserting in batches can improve performance. The higher the batch size, the better the performance. Please note that a single row can't exceed 48 KB and each SQL statement can't exceed 1 MB.
|
||||||
- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number. The proper number of threads may be impacted by the system resources on the server side, the system resources on the client side, the table schemas, etc.
|
- Inserting with multiple threads can also improve performance. However, at a certain point, increasing the number of threads no longer offers any benefit and can even decrease performance due to the overhead involved in frequent thread switching. The optimal number of threads for a system depends on the processing capabilities and configuration of the server, the configuration of the database, the data schema, and the batch size for writing data. In general, more powerful clients and servers can support higher numbers of concurrently writing threads. Given a sufficiently powerful server, a higher number of vgroups for a database also increases the number of concurrent writes. Finally, a simpler data schema enables more concurrent writes as well.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
:::warning
|
:::warning
|
||||||
|
|
||||||
- If the timestamp for the row to be inserted already exists in the table, the old data will be overritten by the new values for the columns for which new values are provided, columns for which no new values are provided are not impacted.
|
- If the timestamp of a new record already exists in a table, columns with new data for that timestamp replace the old data, while columns without new data are not affected.
|
||||||
- The timestamp to be inserted must be newer than the timestamp of subtracting current time by the parameter `KEEP`. If `KEEP` is set to 3650 days, then the data older than 3650 days ago can't be inserted. The timestamp to be inserted can't be newer than the timestamp of current time plus parameter `DURATION`. If `DAYS` is set to 2, the data newer than 2 days later can't be inserted.
|
- The timestamp to be inserted must be newer than the current time minus the value of the `KEEP` parameter. If `KEEP` is set to 3650 days, data older than 3650 days cannot be inserted. The timestamp also cannot be later than the current time plus the value of the `DURATION` parameter. If `DURATION` is set to 2, data more than 2 days in the future cannot be inserted.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## Examples
|
## Sample program
|
||||||
|
|
||||||
### Insert Using SQL
|
### Insert Using SQL
|
||||||
|
|
||||||
|
@ -90,6 +90,9 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
|
||||||
<TabItem label="C" value="c">
|
<TabItem label="C" value="c">
|
||||||
<CSQL />
|
<CSQL />
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
<TabItem label="PHP" value="php">
|
||||||
|
<PhpSQL />
|
||||||
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
@ -101,7 +104,7 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
|
||||||
|
|
||||||
### Insert with Parameter Binding
|
### Insert with Parameter Binding
|
||||||
|
|
||||||
TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. Parameter binding support for inserting data has improved significantly to improve the insert performance by avoiding the cost of parsing SQL statements.
|
TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. This avoids the resource consumption of SQL syntax parsing when writing data through the parameter binding interface, thus significantly improving write performance in most cases.
|
||||||
|
|
||||||
Parameter binding is available only with native connection.
|
Parameter binding is available only with native connection.
|
||||||
|
|
||||||
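Before the language-specific examples below, here is a minimal Java sketch using the standard JDBC `PreparedStatement` interface of the TDengine JDBC driver. The class name and the generated values are illustrative assumptions, and the sketch assumes the `power` database and the subtable `d1001` from the previous chapters already exist.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class StmtInsertDemo {
    public static void main(String[] args) throws Exception {
        // Native connection; parameter binding is not available over REST.
        String url = "jdbc:TAOS://localhost:6030/power?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO d1001 VALUES (?, ?, ?, ?)")) { // only ? placeholders are allowed
            long now = System.currentTimeMillis();
            for (int i = 0; i < 10; i++) {
                ps.setTimestamp(1, new Timestamp(now + i)); // ts
                ps.setFloat(2, 10.0f + i);                  // current
                ps.setInt(3, 219);                          // voltage
                ps.setFloat(4, 0.31f);                      // phase
                ps.executeUpdate();                         // one bound row per execution in this sketch
            }
        }
    }
}
```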
|
@ -127,4 +130,8 @@ Parameter binding is available only with native connection.
|
||||||
<TabItem label="C" value="c">
|
<TabItem label="C" value="c">
|
||||||
<CStmt />
|
<CStmt />
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
<TabItem label="PHP" value="php">
|
||||||
|
<PhpStmt />
|
||||||
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
|
|
|
@ -21,15 +21,15 @@ In the InfluxDB Line protocol format, a single line of text is used to represent
|
||||||
measurement,tag_set field_set timestamp
|
measurement,tag_set field_set timestamp
|
||||||
```
|
```
|
||||||
|
|
||||||
- `measurement` will be used as the name of the STable
|
- `measurement` will be used as the name of the STable. Enter a comma (,) between `measurement` and `tag_set`.
|
||||||
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`
|
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`. Enter a space between `tag_set` and `field_set`.
|
||||||
- `field_set`will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`
|
- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`. Enter a space between `field_set` and `timestamp`.
|
||||||
- `timestamp` is the primary key timestamp corresponding to this row of data
|
- `timestamp` is the primary key timestamp corresponding to this row of data
|
||||||
|
|
||||||
For example:
|
For example:
|
||||||
|
|
||||||
```
|
```
|
||||||
meters,location=California.LoSangeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500
|
meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0.29 1648432611249500
|
||||||
```
|
```
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
@ -42,7 +42,6 @@ meters,location=California.LoSangeles,groupid=2 current=13.4,voltage=223,phase=0
|
||||||
|
|
||||||
For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol)
|
For more details please refer to [InfluxDB Line Protocol](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and [TDengine Schemaless](/reference/schemaless/#Schemaless-Line-Protocol)
|
||||||
|
|
||||||
|
|
||||||
## Examples
|
## Examples
|
||||||
|
|
||||||
<Tabs defaultValue="java" groupId="lang">
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
|
|
@ -17,19 +17,19 @@ import CTelnet from "./_c_opts_telnet.mdx";
|
||||||
|
|
||||||
A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single column data model, so each line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
|
A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single column data model, so each line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
|
||||||
|
|
||||||
```
|
```txt
|
||||||
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
|
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
|
||||||
```
|
```
|
||||||
|
|
||||||
- `metric` will be used as the STable name.
|
- `metric` will be used as the STable name.
|
||||||
- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
|
- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
|
||||||
- `value` is a metric which must be a numeric value, the corresponding column name is "_value".
|
- `value` is a metric which must be a numeric value; the corresponding column name is `_value`.
|
||||||
- The last part is the tag set separated by spaces, all tags will be converted to nchar type automatically.
|
- The last part is the tag set separated by spaces, all tags will be converted to nchar type automatically.
|
||||||
|
|
||||||
For example:
|
For example:
|
||||||
|
|
||||||
```txt
|
```txt
|
||||||
meters.current 1648432611250 11.3 location=California.LoSangeles groupid=3
|
meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
|
||||||
```
|
```
|
||||||
|
|
||||||
Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
|
Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_telnet/put.html) for more details.
|
||||||
|
@ -63,7 +63,7 @@ Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_te
|
||||||
taos> use test;
|
taos> use test;
|
||||||
Database changed.
|
Database changed.
|
||||||
|
|
||||||
taos> show STables;
|
taos> show stables;
|
||||||
name | created_time | columns | tags | tables |
|
name | created_time | columns | tags | tables |
|
||||||
============================================================================================
|
============================================================================================
|
||||||
meters.current | 2022-03-30 17:04:10.877 | 2 | 2 | 2 |
|
meters.current | 2022-03-30 17:04:10.877 | 2 | 2 | 2 |
|
||||||
|
@ -73,8 +73,8 @@ Query OK, 2 row(s) in set (0.002544s)
|
||||||
taos> select tbname, * from `meters.current`;
|
taos> select tbname, * from `meters.current`;
|
||||||
tbname | _ts | _value | groupid | location |
|
tbname | _ts | _value | groupid | location |
|
||||||
==================================================================================================================================
|
==================================================================================================================================
|
||||||
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LoSangeles |
|
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.249 | 10.800000000 | 3 | California.LosAngeles |
|
||||||
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.250 | 11.300000000 | 3 | California.LoSangeles |
|
t_0e7bcfa21a02331c06764f275... | 2022-03-28 09:56:51.250 | 11.300000000 | 3 | California.LosAngeles |
|
||||||
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.249 | 10.300000000 | 2 | California.SanFrancisco |
|
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.249 | 10.300000000 | 2 | California.SanFrancisco |
|
||||||
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
|
t_7e7b26dd860280242c6492a16... | 2022-03-28 09:56:51.250 | 12.600000000 | 2 | California.SanFrancisco |
|
||||||
Query OK, 4 row(s) in set (0.005399s)
|
Query OK, 4 row(s) in set (0.005399s)
|
||||||
|
|
|
@ -15,7 +15,7 @@ import CJson from "./_c_opts_json.mdx";
|
||||||
|
|
||||||
## Introduction
|
## Introduction
|
||||||
|
|
||||||
A JSON string is used in OpenTSDB JSON to represent one or more rows of data, for example:
|
A JSON string is used in OpenTSDB JSON to represent one or more rows of data. For example:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
[
|
[
|
||||||
|
@ -42,10 +42,10 @@ A JSON string is used in OpenTSDB JSON to represent one or more rows of data, fo
|
||||||
|
|
||||||
Similar to OpenTSDB line protocol, `metric` will be used as the STable name, `timestamp` is the timestamp to be used, `value` represents the metric collected, `tags` are the tag sets.
|
Similar to OpenTSDB line protocol, `metric` will be used as the STable name, `timestamp` is the timestamp to be used, `value` represents the metric collected, `tags` are the tag sets.
|
||||||
|
|
||||||
|
|
||||||
Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http/put.html) for more details.
|
Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http/put.html) for more details.
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
- In JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
|
- In JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
|
||||||
- Only data in array format is accepted and so an array must be used even if there is only one row.
|
- Only data in array format is accepted and so an array must be used even if there is only one row.
|
||||||
|
|
||||||
|
@ -74,13 +74,13 @@ Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
The above sample code will created 2 STables automatically while each STable has 2 rows of data.
|
The above sample code automatically creates 2 STables, each containing 2 rows of data.
|
||||||
|
|
||||||
```cmd
|
```cmd
|
||||||
taos> use test;
|
taos> use test;
|
||||||
Database changed.
|
Database changed.
|
||||||
|
|
||||||
taos> show STables;
|
taos> show stables;
|
||||||
name | created_time | columns | tags | tables |
|
name | created_time | columns | tags | tables |
|
||||||
============================================================================================
|
============================================================================================
|
||||||
meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 |
|
meters.current | 2022-03-29 16:05:25.193 | 2 | 2 | 1 |
|
||||||
|
|
|
@ -0,0 +1,441 @@
|
||||||
|
---
|
||||||
|
sidebar_label: High Performance Writing
|
||||||
|
title: High Performance Writing
|
||||||
|
---
|
||||||
|
|
||||||
|
import Tabs from "@theme/Tabs";
|
||||||
|
import TabItem from "@theme/TabItem";
|
||||||
|
|
||||||
|
This chapter introduces how to write data into TDengine with high throughput.
|
||||||
|
|
||||||
|
## How to achieve high performance data writing
|
||||||
|
|
||||||
|
To achieve high-performance writing, there are a few aspects to consider. The following sections describe these important factors.
|
||||||
|
|
||||||
|
### Application Program
|
||||||
|
|
||||||
|
From the perspective of the application program, you need to consider:
|
||||||
|
|
||||||
|
1. The data size of each single write, also known as the batch size. Generally speaking, a higher batch size produces better writing performance. However, once the batch size exceeds a certain value, you will not get any additional benefit. When using SQL to write into TDengine, it's better to put as much data as possible in a single SQL statement. The maximum SQL length supported by TDengine is 1,048,576 bytes, i.e. 1 MB.
|
||||||
|
|
||||||
|
2. The number of concurrent connections. Normally, more connections produce better results. However, once the number of connections exceeds the processing ability of the server side, performance may degrade.
|
||||||
|
|
||||||
|
3. The distribution of data to be written across tables or sub-tables. Writing to a single table in one batch is more efficient than writing to multiple tables in one batch.
|
||||||
|
|
||||||
|
4. Data Writing Protocol.
|
||||||
|
   - Parameter binding mode is more efficient than SQL because it doesn't have the cost of parsing SQL.
|
||||||
|
   - Writing to known existing tables is more efficient than writing to uncertain tables in automatic table creation mode, because the latter needs to check whether the table exists before actually writing data into it.
|
||||||
|
   - Writing in SQL is more efficient than writing in schemaless mode, because schemaless writing creates tables automatically and may alter table schemas.
|
||||||
|
|
||||||
|
Application programs need to take care of the above factors and try to take advantage of them. The application program should write to a single table in each write batch. The batch size needs to be tuned to a proper value on a specific system, and the number of concurrent connections needs to be tuned as well to achieve the best writing throughput. A minimal sketch of composing such a single-table batch follows.
|
||||||
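The sketch below illustrates the batching idea only; it is not the SQLWriter class used by the sample programs later in this chapter. It assumes a JDBC `Connection` is already open and that the target subtable exists.

```java
import java.sql.Connection;
import java.sql.Statement;
import java.util.List;

class BatchSketch {
    static final int MAX_SQL_LENGTH = 1024 * 1024; // stay under the 1 MB per-statement limit

    // Each element of `rows` is a pre-formatted value group such as
    // "(1538548685000, 10.3, 219, 0.31)".
    static void writeBatch(Connection conn, String table, List<String> rows) throws Exception {
        StringBuilder sql = new StringBuilder("INSERT INTO ").append(table).append(" VALUES ");
        for (String row : rows) {
            if (sql.length() + row.length() >= MAX_SQL_LENGTH) {
                break; // flush what fits; remaining rows go into the next batch
            }
            sql.append(row).append(' ');
        }
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(sql.toString()); // a single SQL statement carries the whole batch
        }
    }
}
```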
|
|
||||||
|
### Data Source
|
||||||
|
|
||||||
|
Application programs need to read data from a data source and then write it into TDengine. If you meet one or more of the situations below, you need to set up message queues between the threads that read from the data source and the threads that write into TDengine; a minimal sketch of this routing follows the list below.
|
||||||
|
|
||||||
|
1. There are multiple data sources, and the data generation speed of each data source is much slower than the speed of a single writing thread. In this case, the purpose of message queues is to consolidate the data from multiple data sources to increase the batch size of a single write.
|
||||||
|
2. The speed of data generation from a single data source is much higher than the speed of a single writing thread. The purpose of the message queue in this case is to provide a buffer so that data is not lost and multiple writing threads can get data from the buffer.
|
||||||
|
3. The data for a single table comes from multiple data sources. In this case, the purpose of message queues is to combine the data for each table to improve write efficiency.
|
||||||
|
|
||||||
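The following sketch shows one possible way to implement such queues in Java, assuming one bounded queue per writing thread; it is an illustration only, not the ReadTask/WriteTask code shown later in this chapter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class QueueRouter {
    private final List<BlockingQueue<String>> queues = new ArrayList<>();

    QueueRouter(int writerCount, int capacity) {
        for (int i = 0; i < writerCount; i++) {
            queues.add(new ArrayBlockingQueue<>(capacity)); // one bounded queue per writing thread
        }
    }

    // Rows of the same table always hash to the same queue, so each table is
    // written by exactly one writing thread.
    void dispatch(String tableName, String row) throws InterruptedException {
        int idx = Math.floorMod(tableName.hashCode(), queues.size());
        queues.get(idx).put(row); // blocks when the queue is full
    }

    BlockingQueue<String> queueFor(int writerIndex) {
        return queues.get(writerIndex);
    }
}
```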
|
If the data source is Kafka, the application program is a Kafka consumer and can take advantage of several Kafka features to achieve high-performance writing (a consumer sketch follows this list):
|
||||||
|
|
||||||
|
1. Put the data for a table in a single partition of a single topic so that it's easier to group the data for each table and write it in batches.
|
||||||
|
2. Subscribe to multiple topics to accumulate data together.
|
||||||
|
3. Add more consumers to gain more concurrency and throughput.
|
||||||
|
4. Increase the size of a single fetch to increase the size of the write batch.
|
||||||
|
|
||||||
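The sketch below shows these points with the Kafka Java client; the broker address, topic names, group id, and the `max.poll.records` value are assumptions for illustration only.

```java
import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MetersConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "tdengine-writers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("max.poll.records", "3000"); // larger fetches allow larger write batches

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribing to several topics accumulates data from multiple sources.
            consumer.subscribe(Arrays.asList("meters-0", "meters-1"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    // The record key could carry the table name; route the value to the
                    // matching writer queue here instead of printing it.
                    System.out.println(record.key() + " -> " + record.value());
                }
            }
        }
    }
}
```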
|
### Tune TDengine
|
||||||
|
|
||||||
|
On the server side, the database configuration parameter `vgroups` needs to be set carefully to maximize system performance. If it's set too low, the system capability can't be fully utilized; if it's set too high, unnecessary resource competition may occur. A typical recommendation for the `vgroups` parameter is 2 times the number of CPU cores. However, depending on the actual system resources, it may still need to be tuned.
|
||||||
|
|
||||||
|
For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config).
|
||||||
|
|
||||||
|
## Sample Programs
|
||||||
|
|
||||||
|
This section introduces sample programs that demonstrate how to write into TDengine with high performance.
|
||||||
|
|
||||||
|
### Scenario
|
||||||
|
|
||||||
|
Below is the scenario for the sample programs of high-performance writing.
|
||||||
|
|
||||||
|
- The application program reads data from a data source; the sample program simulates a data source by generating data.
|
||||||
|
- The speed of a single writing thread is much slower than the speed of generating data, so the program starts multiple writing threads. Each thread establishes a connection to TDengine and has a message queue of a fixed size.
|
||||||
|
- The application program maps the received data to different writing threads based on the table name, to make sure all the data for each table is always processed by a specific writing thread.
|
||||||
|
- Each writing thread writes the received data into TDengine once the amount of data read reaches a threshold or the message queue becomes empty.
|
||||||
|
|
||||||
|

|
||||||
|
|
||||||
|
### Sample Programs
|
||||||
|
|
||||||
|
The sample programs listed in this section are based on the scenario described previously. If your scenario is different, please try to adjust the code based on the principles described in this chapter.
|
||||||
|
|
||||||
|
The sample programs assume the source data is for different sub tables of the same super table (meters). The super table has been created before the sample program starts writing data. Sub tables are created automatically according to the received data. If there are multiple super tables in your case, please adjust the part that creates tables automatically.
|
||||||
|
|
||||||
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
<TabItem label="Java" value="java">
|
||||||
|
|
||||||
|
**Program Inventory**
|
||||||
|
|
||||||
|
| Class | Description |
|
||||||
|
| ---------------- | ----------------------------------------------------------------------------------------------------- |
|
||||||
|
| FastWriteExample | Main Program |
|
||||||
|
| ReadTask | Read data from simulated data source and put into a queue according to the hash value of table name |
|
||||||
|
| WriteTask | Read data from the queue, compose a write batch and write into TDengine |
|
||||||
|
| MockDataSource | Generate data for some sub tables of super table meters |
|
||||||
|
| SQLWriter | WriteTask uses this class to compose SQL, create table automatically, check SQL length and write data |
|
||||||
|
| StmtWriter | Write in Parameter binding mode (Not finished yet) |
|
||||||
|
| DataBaseMonitor | Calculate the writing speed and output on console every 10 seconds |
|
||||||
|
|
||||||
|
Below is the complete code for the classes in the above table, with more detailed descriptions.
|
||||||
|
|
||||||
|
<details>
|
||||||
|
<summary>FastWriteExample</summary>
|
||||||
|
The main program is responsible for:
|
||||||
|
|
||||||
|
1. Create message queues
|
||||||
|
2. Start writing threads
|
||||||
|
3. Start reading threads
|
||||||
|
4. Output the writing speed every 10 seconds
|
||||||
|
|
||||||
|
The main program provides 4 parameters for tuning:
|
||||||
|
|
||||||
|
1. The number of reading threads, default value is 1
|
||||||
|
2. The number of writing threads, default value is 2
|
||||||
|
3. The total number of tables in the generated data, default value is 1000. These tables are distributed evenly across all writing threads. If the number of tables is very large, it will take a long time to create them at the start.
|
||||||
|
4. The batch size of a single write, default value is 3,000
|
||||||
|
|
||||||
|
The capacity of the message queue also impacts performance and can be tuned by modifying the program. Normally it's better to have a larger message queue: it means a lower possibility of being blocked when enqueueing and higher throughput, but it also consumes more memory. The default value used in the sample programs is already big enough.
|
||||||
|
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
<summary>ReadTask</summary>
|
||||||
|
|
||||||
|
ReadTask reads data from the data source. Each ReadTask is associated with a simulated data source; each data source generates data for a specific group of tables, and the data of any given table is generated by only one data source.
|
||||||
|
|
||||||
|
ReadTask puts data into the message queue in blocking mode. That means the put operation blocks if the message queue is full.
|
||||||
|
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/ReadTask.java}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
<summary>WriteTask</summary>
|
||||||
|
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
|
||||||
|
<summary>MockDataSource</summary>
|
||||||
|
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/MockDataSource.java}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
|
||||||
|
<summary>SQLWriter</summary>
|
||||||
|
|
||||||
|
The SQLWriter class encapsulates the logic of composing SQL and writing data. Note that the tables are not created before writing; they are created automatically when a "table does not exist" exception is caught. For other exceptions, the SQL statement that caused the exception is logged so that you can debug it.
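
A rough sketch of this pattern follows; the error-message check and method names are assumptions for illustration, while the real SQLWriter code below inspects the actual TDengine error code.

```java
import java.sql.SQLException;
import java.sql.Statement;

class CreateTableOnDemandSketch {
    // Try the insert first; create the missing sub table only when TDengine
    // reports that it does not exist, then retry the insert once.
    static void writeBatch(Statement stmt, String insertSql, String createTableSql) throws SQLException {
        try {
            stmt.executeUpdate(insertSql);
        } catch (SQLException e) {
            if (e.getMessage() != null && e.getMessage().contains("Table does not exist")) {
                stmt.executeUpdate(createTableSql);
                stmt.executeUpdate(insertSql);
            } else {
                // For any other error, log the failing SQL so it can be debugged.
                System.err.println("Failed SQL: " + insertSql);
                throw e;
            }
        }
    }
}
```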
|
||||||
|
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/SQLWriter.java}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
|
||||||
|
<summary>DataBaseMonitor</summary>
|
||||||
|
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/DataBaseMonitor.java}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
**Steps to Launch**
|
||||||
|
|
||||||
|
<details>
|
||||||
|
<summary>Launch Java Sample Program</summary>
|
||||||
|
|
||||||
|
You need to set the environment variable `TDENGINE_JDBC_URL` before launching the program. If TDengine Server is set up on localhost, the default values for user name, password, and port can be used, as shown below:
|
||||||
|
|
||||||
|
```
|
||||||
|
TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
|
||||||
|
```
|
||||||
|
|
||||||
|
**Launch in IDE**
|
||||||
|
|
||||||
|
1. Clone the TDengine repository
|
||||||
|
```
|
||||||
|
git clone git@github.com:taosdata/TDengine.git --depth 1
|
||||||
|
```
|
||||||
|
2. Open the `docs/examples/java` directory in your IDE
|
||||||
|
3. Configure the environment variable `TDENGINE_JDBC_URL`. If you already configured it before launching the IDE, you can skip this step.
|
||||||
|
4. Run class `com.taos.example.highvolume.FastWriteExample`
|
||||||
|
|
||||||
|
**Launch on server**
|
||||||
|
|
||||||
|
If you want to launch the sample program on a remote server, follow the steps below:
|
||||||
|
|
||||||
|
1. Package the sample programs. Execute the command below in the `TDengine/docs/examples/java` directory:
|
||||||
|
```
|
||||||
|
mvn package
|
||||||
|
```
|
||||||
|
2. Create the `examples/java` directory on the server
|
||||||
|
```
|
||||||
|
mkdir -p examples/java
|
||||||
|
```
|
||||||
|
3. Copy the dependencies (the commands below assume you are working on a local Windows host and launching on a remote Linux host)
|
||||||
|
- Copy dependent packages
|
||||||
|
```
|
||||||
|
scp -r .\target\lib <user>@<host>:~/examples/java
|
||||||
|
```
|
||||||
|
- Copy the jar of sample programs
|
||||||
|
```
|
||||||
|
scp -r .\target\javaexample-1.0.jar <user>@<host>:~/examples/java
|
||||||
|
```
|
||||||
|
4. Configure environment variable
|
||||||
|
Edit `~/.bash_profile` or `~/.bashrc` and add the following:
|
||||||
|
|
||||||
|
```
|
||||||
|
export TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
|
||||||
|
```
|
||||||
|
|
||||||
|
If your TDengine server is not deployed on localhost or doesn't use the default port, change the above URL to the correct value for your environment.
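
For example, assuming a hypothetical host name `tdengine-server` and a non-default port 7030, the URL might look like the following (replace the user name and password as appropriate):

```
export TDENGINE_JDBC_URL="jdbc:TAOS://tdengine-server:7030?user=root&password=taosdata"
```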
|
||||||
|
|
||||||
|
5. Launch the sample program
|
||||||
|
|
||||||
|
```
|
||||||
|
java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample <read_thread_count> <write_thread_count> <total_table_count> <max_batch_size>
|
||||||
|
```
|
||||||
|
|
||||||
|
6. The sample program doesn't exit unless you press <kbd>CTRL</kbd> + <kbd>C</kbd> to terminate it.
|
||||||
|
Below is the output from a run on a server with 16 cores, 64 GB of memory, and an SSD.
|
||||||
|
|
||||||
|
```
|
||||||
|
root@vm85$ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample 2 12
|
||||||
|
18:56:35.896 [main] INFO c.t.e.highvolume.FastWriteExample - readTaskCount=2, writeTaskCount=12 tableCount=1000 maxBatchSize=3000
|
||||||
|
18:56:36.011 [WriteThread-0] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.015 [WriteThread-0] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.021 [WriteThread-1] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.022 [WriteThread-1] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.031 [WriteThread-2] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.032 [WriteThread-2] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.041 [WriteThread-3] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.042 [WriteThread-3] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.093 [WriteThread-4] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.094 [WriteThread-4] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.099 [WriteThread-5] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.100 [WriteThread-5] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.100 [WriteThread-6] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.101 [WriteThread-6] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.103 [WriteThread-7] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.104 [WriteThread-7] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.105 [WriteThread-8] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.107 [WriteThread-8] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.108 [WriteThread-9] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.109 [WriteThread-9] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.156 [WriteThread-10] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.157 [WriteThread-11] INFO c.taos.example.highvolume.WriteTask - started
|
||||||
|
18:56:36.158 [WriteThread-10] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:36.158 [ReadThread-0] INFO com.taos.example.highvolume.ReadTask - started
|
||||||
|
18:56:36.158 [ReadThread-1] INFO com.taos.example.highvolume.ReadTask - started
|
||||||
|
18:56:36.158 [WriteThread-11] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
|
||||||
|
18:56:46.369 [main] INFO c.t.e.highvolume.FastWriteExample - count=18554448 speed=1855444
|
||||||
|
18:56:56.946 [main] INFO c.t.e.highvolume.FastWriteExample - count=39059660 speed=2050521
|
||||||
|
18:57:07.322 [main] INFO c.t.e.highvolume.FastWriteExample - count=59403604 speed=2034394
|
||||||
|
18:57:18.032 [main] INFO c.t.e.highvolume.FastWriteExample - count=80262938 speed=2085933
|
||||||
|
18:57:28.432 [main] INFO c.t.e.highvolume.FastWriteExample - count=101139906 speed=2087696
|
||||||
|
18:57:38.921 [main] INFO c.t.e.highvolume.FastWriteExample - count=121807202 speed=2066729
|
||||||
|
18:57:49.375 [main] INFO c.t.e.highvolume.FastWriteExample - count=142952417 speed=2114521
|
||||||
|
18:58:00.689 [main] INFO c.t.e.highvolume.FastWriteExample - count=163650306 speed=2069788
|
||||||
|
18:58:11.646 [main] INFO c.t.e.highvolume.FastWriteExample - count=185019808 speed=2136950
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem label="Python" value="python">
|
||||||
|
|
||||||
|
**Program Inventory**
|
||||||
|
|
||||||
|
The Python sample program uses multiple processes and cross-process message queues.
|
||||||
|
|
||||||
|
| Function/Class               | Description |
|
||||||
|
| ---------------------------- | --------------------------------------------------------------------------- |
|
||||||
|
| main function                | Program entry point; creates child processes and message queues |
|
||||||
|
| run_monitor_process function | Creates the database and super table, calculates the writing speed, and outputs it to the console |
|
||||||
|
| run_read_task function       | Reads data and distributes it to the message queues |
|
||||||
|
| MockDataSource class         | Simulates the data source; returns the next 1,000 rows of each table |
|
||||||
|
| run_write_task function      | Reads as much data as possible from the message queue and writes it in batches |
|
||||||
|
| SQLWriter class              | Writes data using SQL and creates tables automatically |
|
||||||
|
| StmtWriter class             | Writes in parameter binding mode (not finished yet) |
|
||||||
|
|
||||||
|
<details>
|
||||||
|
<summary>main function</summary>
|
||||||
|
|
||||||
|
The `main` function is responsible for creating the message queues and forking child processes. There are 3 kinds of child processes:
|
||||||
|
|
||||||
|
1. Monitoring process: initializes the database and calculates the writing speed
|
||||||
|
2. Reading processes (n): read data from the data source
|
||||||
|
3. Writing processes (m): write data into TDengine
|
||||||
|
|
||||||
|
The `main` function accepts 5 parameters:
|
||||||
|
|
||||||
|
1. The number of reading tasks, default value is 1
|
||||||
|
2. The number of writing tasks, default value is 1
|
||||||
|
3. The number of tables, default value is 1,000
|
||||||
|
4. The capacity of message queue, default value is 1,000,000 bytes
|
||||||
|
5. The batch size in single write, default value is 3000
|
||||||
|
|
||||||
|
```python
|
||||||
|
{{#include docs/examples/python/fast_write_example.py:main}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
<summary>run_monitor_process</summary>
|
||||||
|
|
||||||
|
The monitoring process initializes the database and monitors the writing speed.
|
||||||
|
|
||||||
|
```python
|
||||||
|
{{#include docs/examples/python/fast_write_example.py:monitor}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
|
||||||
|
<summary>run_read_task function</summary>
|
||||||
|
|
||||||
|
The reading process reads data from another data system and distributes it to the message queue allocated to it.
|
||||||
|
|
||||||
|
```python
|
||||||
|
{{#include docs/examples/python/fast_write_example.py:read}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
|
||||||
|
<summary>MockDataSource</summary>
|
||||||
|
|
||||||
|
Below is the simulated data source; we assume that every generated record contains its table name.
|
||||||
|
|
||||||
|
```python
|
||||||
|
{{#include docs/examples/python/mockdatasource.py}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
<summary>run_write_task function</summary>
|
||||||
|
|
||||||
|
The writing process reads as much data as possible from the message queue and writes it in batches.
|
||||||
|
|
||||||
|
```python
|
||||||
|
{{#include docs/examples/python/fast_write_example.py:write}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
<details>
|
||||||
|
|
||||||
|
<summary>SQLWriter</summary>
|
||||||
|
|
||||||
|
The SQLWriter class encapsulates the logic of composing SQL and writing data. Note that the tables are not created before writing; they are created automatically when a "table does not exist" exception is caught. For other exceptions, the SQL statement that caused the exception is logged so that you can debug it. This class also checks the SQL length and uses the parameter maxSQLLength to pass the maximum SQL length allowed by TDengine.
|
||||||
|
|
||||||
|
```python
|
||||||
|
{{#include docs/examples/python/sql_writer.py}}
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
**Steps to Launch**
|
||||||
|
|
||||||
|
<details>
|
||||||
|
|
||||||
|
<summary>Launch Sample Program in Python</summary>
|
||||||
|
|
||||||
|
1. Prerequisites
|
||||||
|
|
||||||
|
- TDengine client driver has been installed
|
||||||
|
- Python 3 has been installed (version 3.8 or later)
|
||||||
|
- TDengine Python connector `taospy` has been installed
|
||||||
|
|
||||||
|
2. Install faster-fifo, which replaces the Python built-in multiprocessing.Queue
|
||||||
|
|
||||||
|
```
|
||||||
|
pip3 install faster-fifo
|
||||||
|
```
|
||||||
|
|
||||||
|
3. Click the "Copy" in the above sample programs to copy `fast_write_example.py` 、 `sql_writer.py` and `mockdatasource.py`.
|
||||||
|
|
||||||
|
4. Execute the program
|
||||||
|
|
||||||
|
```
|
||||||
|
python3 fast_write_example.py <READ_TASK_COUNT> <WRITE_TASK_COUNT> <TABLE_COUNT> <QUEUE_SIZE> <MAX_BATCH_SIZE>
|
||||||
|
```
|
||||||
|
|
||||||
|
Below is the output from a run on a server with 16 cores, 64 GB of memory, and an SSD.
|
||||||
|
|
||||||
|
```
|
||||||
|
root@vm85$ python3 fast_write_example.py 8 8
|
||||||
|
2022-07-14 19:13:45,869 [root] - READ_TASK_COUNT=8, WRITE_TASK_COUNT=8, TABLE_COUNT=1000, QUEUE_SIZE=1000000, MAX_BATCH_SIZE=3000
|
||||||
|
2022-07-14 19:13:48,882 [root] - WriteTask-0 started with pid 718347
|
||||||
|
2022-07-14 19:13:48,883 [root] - WriteTask-1 started with pid 718348
|
||||||
|
2022-07-14 19:13:48,884 [root] - WriteTask-2 started with pid 718349
|
||||||
|
2022-07-14 19:13:48,884 [root] - WriteTask-3 started with pid 718350
|
||||||
|
2022-07-14 19:13:48,885 [root] - WriteTask-4 started with pid 718351
|
||||||
|
2022-07-14 19:13:48,885 [root] - WriteTask-5 started with pid 718352
|
||||||
|
2022-07-14 19:13:48,886 [root] - WriteTask-6 started with pid 718353
|
||||||
|
2022-07-14 19:13:48,886 [root] - WriteTask-7 started with pid 718354
|
||||||
|
2022-07-14 19:13:48,887 [root] - ReadTask-0 started with pid 718355
|
||||||
|
2022-07-14 19:13:48,888 [root] - ReadTask-1 started with pid 718356
|
||||||
|
2022-07-14 19:13:48,889 [root] - ReadTask-2 started with pid 718357
|
||||||
|
2022-07-14 19:13:48,889 [root] - ReadTask-3 started with pid 718358
|
||||||
|
2022-07-14 19:13:48,890 [root] - ReadTask-4 started with pid 718359
|
||||||
|
2022-07-14 19:13:48,891 [root] - ReadTask-5 started with pid 718361
|
||||||
|
2022-07-14 19:13:48,892 [root] - ReadTask-6 started with pid 718364
|
||||||
|
2022-07-14 19:13:48,893 [root] - ReadTask-7 started with pid 718365
|
||||||
|
2022-07-14 19:13:56,042 [DataBaseMonitor] - count=6676310 speed=667631.0
|
||||||
|
2022-07-14 19:14:06,196 [DataBaseMonitor] - count=20004310 speed=1332800.0
|
||||||
|
2022-07-14 19:14:16,366 [DataBaseMonitor] - count=32290310 speed=1228600.0
|
||||||
|
2022-07-14 19:14:26,527 [DataBaseMonitor] - count=44438310 speed=1214800.0
|
||||||
|
2022-07-14 19:14:36,673 [DataBaseMonitor] - count=56608310 speed=1217000.0
|
||||||
|
2022-07-14 19:14:46,834 [DataBaseMonitor] - count=68757310 speed=1214900.0
|
||||||
|
2022-07-14 19:14:57,280 [DataBaseMonitor] - count=80992310 speed=1223500.0
|
||||||
|
2022-07-14 19:15:07,689 [DataBaseMonitor] - count=93805310 speed=1281300.0
|
||||||
|
2022-07-14 19:15:18,020 [DataBaseMonitor] - count=106111310 speed=1230600.0
|
||||||
|
2022-07-14 19:15:28,356 [DataBaseMonitor] - count=118394310 speed=1228300.0
|
||||||
|
2022-07-14 19:15:38,690 [DataBaseMonitor] - count=130742310 speed=1234800.0
|
||||||
|
2022-07-14 19:15:49,000 [DataBaseMonitor] - count=143051310 speed=1230900.0
|
||||||
|
2022-07-14 19:15:59,323 [DataBaseMonitor] - count=155276310 speed=1222500.0
|
||||||
|
2022-07-14 19:16:09,649 [DataBaseMonitor] - count=167603310 speed=1232700.0
|
||||||
|
2022-07-14 19:16:19,995 [DataBaseMonitor] - count=179976310 speed=1237300.0
|
||||||
|
```
|
||||||
|
|
||||||
|
</details>
|
||||||
|
|
||||||
|
:::note
|
||||||
|
Don't establish a connection to TDengine in the parent process when using the Python connector with multiple processes; otherwise, all connections in the child processes will be blocked permanently. This is a known issue.
|
||||||
|
|
||||||
|
:::
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
</Tabs>
|
|
@ -3,6 +3,6 @@
|
||||||
```
|
```
|
||||||
|
|
||||||
:::tip
|
:::tip
|
||||||
`github.com/taosdata/driver-go/v2/wrapper` module in driver-go is the wrapper for C API, it can be used to insert data with parameter binding.
|
`github.com/taosdata/driver-go/v3/wrapper` module in driver-go is the wrapper for C API, it can be used to insert data with parameter binding.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -0,0 +1,3 @@
|
||||||
|
```php
|
||||||
|
{{#include docs/examples/php/insert.php}}
|
||||||
|
```
|
|
@ -0,0 +1,3 @@
|
||||||
|
```php
|
||||||
|
{{#include docs/examples/php/insert_stmt.php}}
|
||||||
|
```
|
Binary file not shown.
After Width: | Height: | Size: 7.1 KiB |
|
@ -0,0 +1,3 @@
|
||||||
|
```php
|
||||||
|
{{#include docs/examples/php/query.php}}
|
||||||
|
```
|
|
@ -1,6 +1,5 @@
|
||||||
---
|
---
|
||||||
Sidebar_label: Query data
|
title: Query Data
|
||||||
title: Query data
|
|
||||||
description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
|
description: "This chapter introduces major query functionalities and how to perform sync and async query using connectors."
|
||||||
---
|
---
|
||||||
|
|
||||||
|
@ -13,6 +12,7 @@ import RustQuery from "./_rust.mdx";
|
||||||
import NodeQuery from "./_js.mdx";
|
import NodeQuery from "./_js.mdx";
|
||||||
import CsQuery from "./_cs.mdx";
|
import CsQuery from "./_cs.mdx";
|
||||||
import CQuery from "./_c.mdx";
|
import CQuery from "./_c.mdx";
|
||||||
|
import PhpQuery from "./_php.mdx";
|
||||||
import PyAsync from "./_py_async.mdx";
|
import PyAsync from "./_py_async.mdx";
|
||||||
import NodeAsync from "./_js_async.mdx";
|
import NodeAsync from "./_js_async.mdx";
|
||||||
import CsAsync from "./_cs_async.mdx";
|
import CsAsync from "./_cs_async.mdx";
|
||||||
|
@ -24,9 +24,8 @@ SQL is used by TDengine as its query language. Application programs can send SQL
|
||||||
|
|
||||||
- Query on single column or multiple columns
|
- Query on single column or multiple columns
|
||||||
- Filter on tags or data columns:>, <, =, <\>, like
|
- Filter on tags or data columns:>, <, =, <\>, like
|
||||||
- Grouping of results: `Group By`
|
- Grouping of results: `Group By`
- Sorting of results: `Order By`
- Limit the number of results: `Limit/Offset`
|
||||||
- Sorting of results: `Order By`
|
- Windowed aggregate queries for time windows (interval), session windows (session), and state windows (state_window)
|
||||||
- Limit the number of results: `Limit/Offset`
|
|
||||||
- Arithmetic on columns of numeric types or aggregate results
|
- Arithmetic on columns of numeric types or aggregate results
|
||||||
- Join query with timestamp alignment
|
- Join query with timestamp alignment
|
||||||
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
|
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
|
||||||
|
@ -34,10 +33,6 @@ SQL is used by TDengine as its query language. Application programs can send SQL
|
||||||
For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.
|
For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
select * from d1001 where voltage > 215 order by ts desc limit 2;
|
|
||||||
```
|
|
||||||
|
|
||||||
```title=Output
|
|
||||||
taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
|
taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
|
||||||
ts | current | voltage | phase |
|
ts | current | voltage | phase |
|
||||||
======================================================================================
|
======================================================================================
|
||||||
|
@ -46,89 +41,88 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
|
||||||
Query OK, 2 row(s) in set (0.001100s)
|
Query OK, 2 row(s) in set (0.001100s)
|
||||||
```
|
```
|
||||||
|
|
||||||
To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
|
To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row).
|
||||||
|
|
||||||
For detailed query syntax please refer to [Select](/taos-sql/select).
|
For detailed query syntax, see [Select](../../taos-sql/select).
|
||||||
|
|
||||||
## Aggregation among Tables
|
## Aggregation among Tables
|
||||||
|
|
||||||
In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
|
In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.
|
||||||
|
|
||||||
In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables can not be aggregated.
|
|
||||||
|
|
||||||
### Example 1
|
### Example 1
|
||||||
|
|
||||||
In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
|
In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.
|
||||||
|
|
||||||
```
|
```
|
||||||
taos> SELECT AVG(voltage) FROM meters GROUP BY location;
|
taos> SELECT AVG(voltage), location FROM meters GROUP BY location;
|
||||||
avg(voltage) | location |
|
avg(voltage) | location |
|
||||||
=============================================================
|
===============================================================================================
|
||||||
222.000000000 | California.LosAngeles |
|
|
||||||
219.200000000 | California.SanFrancisco |
|
219.200000000 | California.SanFrancisco |
|
||||||
Query OK, 2 row(s) in set (0.002136s)
|
221.666666667 | California.LosAngeles |
|
||||||
|
Query OK, 2 rows in database (0.005995s)
|
||||||
```
|
```
|
||||||
|
|
||||||
### Example 2
|
### Example 2
|
||||||
|
|
||||||
In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
|
In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current from meters whose groupId is 2.
|
||||||
|
|
||||||
```
|
```
|
||||||
taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
|
taos> SELECT count(*), max(current) FROM meters where groupId = 2;
|
||||||
count(*) | max(current) |
|
count(*) | max(current) |
|
||||||
==================================
|
==================================
|
||||||
5 | 13.4 |
|
5 | 13.4 |
|
||||||
Query OK, 1 row(s) in set (0.002136s)
|
Query OK, 1 row(s) in set (0.002136s)
|
||||||
```
|
```
|
||||||
|
|
||||||
Join queries are only allowed between subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
|
In [Select](../../taos-sql/select), all query operations are marked as to whether they support STables or not.
|
||||||
|
|
||||||
## Down Sampling and Interpolation
|
## Down Sampling and Interpolation
|
||||||
|
|
||||||
In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
|
In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
|
||||||
|
|
||||||
```
|
```
|
||||||
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
|
taos> SELECT _wstart, sum(current) FROM d1001 INTERVAL(10s);
|
||||||
ts | sum(current) |
|
_wstart | sum(current) |
|
||||||
======================================================
|
======================================================
|
||||||
2018-10-03 14:38:00.000 | 10.300000191 |
|
2018-10-03 14:38:00.000 | 10.300000191 |
|
||||||
2018-10-03 14:38:10.000 | 24.900000572 |
|
2018-10-03 14:38:10.000 | 24.900000572 |
|
||||||
Query OK, 2 row(s) in set (0.000883s)
|
Query OK, 2 rows in database (0.003139s)
|
||||||
```
|
```
|
||||||
|
|
||||||
Down sampling can also be used for STable. For example, the below SQL statement can be used to get the sum of current from all meters in California.
|
Down sampling can also be used for STable. For example, the below SQL statement can be used to get the sum of current from all meters in California.
|
||||||
|
|
||||||
```
|
```
|
||||||
taos> SELECT SUM(current) FROM meters where location like "California%" INTERVAL(1s);
|
taos> SELECT _wstart, SUM(current) FROM meters where location like "California%" INTERVAL(1s);
|
||||||
ts | sum(current) |
|
_wstart | sum(current) |
|
||||||
======================================================
|
======================================================
|
||||||
2018-10-03 14:38:04.000 | 10.199999809 |
|
2018-10-03 14:38:04.000 | 10.199999809 |
|
||||||
2018-10-03 14:38:05.000 | 32.900000572 |
|
2018-10-03 14:38:05.000 | 23.699999809 |
|
||||||
2018-10-03 14:38:06.000 | 11.500000000 |
|
2018-10-03 14:38:06.000 | 11.500000000 |
|
||||||
2018-10-03 14:38:15.000 | 12.600000381 |
|
2018-10-03 14:38:15.000 | 12.600000381 |
|
||||||
2018-10-03 14:38:16.000 | 36.000000000 |
|
2018-10-03 14:38:16.000 | 34.400000572 |
|
||||||
Query OK, 5 row(s) in set (0.001538s)
|
Query OK, 5 rows in database (0.007413s)
|
||||||
```
|
```
|
||||||
|
|
||||||
Down sampling also supports time offset. For example, the below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
|
Down sampling also supports time offset. For example, the below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
|
||||||
|
|
||||||
```
|
```
|
||||||
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
|
taos> SELECT _wstart, SUM(current) FROM meters INTERVAL(1s, 500a);
|
||||||
ts | sum(current) |
|
_wstart | sum(current) |
|
||||||
======================================================
|
======================================================
|
||||||
2018-10-03 14:38:04.500 | 11.189999809 |
|
2018-10-03 14:38:03.500 | 10.199999809 |
|
||||||
2018-10-03 14:38:05.500 | 31.900000572 |
|
2018-10-03 14:38:04.500 | 10.300000191 |
|
||||||
2018-10-03 14:38:06.500 | 11.600000000 |
|
2018-10-03 14:38:05.500 | 13.399999619 |
|
||||||
2018-10-03 14:38:15.500 | 12.300000381 |
|
2018-10-03 14:38:06.500 | 11.500000000 |
|
||||||
2018-10-03 14:38:16.500 | 35.000000000 |
|
2018-10-03 14:38:14.500 | 12.600000381 |
|
||||||
Query OK, 5 row(s) in set (0.001521s)
|
2018-10-03 14:38:16.500 | 34.400000572 |
|
||||||
|
Query OK, 6 rows in database (0.005515s)
|
||||||
```
|
```
|
||||||
|
|
||||||
In many use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
|
In many use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
|
||||||
|
|
||||||
Interpolation can be performed in TDengine if there is no data in a time range.
|
Interpolation can be performed in TDengine if there is no data in a time range.
|
||||||
|
|
||||||
For more details please refer to [Aggregate by Window](/taos-sql/interval).
|
For more information, see [Aggregate by Window](../../taos-sql/distinguished).
|
||||||
|
|
||||||
## Examples
|
## Examples
|
||||||
|
|
||||||
|
@ -158,6 +152,9 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
|
||||||
<TabItem label="C" value="c">
|
<TabItem label="C" value="c">
|
||||||
<CQuery />
|
<CQuery />
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
<TabItem label="PHP" value="php">
|
||||||
|
<PhpQuery />
|
||||||
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
|
@ -1,83 +0,0 @@
|
||||||
---
|
|
||||||
sidebar_label: Continuous Query
|
|
||||||
description: "Continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing."
|
|
||||||
title: "Continuous Query"
|
|
||||||
---
|
|
||||||
|
|
||||||
A continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing. A continuous query can be performed on a table or STable in TDengine. The results of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
|
|
||||||
|
|
||||||
A continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With a continuous query, the result can be generated based on a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
|
|
||||||
|
|
||||||
There are some differences between continuous query in TDengine and time window computation in stream computing:
|
|
||||||
|
|
||||||
- The computation is performed and the result is returned in real time in stream computing, but the computation in continuous query is only started when a time window closes. For example, if the time window is 1 day, then the result will only be generated at 23:59:59.
|
|
||||||
- If a historical data row is written in to a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
|
|
||||||
- In continuous query, if the result is pushed to a client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.
|
|
||||||
|
|
||||||
## Syntax
|
|
||||||
|
|
||||||
```sql
|
|
||||||
[CREATE TABLE AS] SELECT select_expr [, select_expr ...]
|
|
||||||
FROM {tb_name_list}
|
|
||||||
[WHERE where_condition]
|
|
||||||
[INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
INTERVAL: The time window for which continuous query is performed
|
|
||||||
|
|
||||||
SLIDING: The time step for which the time window moves forward each time
|
|
||||||
|
|
||||||
## How to Use
|
|
||||||
|
|
||||||
In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
|
|
||||||
create table D1001 using meters tags ("California.SanFrancisco", 2);
|
|
||||||
create table D1002 using meters tags ("California.LosAngeles", 2);
|
|
||||||
```
|
|
||||||
|
|
||||||
The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
select avg(voltage) from meters interval(1m) sliding(30s);
|
|
||||||
```
|
|
||||||
|
|
||||||
Whenever the above SQL statement is executed, all the existing data will be computed again. If the computation needs to be performed every 30 seconds automatically to compute on the data in the past one minute, the above SQL statement needs to be revised as below, in which `{startTime}` stands for the beginning timestamp in the latest time window.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
|
|
||||||
```
|
|
||||||
|
|
||||||
An easier way to achieve this is to prepend `create table {tableName} as` before the `select`.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
|
|
||||||
```
|
|
||||||
|
|
||||||
A table named as `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minute, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
taos> select * from avg_vol;
|
|
||||||
ts | avg_voltage_ |
|
|
||||||
===================================================
|
|
||||||
2020-07-29 13:37:30.000 | 222.0000000 |
|
|
||||||
2020-07-29 13:38:00.000 | 221.3500000 |
|
|
||||||
2020-07-29 13:38:30.000 | 220.1700000 |
|
|
||||||
2020-07-29 13:39:00.000 | 223.0800000 |
|
|
||||||
```
|
|
||||||
|
|
||||||
Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.
|
|
||||||
|
|
||||||
It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
|
|
||||||
```
|
|
||||||
|
|
||||||
`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To avoid the trouble caused by a delay in receiving data as much as possible, the actual computation in a continuous query is started after a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the result are available after a little time, normally within one minute, after the time window closes.
|
|
||||||
|
|
||||||
## How to Manage
|
|
||||||
|
|
||||||
`show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query.
|
|
|
@ -0,0 +1,113 @@
|
||||||
|
---
|
||||||
|
sidebar_label: Stream Processing
|
||||||
|
description: "The TDengine stream processing engine combines data inserts, preprocessing, analytics, real-time computation, and alerting into a single component."
|
||||||
|
title: Stream Processing
|
||||||
|
---
|
||||||
|
|
||||||
|
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. In a traditional time-series solution, this generally requires the deployment of stream processing systems such as Kafka or Flink. However, the complexity of such systems increases the cost of development and maintenance.
|
||||||
|
|
||||||
|
With the stream processing engine built into TDengine, you can process incoming data streams in real time and define stream transformations in SQL. Incoming data is automatically processed, and the results are pushed to specified tables based on triggering rules that you define. This is a lightweight alternative to complex processing engines that returns computation results in milliseconds even in high throughput scenarios.
|
||||||
|
|
||||||
|
The stream processing engine includes data filtering, scalar function computation (including user-defined functions), and window aggregation, with support for sliding windows, session windows, and event windows. Stream processing can write data to supertables from other supertables, standard tables, or subtables. When you create a stream, the target supertable is automatically created. New data is then processed and written to that supertable according to the rules defined for the stream. You can use PARTITION BY statements to partition the data by table name or tag. Separate partitions are then written to different subtables within the target supertable.
|
||||||
|
|
||||||
|
TDengine stream processing supports the aggregation of supertables that are deployed across multiple vnodes. It can also handle out-of-order writes and includes a watermark mechanism that determines the extent to which out-of-order data is accepted by the system. You can configure whether to drop or reprocess out-of-order data through the **ignore expired** parameter.
|
||||||
|
|
||||||
|
For more information, see [Stream Processing](../../taos-sql/stream).
|
||||||
|
|
||||||
|
|
||||||
|
## Create a Stream
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
|
||||||
|
stream_options: {
|
||||||
|
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
||||||
|
WATERMARK time
|
||||||
|
IGNORE EXPIRED [0 | 1]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
For more information, see [Stream Processing](../../taos-sql/stream).
|
||||||
|
|
||||||
|
## Usage Scenario 1
|
||||||
|
|
||||||
|
It is common that smart electrical meter systems for businesses generate millions of data points that are widely dispersed and not ordered. The time required to clean and convert this data makes efficient, real-time processing impossible for traditional solutions. This scenario shows how you can configure TDengine stream processing to drop data points over 220 V, find the maximum voltage for 5 second windows, and output this data to a table.
|
||||||
|
|
||||||
|
### Create a Database for Raw Data
|
||||||
|
|
||||||
|
A database including one supertable and four subtables is created as follows:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
DROP DATABASE IF EXISTS power;
|
||||||
|
CREATE DATABASE power;
|
||||||
|
USE power;
|
||||||
|
|
||||||
|
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
|
||||||
|
|
||||||
|
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
|
||||||
|
CREATE TABLE d1002 USING meters TAGS ("California.SanFrancisco", 3);
|
||||||
|
CREATE TABLE d1003 USING meters TAGS ("California.LosAngeles", 2);
|
||||||
|
CREATE TABLE d1004 USING meters TAGS ("California.LosAngeles", 3);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Create a Stream
|
||||||
|
|
||||||
|
```sql
|
||||||
|
create stream current_stream into current_stream_output_stb as select _wstart as start, _wend as end, max(current) as max_current from meters where voltage <= 220 interval (5s);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Write Data
|
||||||
|
```sql
|
||||||
|
insert into d1001 values("2018-10-03 14:38:05.000", 10.30000, 219, 0.31000);
|
||||||
|
insert into d1001 values("2018-10-03 14:38:15.000", 12.60000, 218, 0.33000);
|
||||||
|
insert into d1001 values("2018-10-03 14:38:16.800", 12.30000, 221, 0.31000);
|
||||||
|
insert into d1002 values("2018-10-03 14:38:16.650", 10.30000, 218, 0.25000);
|
||||||
|
insert into d1003 values("2018-10-03 14:38:05.500", 11.80000, 221, 0.28000);
|
||||||
|
insert into d1003 values("2018-10-03 14:38:16.600", 13.40000, 223, 0.29000);
|
||||||
|
insert into d1004 values("2018-10-03 14:38:05.000", 10.80000, 223, 0.29000);
|
||||||
|
insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000);
|
||||||
|
```
|
||||||
|
|
||||||
|
### Query the Results
|
||||||
|
|
||||||
|
```sql
|
||||||
|
taos> select start, end, max_current from current_stream_output_stb;
|
||||||
|
start | end | max_current |
|
||||||
|
===========================================================================
|
||||||
|
2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
|
||||||
|
2018-10-03 14:38:15.000 | 2018-10-03 14:38:20.000 | 12.60000 |
|
||||||
|
Query OK, 2 rows in database (0.018762s)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Usage Scenario 2
|
||||||
|
|
||||||
|
In this scenario, the active power and reactive power are determined from the data gathered in the previous scenario. The location and name of each meter are concatenated with a period (.) between them, and the data set is partitioned by meter name and written to a new database.
|
||||||
|
|
||||||
|
### Create a Database for Raw Data
|
||||||
|
|
||||||
|
The procedure from the previous scenario is used to create the database.
|
||||||
|
|
||||||
|
### Create a Stream
|
||||||
|
|
||||||
|
```sql
|
||||||
|
create stream power_stream into power_stream_output_stb as select ts, concat_ws(".", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Write data
|
||||||
|
|
||||||
|
The procedure from the previous scenario is used to write the data.
|
||||||
|
|
||||||
|
### Query the Results
|
||||||
|
```sql
|
||||||
|
taos> select ts, meter_location, active_power, reactive_power from power_stream_output_stb;
|
||||||
|
ts | meter_location | active_power | reactive_power |
|
||||||
|
===================================================================================================================
|
||||||
|
2018-10-03 14:38:05.000 | California.LosAngeles.d1004 | 2307.834596289 | 688.687331847 |
|
||||||
|
2018-10-03 14:38:06.500 | California.LosAngeles.d1004 | 2387.415754896 | 871.474763418 |
|
||||||
|
2018-10-03 14:38:05.500 | California.LosAngeles.d1003 | 2506.240411679 | 720.680274962 |
|
||||||
|
2018-10-03 14:38:16.600 | California.LosAngeles.d1003 | 2863.424274422 | 854.482390839 |
|
||||||
|
2018-10-03 14:38:05.000 | California.SanFrancisco.d1001 | 2148.178871730 | 688.120784090 |
|
||||||
|
2018-10-03 14:38:15.000 | California.SanFrancisco.d1001 | 2598.589176205 | 890.081451418 |
|
||||||
|
2018-10-03 14:38:16.800 | California.SanFrancisco.d1001 | 2588.728381186 | 829.240910475 |
|
||||||
|
2018-10-03 14:38:16.650 | California.SanFrancisco.d1002 | 2175.595991997 | 555.520860397 |
|
||||||
|
Query OK, 8 rows in database (0.014753s)
|
||||||
|
```
|
|
@ -1,259 +0,0 @@
|
||||||
---
|
|
||||||
sidebar_label: Data Subscription
|
|
||||||
description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
|
|
||||||
title: Data Subscription
|
|
||||||
---
|
|
||||||
|
|
||||||
import Tabs from "@theme/Tabs";
|
|
||||||
import TabItem from "@theme/TabItem";
|
|
||||||
import Java from "./_sub_java.mdx";
|
|
||||||
import Python from "./_sub_python.mdx";
|
|
||||||
import Go from "./_sub_go.mdx";
|
|
||||||
import Rust from "./_sub_rust.mdx";
|
|
||||||
import Node from "./_sub_node.mdx";
|
|
||||||
import CSharp from "./_sub_cs.mdx";
|
|
||||||
import CDemo from "./_sub_c.mdx";
|
|
||||||
|
|
||||||
## Introduction
|
|
||||||
|
|
||||||
Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.
|
|
||||||
|
|
||||||
A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
|
|
||||||
|
|
||||||
There are 3 major APIs related to subscription provided in the TDengine client driver.
|
|
||||||
|
|
||||||
```c
|
|
||||||
taos_subscribe
|
|
||||||
taos_consume
|
|
||||||
taos_unsubscribe
|
|
||||||
```
|
|
||||||
|
|
||||||
For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and subtables from the previous section [Continuous Query](/develop/continuous-query) are used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
|
|
||||||
|
|
||||||
If we want to get a notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
|
|
||||||
|
|
||||||
The first way is to query each sub table and record the last timestamp matching the criteria. Then after some time, query the data later than the recorded timestamp, and repeat this process. The SQL statements for this way are as below.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
select * from D1001 where ts > {last_timestamp1} and current > 10;
|
|
||||||
select * from D1002 where ts > {last_timestamp2} and current > 10;
|
|
||||||
...
|
|
||||||
```
|
|
||||||
|
|
||||||
The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number.
|
|
||||||
|
|
||||||
A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
select * from meters where ts > {last_timestamp} and current > 10;
|
|
||||||
```
|
|
||||||
|
|
||||||
However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
|
|
||||||
|
|
||||||
All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.
|
|
||||||
|
|
||||||
The first step is to create subscription using `taos_subscribe`.
|
|
||||||
|
|
||||||
```c
|
|
||||||
TAOS_SUB* tsub = NULL;
|
|
||||||
if (async) {
|
|
||||||
// create an asynchronous subscription, the callback function will be called every 1s
|
|
||||||
tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
|
|
||||||
} else {
|
|
||||||
// create an synchronous subscription, need to call 'taos_consume' manually
|
|
||||||
tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input, then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, and async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and pass the data to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time consuming operations in the callback function.
|
|
||||||
|
|
||||||
The parameter `taos` is an established connection. Nothing special needs to be done for thread safety for synchronous subscription. For asynchronous subscription, the taos_subscribe function should be called exclusively by the current thread, to avoid unpredictable errors.
|
|
||||||
|
|
||||||
The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
select * from meters where current > 10;
|
|
||||||
```
|
|
||||||
|
|
||||||
Please note that, all the data will be processed because no start time is specified. If we only want to process data for the past day, a time related condition can be added:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
select * from meters where ts > now - 1d and current > 10;
|
|
||||||
```
|
|
||||||
|
|
||||||
The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.
|
|
||||||
|
|
||||||
If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken.
|
|
||||||
|
|
||||||
If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.
|
|
||||||
|
|
||||||
The last parameter of `taos_subscribe` is the polling interval in units of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
|
|
||||||
|
|
||||||
The second to last parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode.
|
|
||||||
|
|
||||||
After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else condition of `if (async)`.
|
|
||||||
|
|
||||||
```c
|
|
||||||
if (async) {
|
|
||||||
getchar();
|
|
||||||
} else while(1) {
|
|
||||||
TAOS_RES* res = taos_consume(tsub);
|
|
||||||
if (res == NULL) {
|
|
||||||
printf("failed to consume data.");
|
|
||||||
break;
|
|
||||||
} else {
|
|
||||||
print_result(res, blockFetch);
|
|
||||||
getchar();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
In the above sample code in the else condition, there is an infinite loop. Each time carriage return is entered `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.
|
|
||||||
|
|
||||||
```c
|
|
||||||
void print_result(TAOS_RES* res, int blockFetch) {
|
|
||||||
TAOS_ROW row = NULL;
|
|
||||||
int num_fields = taos_num_fields(res);
|
|
||||||
TAOS_FIELD* fields = taos_fetch_fields(res);
|
|
||||||
int nRows = 0;
|
|
||||||
if (blockFetch) {
|
|
||||||
nRows = taos_fetch_block(res, &row);
|
|
||||||
for (int i = 0; i < nRows; i++) {
|
|
||||||
char temp[256];
|
|
||||||
taos_print_row(temp, row + i, fields, num_fields);
|
|
||||||
puts(temp);
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
while ((row = taos_fetch_row(res))) {
|
|
||||||
char temp[256];
|
|
||||||
taos_print_row(temp, row, fields, num_fields);
|
|
||||||
puts(temp);
|
|
||||||
nRows++;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
printf("%d rows consumed.\n", nRows);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
In the above code `taos_print_row` is used to process the data consumed. All matching rows are printed.
|
|
||||||
|
|
||||||
In async mode, consuming data is simpler as shown below.
|
|
||||||
|
|
||||||
```c
|
|
||||||
void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
|
|
||||||
print_result(res, *(int*)param);
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
`taos_unsubscribe` can be invoked to terminate a subscription.
|
|
||||||
|
|
||||||
```c
|
|
||||||
taos_unsubscribe(tsub, keep);
|
|
||||||
```
|
|
||||||
|
|
||||||
The second parameter `keep` is used to specify whether to keep the subscription progress on the client sde. If it is **false**, i.e. **0**, then subscription will be restarted from beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with the same name as `topic` for each subscription(Note: The default value of `DataDir` in the `taos.cfg` file is **/var/lib/taos/**. However, **/var/lib/taos/** does not exist on the Windows server. So you need to change the `DataDir` value to the corresponding existing directory."), the subscription will be restarted from the beginning if the corresponding progress file is removed.
|
|
||||||
|
|
||||||
Now let's see the effect of the above sample code, assuming the following prerequisites have been met.
|
|
||||||
|
|
||||||
- The sample code has been downloaded to the local system
|
|
||||||
- TDengine has been installed and launched properly on the same system
|
|
||||||
- The database, STable, and subtables required in the sample code are ready
|
|
||||||
|
|
||||||
Launch the command below in the directory where the sample code resides to compile and start the program.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
make
|
|
||||||
./subscribe -sql='select * from meters where current > 10;'
|
|
||||||
```
|
|
||||||
|
|
||||||
After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
use test;
|
|
||||||
insert into D1001 values(now, 12, 220, 1);
|
|
||||||
```
|
|
||||||
|
|
||||||
Then, this row of data will be shown by the example program on the first terminal because its current exceeds 10A. More data can be inserted for you to observe the output of the example program.
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.
|
|
||||||
|
|
||||||
### Prepare Data
|
|
||||||
|
|
||||||
```bash
|
|
||||||
# create database "power"
|
|
||||||
taos> create database power;
|
|
||||||
# use "power" as the database in following operations
|
|
||||||
taos> use power;
|
|
||||||
# create super table "meters"
|
|
||||||
taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int);
|
|
||||||
# create tables using the schema defined by super table "meters"
|
|
||||||
taos> create table d1001 using meters tags ("California.SanFrancisco", 2);
|
|
||||||
taos> create table d1002 using meters tags ("California.LosAngeles", 2);
|
|
||||||
# insert some rows
|
|
||||||
taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1);
|
|
||||||
taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1);
|
|
||||||
# filter out the rows in which current is bigger than 10A
|
|
||||||
taos> select * from meters where current > 10;
|
|
||||||
ts | current | voltage | phase | location | groupid |
|
|
||||||
===========================================================================================================
|
|
||||||
 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LosAngeles | 2 |
|
|
||||||
 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LosAngeles | 2 |
|
|
||||||
2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 |
|
|
||||||
2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 |
|
|
||||||
2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 |
|
|
||||||
Query OK, 5 row(s) in set (0.004896s)
|
|
||||||
```
|
|
||||||
|
|
||||||
### Example Programs
|
|
||||||
|
|
||||||
<Tabs defaultValue="java" groupId="lang">
|
|
||||||
<TabItem label="Java" value="java">
|
|
||||||
<Java />
|
|
||||||
</TabItem>
|
|
||||||
<TabItem label="Python" value="Python">
|
|
||||||
<Python />
|
|
||||||
</TabItem>
|
|
||||||
{/* <TabItem label="Go" value="go">
|
|
||||||
<Go/>
|
|
||||||
</TabItem> */}
|
|
||||||
<TabItem label="Rust" value="rust">
|
|
||||||
<Rust />
|
|
||||||
</TabItem>
|
|
||||||
{/* <TabItem label="Node.js" value="nodejs">
|
|
||||||
<Node/>
|
|
||||||
</TabItem>
|
|
||||||
<TabItem label="C#" value="csharp">
|
|
||||||
<CSharp/>
|
|
||||||
</TabItem> */}
|
|
||||||
<TabItem label="C" value="c">
|
|
||||||
<CDemo />
|
|
||||||
</TabItem>
|
|
||||||
</Tabs>
|
|
||||||
|
|
||||||
### Run the Examples
|
|
||||||
|
|
||||||
The example programs first consume all historical data matching the criteria.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
|
|
||||||
ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2
|
|
||||||
ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
|
|
||||||
ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
|
|
||||||
ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
|
|
||||||
```
|
|
||||||
|
|
||||||
Next, use TDengine CLI to insert a new row.
|
|
||||||
|
|
||||||
```
|
|
||||||
# taos
|
|
||||||
taos> use power;
|
|
||||||
taos> insert into d1001 values(now, 12.4, 220, 1);
|
|
||||||
```
|
|
||||||
|
|
||||||
Because the current in the inserted row exceeds 10A, it will be consumed by the example program.
|
|
||||||
|
|
||||||
```
|
|
||||||
ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
|
|
||||||
```
|
|
|
@ -0,0 +1,841 @@
|
||||||
|
---
|
||||||
|
sidebar_label: Data Subscription
|
||||||
|
description: "The TDengine data subscription service automatically pushes data written in TDengine to subscribing clients."
|
||||||
|
title: Data Subscription
|
||||||
|
---
|
||||||
|
|
||||||
|
import Tabs from "@theme/Tabs";
|
||||||
|
import TabItem from "@theme/TabItem";
|
||||||
|
import Java from "./_sub_java.mdx";
|
||||||
|
import Python from "./_sub_python.mdx";
|
||||||
|
import Go from "./_sub_go.mdx";
|
||||||
|
import Rust from "./_sub_rust.mdx";
|
||||||
|
import Node from "./_sub_node.mdx";
|
||||||
|
import CSharp from "./_sub_cs.mdx";
|
||||||
|
import CDemo from "./_sub_c.mdx";
|
||||||
|
|
||||||
|
TDengine provides data subscription and consumption interfaces similar to those of message queue products. These interfaces make it easier for applications to obtain data written to TDengine in real time and to process it in the order in which events occurred. This simplifies your time-series data processing systems and reduces your costs because it is no longer necessary to deploy a message queue product such as Kafka.
|
||||||
|
|
||||||
|
To use TDengine data subscription, you define topics like in Kafka. However, a topic in TDengine is based on query conditions for an existing supertable, table, or subtable - in other words, a SELECT statement. You can use SQL to filter data by tag, table name, column, or expression and then perform a scalar function or user-defined function on the data. Aggregate functions are not supported. This gives TDengine data subscription more flexibility than similar products. The granularity of data can be controlled on demand by applications, while filtering and preprocessing are handled by TDengine instead of the application layer. This implementation reduces the amount of data transmitted and the complexity of applications.
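For example, a topic could be defined to push only preprocessed, filtered rows. The sketch below (the topic, table, and column names are placeholders, not taken from the preceding text) creates such a topic through the C client with a tag filter and the scalar function `abs`:

```c
// Sketch: create a topic that filters by tag and applies a scalar function.
// `taos` is an existing connection; all names here are illustrative only.
TAOS_RES* res = taos_query(taos,
    "CREATE TOPIC high_current AS "
    "SELECT ts, abs(current) FROM power.meters "
    "WHERE groupid = 2 AND current > 10");
if (taos_errno(res) != 0) {
  printf("failed to create topic: %s\n", taos_errstr(res));
}
taos_free_result(res);
```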
|
||||||
|
|
||||||
|
By subscribing to a topic, a consumer can obtain the latest data in that topic in real time. Multiple consumers can be formed into a consumer group that consumes messages together. Consumer groups enable faster speed through multi-threaded, distributed data consumption. Note that consumers in different groups that are subscribed to the same topic do not consume messages together. A single consumer can subscribe to multiple topics. If the data in a supertable is sharded across multiple vnodes, consumer groups can consume it much more efficiently than single consumers. TDengine also includes an acknowledgement mechanism that ensures at-least-once delivery in complicated environments where machines may crash or restart.
|
||||||
|
|
||||||
|
To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers.
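As a sketch of what such a configuration might look like, the statement below sets a bounded retention window for the WAL when creating a database. The option names `WAL_RETENTION_PERIOD` (in seconds) and `WAL_RETENTION_SIZE` are assumptions here; confirm them, and the units, against the CREATE DATABASE documentation before use.

```c
// Sketch: create a database whose WAL is retained long enough for subscribers.
// The option names and values are illustrative; check CREATE DATABASE docs.
TAOS_RES* res = taos_query(taos,
    "CREATE DATABASE tmqdb WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 100");
if (taos_errno(res) != 0) {
  printf("failed to create database: %s\n", taos_errstr(res));
}
taos_free_result(res);
```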
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
## Data Schema and API
|
||||||
|
|
||||||
|
The related schemas and APIs in various languages are described as follows:
|
||||||
|
|
||||||
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
<TabItem value="c" label="C">
|
||||||
|
|
||||||
|
```c
|
||||||
|
typedef struct tmq_t tmq_t;
|
||||||
|
typedef struct tmq_conf_t tmq_conf_t;
|
||||||
|
typedef struct tmq_list_t tmq_list_t;
|
||||||
|
|
||||||
|
typedef void(tmq_commit_cb(tmq_t *, int32_t code, void *param));
|
||||||
|
|
||||||
|
DLL_EXPORT tmq_list_t *tmq_list_new();
|
||||||
|
DLL_EXPORT int32_t tmq_list_append(tmq_list_t *, const char *);
|
||||||
|
DLL_EXPORT void tmq_list_destroy(tmq_list_t *);
|
||||||
|
DLL_EXPORT tmq_t *tmq_consumer_new(tmq_conf_t *conf, char *errstr, int32_t errstrLen);
|
||||||
|
DLL_EXPORT const char *tmq_err2str(int32_t code);
|
||||||
|
|
||||||
|
DLL_EXPORT int32_t tmq_subscribe(tmq_t *tmq, const tmq_list_t *topic_list);
|
||||||
|
DLL_EXPORT int32_t tmq_unsubscribe(tmq_t *tmq);
|
||||||
|
DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout);
|
||||||
|
DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq);
|
||||||
|
DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg);
|
||||||
|
DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param);
|
||||||
|
|
||||||
|
enum tmq_conf_res_t {
|
||||||
|
TMQ_CONF_UNKNOWN = -2,
|
||||||
|
TMQ_CONF_INVALID = -1,
|
||||||
|
TMQ_CONF_OK = 0,
|
||||||
|
};
|
||||||
|
typedef enum tmq_conf_res_t tmq_conf_res_t;
|
||||||
|
|
||||||
|
DLL_EXPORT tmq_conf_t *tmq_conf_new();
|
||||||
|
DLL_EXPORT tmq_conf_res_t tmq_conf_set(tmq_conf_t *conf, const char *key, const char *value);
|
||||||
|
DLL_EXPORT void tmq_conf_destroy(tmq_conf_t *conf);
|
||||||
|
DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_commit_cb *cb, void *param);
|
||||||
|
```
|
||||||
|
|
||||||
|
For more information, see [C/C++ Connector](/reference/connector/cpp).
|
||||||
|
|
||||||
|
The following example is based on the smart meter table described in Data Models. For complete sample code, see the C language section below.
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="java" label="Java">
|
||||||
|
|
||||||
|
```java
|
||||||
|
void subscribe(Collection<String> topics) throws SQLException;
|
||||||
|
|
||||||
|
void unsubscribe() throws SQLException;
|
||||||
|
|
||||||
|
Set<String> subscription() throws SQLException;
|
||||||
|
|
||||||
|
ConsumerRecords<V> poll(Duration timeout) throws SQLException;
|
||||||
|
|
||||||
|
void commitAsync();
|
||||||
|
|
||||||
|
void commitAsync(OffsetCommitCallback callback);
|
||||||
|
|
||||||
|
void commitSync() throws SQLException;
|
||||||
|
|
||||||
|
void close() throws SQLException;
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Python" label="Python">
|
||||||
|
|
||||||
|
```python
|
||||||
|
class TaosConsumer():
|
||||||
|
def __init__(self, *topics, **configs)
|
||||||
|
|
||||||
|
def __iter__(self)
|
||||||
|
|
||||||
|
def __next__(self)
|
||||||
|
|
||||||
|
def sync_next(self)
|
||||||
|
|
||||||
|
def subscription(self)
|
||||||
|
|
||||||
|
def unsubscribe(self)
|
||||||
|
|
||||||
|
def close(self)
|
||||||
|
|
||||||
|
def __del__(self)
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Go" value="Go">
|
||||||
|
|
||||||
|
```go
|
||||||
|
func NewConsumer(conf *Config) (*Consumer, error)
|
||||||
|
|
||||||
|
func (c *Consumer) Close() error
|
||||||
|
|
||||||
|
func (c *Consumer) Commit(ctx context.Context, message unsafe.Pointer) error
|
||||||
|
|
||||||
|
func (c *Consumer) FreeMessage(message unsafe.Pointer)
|
||||||
|
|
||||||
|
func (c *Consumer) Poll(timeout time.Duration) (*Result, error)
|
||||||
|
|
||||||
|
func (c *Consumer) Subscribe(topics []string) error
|
||||||
|
|
||||||
|
func (c *Consumer) Unsubscribe() error
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Rust" value="Rust">
|
||||||
|
|
||||||
|
```rust
|
||||||
|
impl TBuilder for TmqBuilder
|
||||||
|
fn from_dsn<D: IntoDsn>(dsn: D) -> Result<Self, Self::Error>
|
||||||
|
fn build(&self) -> Result<Self::Target, Self::Error>
|
||||||
|
|
||||||
|
impl AsAsyncConsumer for Consumer
|
||||||
|
async fn subscribe<T: Into<String>, I: IntoIterator<Item = T> + Send>(
|
||||||
|
&mut self,
|
||||||
|
topics: I,
|
||||||
|
) -> Result<(), Self::Error>;
|
||||||
|
fn stream(
|
||||||
|
&self,
|
||||||
|
) -> Pin<
|
||||||
|
Box<
|
||||||
|
dyn '_
|
||||||
|
+ Send
|
||||||
|
+ futures::Stream<
|
||||||
|
Item = Result<(Self::Offset, MessageSet<Self::Meta, Self::Data>), Self::Error>,
|
||||||
|
>,
|
||||||
|
>,
|
||||||
|
>;
|
||||||
|
async fn commit(&self, offset: Self::Offset) -> Result<(), Self::Error>;
|
||||||
|
|
||||||
|
async fn unsubscribe(self);
|
||||||
|
```
|
||||||
|
|
||||||
|
For more information, see [Crate taos](https://docs.rs/taos).
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Node.JS" value="Node.JS">
|
||||||
|
|
||||||
|
```js
|
||||||
|
function TMQConsumer(config)
|
||||||
|
|
||||||
|
function subscribe(topic)
|
||||||
|
|
||||||
|
function consume(timeout)
|
||||||
|
|
||||||
|
function subscription()
|
||||||
|
|
||||||
|
function unsubscribe()
|
||||||
|
|
||||||
|
function commit(msg)
|
||||||
|
|
||||||
|
function close()
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="C#" label="C#">
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
ConsumerBuilder(IEnumerable<KeyValuePair<string, string>> config)
|
||||||
|
|
||||||
|
virtual IConsumer Build()
|
||||||
|
|
||||||
|
Consumer(ConsumerBuilder builder)
|
||||||
|
|
||||||
|
void Subscribe(IEnumerable<string> topics)
|
||||||
|
|
||||||
|
void Subscribe(string topic)
|
||||||
|
|
||||||
|
ConsumeResult Consume(int millisecondsTimeout)
|
||||||
|
|
||||||
|
List<string> Subscription()
|
||||||
|
|
||||||
|
void Unsubscribe()
|
||||||
|
|
||||||
|
void Commit(ConsumeResult consumerResult)
|
||||||
|
|
||||||
|
void Close()
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
</Tabs>
|
||||||
|
|
||||||
|
## Insert Data into TDengine
|
||||||
|
|
||||||
|
A database including one supertable and two subtables is created as follows:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
DROP DATABASE IF EXISTS tmqdb;
|
||||||
|
CREATE DATABASE tmqdb;
|
||||||
|
CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16));
|
||||||
|
CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0");
|
||||||
|
CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1");
|
||||||
|
INSERT INTO tmqdb.ctb0 VALUES(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00');
|
||||||
|
INSERT INTO tmqdb.ctb1 VALUES(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11');
|
||||||
|
```
|
||||||
|
|
||||||
|
## Create a Topic
|
||||||
|
|
||||||
|
The following SQL statement creates a topic in TDengine:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
|
||||||
|
```
|
||||||
|
|
||||||
|
Multiple subscription types are supported.
|
||||||
|
|
||||||
|
### Subscribe to a Column
|
||||||
|
|
||||||
|
Syntax:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TOPIC topic_name as subquery
|
||||||
|
```
|
||||||
|
|
||||||
|
You can subscribe to a topic through a SELECT statement. Statements that specify columns, such as `SELECT *` and `SELECT ts, c1`, are supported, as are filtering conditions and scalar functions. Aggregate functions and time window aggregation are not supported. Note:
|
||||||
|
|
||||||
|
- The schema of topics created in this manner is determined by the subscribed data.
|
||||||
|
- You cannot modify (`ALTER <table> MODIFY`) or delete (`ALTER <table> DROP`) columns or tags that are used in a subscription or calculation.
|
||||||
|
- Columns added to a table after the subscription is created are not displayed in the results. Deleting columns will cause an error.
|
||||||
|
|
||||||
|
### Subscribe to a Supertable
|
||||||
|
|
||||||
|
Syntax:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TOPIC topic_name AS STABLE stb_name
|
||||||
|
```
|
||||||
|
|
||||||
|
Creating a topic in this manner differs from a `SELECT * from stbName` statement as follows:
|
||||||
|
|
||||||
|
- The table schema can be modified.
|
||||||
|
- Unstructured data is returned. The format of the data returned changes based on the supertable schema.
|
||||||
|
- A different table schema may exist for every data block to be processed.
|
||||||
|
- The data returned does not include tags.
|
||||||
|
|
||||||
|
### Subscribe to a Database
|
||||||
|
|
||||||
|
Syntax:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
|
||||||
|
```
|
||||||
|
|
||||||
|
This SQL statement creates a subscription to all tables in the database. You can add the `WITH META` parameter to include schema changes in the subscription, including creating and deleting supertables; adding, deleting, and modifying columns; and creating, deleting, and modifying the tags of subtables. Consumers can determine the message type from the API. Note that this differs from Kafka.
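In the C API, for instance, the consumer can check each polled message's type before processing it. The sketch below assumes the `tmq_get_res_type` helper and its `TMQ_RES_DATA` / `TMQ_RES_TABLE_META` result values; verify these names against the C connector reference.

```c
// Sketch: distinguish data messages from metadata messages on a WITH META topic.
TAOS_RES* msg = tmq_consumer_poll(tmq, 1000);
if (msg != NULL) {
  tmq_res_t type = tmq_get_res_type(msg);   // assumed helper from the C API
  if (type == TMQ_RES_DATA) {
    // ordinary rows: process like a query result set
  } else if (type == TMQ_RES_TABLE_META) {
    // schema change: a table or column was created, altered, or dropped
  }
  taos_free_result(msg);
}
```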
|
||||||
|
|
||||||
|
## Create a Consumer
|
||||||
|
|
||||||
|
You configure the following parameters when creating a consumer:
|
||||||
|
|
||||||
|
| Parameter | Type | Description | Remarks |
|
||||||
|
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
|
||||||
|
| `td.connect.ip` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `td.connect.user` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `td.connect.pass` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `td.connect.port` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
|
||||||
|
| `client.id` | string | Client ID | Maximum length: 192. |
|
||||||
|
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
|
||||||
|
| `enable.auto.commit` | boolean | Commit automatically | Specify `true` or `false`. |
|
||||||
|
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds |
|
||||||
|
| `enable.heartbeat.background` | boolean | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | |
|
||||||
|
| `experimental.snapshot.enable` | boolean | Specify whether to consume messages from the WAL or from TSDB | |
|
||||||
|
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages |
|
||||||
|
|
||||||
|
The method of specifying these parameters depends on the language used:
|
||||||
|
|
||||||
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
<TabItem value="c" label="C">
|
||||||
|
|
||||||
|
```c
|
||||||
|
/* Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
|
||||||
|
an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass) */
|
||||||
|
tmq_conf_t* conf = tmq_conf_new();
|
||||||
|
tmq_conf_set(conf, "enable.auto.commit", "true");
|
||||||
|
tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
|
||||||
|
tmq_conf_set(conf, "group.id", "cgrpName");
|
||||||
|
tmq_conf_set(conf, "td.connect.user", "root");
|
||||||
|
tmq_conf_set(conf, "td.connect.pass", "taosdata");
|
||||||
|
tmq_conf_set(conf, "auto.offset.reset", "earliest");
|
||||||
|
tmq_conf_set(conf, "experimental.snapshot.enable", "true");
|
||||||
|
tmq_conf_set(conf, "msg.with.table.name", "true");
|
||||||
|
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
|
||||||
|
|
||||||
|
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
|
||||||
|
tmq_conf_destroy(conf);
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="java" label="Java">
|
||||||
|
|
||||||
|
Java programs use the following parameters:
|
||||||
|
|
||||||
|
| Parameter | Type | Description | Remarks |
|
||||||
|
| ----------------------------- | ------ | --------------------------------------------------------------------------------------------------------------------------- | ------- |
|
||||||
|
| `bootstrap.servers` | string | Connection address, such as `localhost:6030` |
|
||||||
|
| `value.deserializer` | string | Value deserializer; to use this method, implement the `com.taosdata.jdbc.tmq.Deserializer` interface or inherit the `com.taosdata.jdbc.tmq.ReferenceDeserializer` type |
|
||||||
|
| `value.deserializer.encoding` | string | Specify the encoding for string deserialization | |
|
||||||
|
|
||||||
|
Note: The `bootstrap.servers` parameter is used instead of `td.connect.ip` and `td.connect.port` to provide an interface that is consistent with Kafka.
|
||||||
|
|
||||||
|
```java
|
||||||
|
Properties properties = new Properties();
|
||||||
|
properties.setProperty("enable.auto.commit", "true");
|
||||||
|
properties.setProperty("auto.commit.interval.ms", "1000");
|
||||||
|
properties.setProperty("group.id", "cgrpName");
|
||||||
|
properties.setProperty("bootstrap.servers", "127.0.0.1:6030");
|
||||||
|
properties.setProperty("td.connect.user", "root");
|
||||||
|
properties.setProperty("td.connect.pass", "taosdata");
|
||||||
|
properties.setProperty("auto.offset.reset", "earliest");
|
||||||
|
properties.setProperty("msg.with.table.name", "true");
|
||||||
|
properties.setProperty("value.deserializer", "com.taos.example.MetersDeserializer");
|
||||||
|
|
||||||
|
TaosConsumer<Meters> consumer = new TaosConsumer<>(properties);
|
||||||
|
|
||||||
|
/* value deserializer definition. */
|
||||||
|
import com.taosdata.jdbc.tmq.ReferenceDeserializer;
|
||||||
|
|
||||||
|
public class MetersDeserializer extends ReferenceDeserializer<Meters> {
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Go" value="Go">
|
||||||
|
|
||||||
|
```go
|
||||||
|
config := tmq.NewConfig()
|
||||||
|
defer config.Destroy()
|
||||||
|
err = config.SetGroupID("test")
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.SetAutoOffsetReset("earliest")
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.SetConnectIP("127.0.0.1")
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.SetConnectUser("root")
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.SetConnectPass("taosdata")
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.SetConnectPort("6030")
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.SetMsgWithTableName(true)
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.EnableHeartBeat()
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = config.EnableAutoCommit(func(result *wrapper.TMQCommitCallbackResult) {
|
||||||
|
if result.ErrCode != 0 {
|
||||||
|
errStr := wrapper.TMQErr2Str(result.ErrCode)
|
||||||
|
err := errors.NewError(int(result.ErrCode), errStr)
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Rust" value="Rust">
|
||||||
|
|
||||||
|
```rust
|
||||||
|
let mut dsn: Dsn = "taos://".parse()?;
|
||||||
|
dsn.set("group.id", "group1");
|
||||||
|
dsn.set("client.id", "test");
|
||||||
|
dsn.set("auto.offset.reset", "earliest");
|
||||||
|
|
||||||
|
let tmq = TmqBuilder::from_dsn(dsn)?;
|
||||||
|
|
||||||
|
let mut consumer = tmq.build()?;
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Python" label="Python">
|
||||||
|
|
||||||
|
Python programs use the following parameters:
|
||||||
|
|
||||||
|
| Parameter | Type | Description | Remarks |
|
||||||
|
| :----------------------------: | :----: | -------------------------------------------------------- | ------------------------------------------- |
|
||||||
|
| `td_connect_ip` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `td_connect_user` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `td_connect_pass` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `td_connect_port` | string | Used in establishing a connection; same as `taos_connect` | |
|
||||||
|
| `group_id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. |
|
||||||
|
| `client_id` | string | Client ID | Maximum length: 192. |
|
||||||
|
| `auto_offset_reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
|
||||||
|
| `enable_auto_commit` | string | Commit automatically | Specify `true` or `false`. |
|
||||||
|
| `auto_commit_interval_ms` | string | Interval for automatic commits, in milliseconds |
|
||||||
|
| `enable_heartbeat_background` | string | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false`. |
|
||||||
|
| `experimental_snapshot_enable` | string | Specify whether to consume messages from the WAL or from TSDB | Specify `true` or `false`. |
|
||||||
|
| `msg_with_table_name` | string | Specify whether to deserialize table names from messages | Specify `true` or `false`.
|
||||||
|
| `timeout` | int | Consumer pull timeout | |
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Node.JS" value="Node.JS">
|
||||||
|
|
||||||
|
```js
|
||||||
|
// Create consumer groups on demand (group.id) and enable automatic commits (enable.auto.commit),
|
||||||
|
// an automatic commit interval (auto.commit.interval.ms), and a username (td.connect.user) and password (td.connect.pass)
|
||||||
|
|
||||||
|
let consumer = taos.consumer({
|
||||||
|
'enable.auto.commit': 'true',
|
||||||
|
'auto.commit.interval.ms': '1000',
|
||||||
|
'group.id': 'tg2',
|
||||||
|
'td.connect.user': 'root',
|
||||||
|
'td.connect.pass': 'taosdata',
|
||||||
|
'auto.offset.reset': 'earliest',
|
||||||
|
'msg.with.table.name': 'true',
|
||||||
|
'td.connect.ip': '127.0.0.1',
|
||||||
|
'td.connect.port': '6030'
|
||||||
|
});
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="C#" label="C#">
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
using TDengineTMQ;
|
||||||
|
|
||||||
|
// Create consumer groups on demand (GourpID) and enable automatic commits (EnableAutoCommit),
|
||||||
|
// an automatic commit interval (AutoCommitIntervalMs), and a username (TDConnectUser) and password (TDConnectPasswd)
|
||||||
|
var cfg = new ConsumerConfig
|
||||||
|
{
|
||||||
|
EnableAutoCommit = "true",
|
||||||
|
AutoCommitIntervalMs = "1000",
|
||||||
|
GourpId = "TDengine-TMQ-C#",
|
||||||
|
TDConnectUser = "root",
|
||||||
|
TDConnectPasswd = "taosdata",
|
||||||
|
AutoOffsetReset = "earliest",
|
||||||
|
MsgWithTableName = "true",
|
||||||
|
TDConnectIp = "127.0.0.1",
|
||||||
|
TDConnectPort = "6030"
|
||||||
|
};
|
||||||
|
|
||||||
|
var consumer = new ConsumerBuilder(cfg).Build();
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
</Tabs>
|
||||||
|
|
||||||
|
A consumer group is automatically created when multiple consumers are configured with the same consumer group ID.
|
||||||
|
|
||||||
|
## Subscribe to a Topic
|
||||||
|
|
||||||
|
A single consumer can subscribe to multiple topics.
|
||||||
|
|
||||||
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
<TabItem value="c" label="C">
|
||||||
|
|
||||||
|
```c
|
||||||
|
// Create a list of subscribed topics
|
||||||
|
tmq_list_t* topicList = tmq_list_new();
|
||||||
|
tmq_list_append(topicList, "topicName");
|
||||||
|
// Enable subscription
|
||||||
|
tmq_subscribe(tmq, topicList);
|
||||||
|
tmq_list_destroy(topicList);
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="java" label="Java">
|
||||||
|
|
||||||
|
```java
|
||||||
|
List<String> topics = new ArrayList<>();
|
||||||
|
topics.add("tmq_topic");
|
||||||
|
consumer.subscribe(topics);
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="Go" label="Go">
|
||||||
|
|
||||||
|
```go
|
||||||
|
consumer, err := tmq.NewConsumer(config)
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
err = consumer.Subscribe([]string{"example_tmq_topic"})
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="Rust" label="Rust">
|
||||||
|
|
||||||
|
```rust
|
||||||
|
consumer.subscribe(["tmq_meters"]).await?;
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Python" label="Python">
|
||||||
|
|
||||||
|
```python
|
||||||
|
consumer = TaosConsumer('topic_ctb_column', group_id='vg2')
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Node.JS" value="Node.JS">
|
||||||
|
|
||||||
|
```js
|
||||||
|
// Create a list of subscribed topics
|
||||||
|
let topics = ['topic_test']
|
||||||
|
|
||||||
|
// Enable subscription
|
||||||
|
consumer.subscribe(topics);
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="C#" label="C#">
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
// Create a list of subscribed topics
|
||||||
|
List<String> topics = new List<string>();
|
||||||
|
topics.Add("tmq_topic");
|
||||||
|
// Enable subscription
|
||||||
|
consumer.Subscribe(topics);
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
</Tabs>
|
||||||
|
|
||||||
|
## Consume messages
|
||||||
|
|
||||||
|
The following code demonstrates how to consume the messages in a queue.
|
||||||
|
|
||||||
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
<TabItem value="c" label="C">
|
||||||
|
|
||||||
|
```c
|
||||||
|
// Consume data
|
||||||
|
while (running) {
|
||||||
|
TAOS_RES* msg = tmq_consumer_poll(tmq, timeOut);
|
||||||
|
msg_process(msg);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
The `while` loop obtains a message each time it calls `tmq_consumer_poll()`. This message is exactly the same as the result returned by a query, and the same deserialization API can be used on it.
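Under that assumption, a sketch of the `msg_process` helper used above could walk the message with the ordinary query-result APIs and free it afterwards. The function name and buffer size are placeholders.

```c
// Sketch: treat a polled TMQ message like a regular query result set.
static void msg_process(TAOS_RES* msg) {
  if (msg == NULL) return;                    // poll timed out; nothing to do

  int         num_fields = taos_num_fields(msg);
  TAOS_FIELD* fields     = taos_fetch_fields(msg);

  TAOS_ROW row;
  while ((row = taos_fetch_row(msg)) != NULL) {
    char buf[1024];
    taos_print_row(buf, row, fields, num_fields);
    puts(buf);
  }
  taos_free_result(msg);
}
```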
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="java" label="Java">
|
||||||
|
|
||||||
|
```java
|
||||||
|
while(running){
|
||||||
|
ConsumerRecords<Meters> meters = consumer.poll(Duration.ofMillis(100));
|
||||||
|
for (Meters meter : meters) {
|
||||||
|
processMsg(meter);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Go" label="Go">
|
||||||
|
|
||||||
|
```go
|
||||||
|
for {
|
||||||
|
result, err := consumer.Poll(time.Second)
|
||||||
|
if err != nil {
|
||||||
|
panic(err)
|
||||||
|
}
|
||||||
|
fmt.Println(result)
|
||||||
|
consumer.Commit(context.Background(), result.Message)
|
||||||
|
consumer.FreeMessage(result.Message)
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Rust" label="Rust">
|
||||||
|
|
||||||
|
```rust
|
||||||
|
{
|
||||||
|
let mut stream = consumer.stream();
|
||||||
|
|
||||||
|
while let Some((offset, message)) = stream.try_next().await? {
|
||||||
|
// get information from offset
|
||||||
|
|
||||||
|
// the topic
|
||||||
|
let topic = offset.topic();
|
||||||
|
// the vgroup id, like partition id in kafka.
|
||||||
|
let vgroup_id = offset.vgroup_id();
|
||||||
|
println!("* in vgroup id {vgroup_id} of topic {topic}\n");
|
||||||
|
|
||||||
|
if let Some(data) = message.into_data() {
|
||||||
|
while let Some(block) = data.fetch_raw_block().await? {
|
||||||
|
// one block for one table, get table name if needed
|
||||||
|
let name = block.table_name();
|
||||||
|
let records: Vec<Record> = block.deserialize().try_collect()?;
|
||||||
|
println!(
|
||||||
|
"** table: {}, got {} records: {:#?}\n",
|
||||||
|
name.unwrap(),
|
||||||
|
records.len(),
|
||||||
|
records
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
consumer.commit(offset).await?;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="Python" label="Python">
|
||||||
|
|
||||||
|
```python
|
||||||
|
for msg in consumer:
|
||||||
|
for row in msg:
|
||||||
|
print(row)
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Node.JS" value="Node.JS">
|
||||||
|
|
||||||
|
```js
|
||||||
|
while(true){
|
||||||
|
msg = consumer.consume(200);
|
||||||
|
// process message(consumeResult)
|
||||||
|
console.log(msg.topicPartition);
|
||||||
|
console.log(msg.block);
|
||||||
|
console.log(msg.fields)
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="C#" label="C#">
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
// Consume data
|
||||||
|
while (true)
|
||||||
|
{
|
||||||
|
var consumerRes = consumer.Consume(100);
|
||||||
|
// process ConsumeResult
|
||||||
|
ProcessMsg(consumerRes);
|
||||||
|
consumer.Commit(consumerRes);
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
</Tabs>
|
||||||
|
|
||||||
|
## Close the consumer
|
||||||
|
|
||||||
|
After message consumption is finished, unsubscribe the consumer and close it to release its resources.
|
||||||
|
|
||||||
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
<TabItem value="c" label="C">
|
||||||
|
|
||||||
|
```c
|
||||||
|
/* Unsubscribe */
|
||||||
|
tmq_unsubscribe(tmq);
|
||||||
|
|
||||||
|
/* Close consumer object */
|
||||||
|
tmq_consumer_close(tmq);
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem value="java" label="Java">
|
||||||
|
|
||||||
|
```java
|
||||||
|
/* Unsubscribe */
|
||||||
|
consumer.unsubscribe();
|
||||||
|
|
||||||
|
/* Close consumer */
|
||||||
|
consumer.close();
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Go" label="Go">
|
||||||
|
|
||||||
|
```go
|
||||||
|
consumer.Close()
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Rust" label="Rust">
|
||||||
|
|
||||||
|
```rust
|
||||||
|
consumer.unsubscribe().await;
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="Python" label="Python">
|
||||||
|
|
||||||
|
```py
|
||||||
|
# Unsubscribe
|
||||||
|
consumer.unsubscribe()
|
||||||
|
# Close consumer
|
||||||
|
consumer.close()
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem label="Node.JS" value="Node.JS">
|
||||||
|
|
||||||
|
```js
|
||||||
|
consumer.unsubscribe();
|
||||||
|
consumer.close();
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem value="C#" label="C#">
|
||||||
|
|
||||||
|
```csharp
|
||||||
|
// Unsubscribe
|
||||||
|
consumer.Unsubscribe();
|
||||||
|
|
||||||
|
// Close consumer
|
||||||
|
consumer.Close();
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
</Tabs>
|
||||||
|
|
||||||
|
## Delete a Topic
|
||||||
|
|
||||||
|
You can delete topics that are no longer useful. Note that you must unsubscribe all consumers from a topic before deleting it.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
/* Delete topic */
|
||||||
|
DROP TOPIC topic_name;
|
||||||
|
```
|
||||||
|
|
||||||
|
## Check Status
|
||||||
|
|
||||||
|
1. Query all existing topics.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SHOW TOPICS;
|
||||||
|
```
|
||||||
|
|
||||||
|
2. Query the status and subscribed topics of all consumers.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SHOW CONSUMERS;
|
||||||
|
```
|
||||||
|
|
||||||
|
3. Query the relationships between consumers and vgroups.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SHOW SUBSCRIPTIONS;
|
||||||
|
```
|
||||||
|
|
||||||
|
## Examples
|
||||||
|
|
||||||
|
The following section shows sample code in various languages.
|
||||||
|
|
||||||
|
<Tabs defaultValue="java" groupId="lang">
|
||||||
|
|
||||||
|
<TabItem label="C" value="c">
|
||||||
|
<CDemo />
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Java" value="java">
|
||||||
|
<Java />
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Go" value="Go">
|
||||||
|
<Go/>
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Rust" value="Rust">
|
||||||
|
<Rust />
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Python" value="Python">
|
||||||
|
<Python />
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="Node.JS" value="Node.JS">
|
||||||
|
<Node/>
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
<TabItem label="C#" value="C#">
|
||||||
|
<CSharp/>
|
||||||
|
</TabItem>
|
||||||
|
|
||||||
|
</Tabs>
|
|
@ -1,52 +1,49 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Cache
|
sidebar_label: Caching
|
||||||
title: Cache
|
title: Caching
|
||||||
description: "Caching System inside TDengine"
|
description: "This document describes the caching component of TDengine."
|
||||||
---
|
---
|
||||||
|
|
||||||
To achieve the purpose of high performance data writing and querying, TDengine employs a lot of caching technologies in both server side and client side.
|
TDengine uses various kinds of caching techniques to efficiently write and query data. This document describes the caching component of TDengine.
|
||||||
|
|
||||||
## Write Cache
|
## Write Cache
|
||||||
|
|
||||||
The cache management policy in TDengine is First-In-First-Out (FIFO). FIFO is also known as insert driven cache management policy and it is different from read driven cache management, which is more commonly known as Least-Recently-Used (LRU). FIFO simply stores the latest data in cache and flushes the oldest data in cache to disk, when the cache usage reaches a threshold. In IoT use cases, it is the current state i.e. the latest or most recent data that is important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
|
TDengine uses an insert-driven cache management policy, known as first in, first out (FIFO). This policy differs from read-driven "least recently used (LRU)" cache management. A FIFO policy stores the latest data in cache and flushes the oldest data from cache to disk when the cache usage reaches a threshold. In IoT use cases, the most recent data or the current state is most important. The cache policy in TDengine, like much of the design and architecture of TDengine, is based on the nature of IoT data.
|
||||||
|
|
||||||
The memory space used by each vnode as write cache is determined when creating a database. Parameter `vgroups` and `buffer` can be used to specify the number of vnode and the size of write cache for each vnode when creating the database. Then, the total size of write cache for this database is `vgroups * buffer`.
|
When you create a database, you can configure the size of the write cache on each vnode. The **vgroups** parameter determines the number of vgroups that process data in the database, and the **buffer** parameter determines the size of the write cache for each vnode.
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
create database db0 vgroups 100 buffer 16MB
|
create database db0 vgroups 100 buffer 16MB
|
||||||
```
|
```
|
||||||
|
|
||||||
The above statement creates a database of 100 vnodes while each vnode has a write cache of 16MB.
|
In theory, larger cache sizes are always better. However, at a certain point, it becomes impossible to improve performance by increasing cache size. In most scenarios, you can retain the default cache settings.
|
||||||
|
|
||||||
Even though in theory it's always better to have a larger cache, the extra effect would be very minor once the size of cache grows beyond a threshold. So normally it's enough to use the default value of `buffer` parameter.
|
|
||||||
|
|
||||||
## Read Cache
|
## Read Cache
|
||||||
|
|
||||||
When creating a database, it's also possible to specify whether to cache the latest data of each sub table, using parameter `cachelast`. There are 3 cases:
|
When you create a database, you can configure whether the latest data from every subtable is cached. To do so, set the *cachemodel* parameter as follows:
|
||||||
- 0: No cache for latest data
|
- none: Caching is disabled.
|
||||||
- 1: The last row of each table is cached, `last_row` function can benefit significantly from it
|
- last_row: The latest row of data in each subtable is cached. This option significantly improves the performance of the `LAST_ROW` function
|
||||||
- 2: The latest non-NULL value of each column for each table is cached, `last` function can benefit very much when there is no `where`, `group by`, `order by` or `interval` clause
|
- last_value: The latest non-null value in each column of each subtable is cached. This option significantly improves the performance of the `LAST` function in normal situations, such as WHERE, ORDER BY, GROUP BY, and INTERVAL statements.
|
||||||
- 3: Bot hthe last row and the latest non-NULL value of each column for each table are cached, identical to the behavior of both 1 and 2 are set together
|
- both: Rows and columns are both cached. This option is equivalent to simultaneously enabling option last_row and last_value.
|
||||||
|
|
||||||
|
## Metadata Cache
|
||||||
|
|
||||||
## Meta Cache
|
To improve query and write performance, each vnode caches the metadata that it receives. When you create a database, you can configure the size of the metadata cache through the *pages* and *pagesize* parameters.
|
||||||
|
|
||||||
To process data writing and querying efficiently, each vnode caches the metadata that's already retrieved. Parameters `pages` and `pagesize` are used to specify the size of metadata cache for each vnode.
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
create database db0 pages 128 pagesize 16kb
|
create database db0 pages 128 pagesize 16kb
|
||||||
```
|
```
|
||||||
|
|
||||||
The above statement will create a database db0 each of whose vnode is allocated a meta cache of `128 * 16 KB = 2 MB` .
|
The preceding SQL statement creates 128 pages on each vnode in the `db0` database. Each page has a 16 KB metadata cache.
|
||||||
|
|
||||||
## File System Cache
|
## File System Cache
|
||||||
|
|
||||||
TDengine utilizes WAL to provide basic reliability. The essential of WAL is to append data in a disk file, so the file system cache also plays an important role in the writing performance. Parameter `wal` can be used to specify the policy of writing WAL, there are 2 cases:
|
TDengine implements data reliability by means of a write-ahead log (WAL). Writing data to the WAL is essentially writing data to the disk in an ordered, append-only manner. For this reason, the file system cache plays an important role in write performance. When you create a database, you can use the *wal* parameter to choose higher performance or higher reliability.
|
||||||
- 1: Write data to WAL without calling fsync, the data is actually written to the file system cache without flushing immediately, in this way you can get better write performance
|
- 1: This option writes to the WAL but does not enable fsync. New data written to the WAL is stored in the file system cache but not written to disk. This provides better performance.
|
||||||
- 2: Write data to WAL and invoke fsync, the data is immediately flushed to disk, in this way you can get higher reliability
|
- 2: This option writes to the WAL and enables fsync. New data written to the WAL is immediately written to disk. This provides better data reliability.
|
||||||
|
|
||||||
## Client Cache
|
## Client Cache
|
||||||
|
|
||||||
To improve the overall efficiency of processing data, besides the above caches, the core library `libtaos.so` (also referred to as `taosc`) which all client programs depend on also has its own cache. `taosc` caches the metadata of the databases, super tables, tables that the invoking client has accessed, plus other critical metadata such as the cluster topology.
|
In addition to the server-side caching discussed previously, the core client library `libtaos.so` also makes use of caching. TDengine Client caches the metadata of all databases, supertables, and subtables that it has accessed, as well as the cluster topology.
|
||||||
|
|
||||||
When multiple client programs are accessing a TDengine cluster, if one of the clients modifies some metadata, the cache may become invalid in other clients. If this case happens, the client programs need to "reset query cache" to invalidate the whole cache so that `taosc` is enforced to repull the metadata it needs to rebuild the cache.
|
If a client modifies certain metadata while multiple clients are simultaneously accessing a TDengine cluster, the metadata caches on each client may fail or become out of sync. If this occurs, run the `reset query cache` command on the affected clients to force them to obtain fresh metadata and reset their caches.
|
||||||
|
|
|
@ -1,240 +1,245 @@
|
||||||
---
|
---
|
||||||
sidebar_label: UDF
|
sidebar_label: UDF
|
||||||
title: User Defined Functions(UDF)
|
title: User-Defined Functions (UDF)
|
||||||
description: "Scalar functions and aggregate functions developed by users can be utilized by the query framework to expand query capability"
|
description: "You can define your own scalar and aggregate functions to expand the query capabilities of TDengine."
|
||||||
---
|
---
|
||||||
|
|
||||||
In some use cases, built-in functions are not adequate for the query capability required by application programs. With UDF, the functions developed by users can be utilized by the query framework to meet business and application requirements. UDF normally takes one column of data as input, but can also support the result of a sub-query as input.
|
The built-in functions of TDengine may not be sufficient for the use cases of every application. In this case, you can define custom functions for use in TDengine queries. These are known as user-defined functions (UDF). A user-defined function takes one column of data or the result of a subquery as its input.
|
||||||
|
|
||||||
From version 2.2.0.0, UDF written in C/C++ are supported by TDengine.
|
TDengine supports user-defined functions written in C or C++. This document describes the usage of user-defined functions.
|
||||||
|
|
||||||
|
User-defined functions can be scalar functions or aggregate functions. Scalar functions, such as `abs`, `sin`, and `concat`, output a value for every row of data. Aggregate functions, such as `avg` and `max` output one value for multiple rows of data.
|
||||||
|
|
||||||
|
When you create a user-defined function, you must implement standard interface functions:
|
||||||
|
- For scalar functions, implement the `scalarfn` interface function.
|
||||||
|
- For aggregate functions, implement the `aggfn_start`, `aggfn`, and `aggfn_finish` interface functions.
|
||||||
|
- To initialize your function, implement the `udf_init` function. To terminate your function, implement the `udf_destroy` function.
|
||||||
|
|
||||||
|
There are strict naming conventions for these interface functions. The names of the start, finish, init, and destroy interfaces must be <udf-name\>_start, <udf-name\>_finish, <udf-name\>_init, and <udf-name\>_destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.
|
||||||
|
|
||||||
|
## Implementing a Scalar Function
|
||||||
|
The implementation of a scalar function is described as follows:
|
||||||
|
```c
|
||||||
|
#include "taos.h"
|
||||||
|
#include "taoserror.h"
|
||||||
|
#include "taosudf.h"
|
||||||
|
|
||||||
|
// initialization function. if no initialization, we can skip definition of it. The initialization function shall be concatenation of the udf name and _init suffix
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t scalarfn_init() {
|
||||||
|
// initialization.
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
// scalar function main computation function
|
||||||
|
// @param inputDataBlock, input data block composed of multiple columns with each column defined by SUdfColumn
|
||||||
|
// @param resultColumn, output column
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn* resultColumn) {
|
||||||
|
// read data from inputDataBlock and process, then output to resultColumn.
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
// cleanup function. if no cleanup related processing, we can skip definition of it. The destroy function shall be concatenation of the udf name and _destroy suffix.
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t scalarfn_destroy() {
|
||||||
|
// clean up
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
Replace `scalarfn` with the name of your function.
|
||||||
|
|
||||||
|
## Implementing an Aggregate Function
|
||||||
|
|
||||||
|
The implementation of an aggregate function is described as follows:
|
||||||
|
```c
|
||||||
|
#include "taos.h"
|
||||||
|
#include "taoserror.h"
|
||||||
|
#include "taosudf.h"
|
||||||
|
|
||||||
|
// Initialization function. if no initialization, we can skip definition of it. The initialization function shall be concatenation of the udf name and _init suffix
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t aggfn_init() {
|
||||||
|
// initialization.
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
// aggregate start function. The intermediate value or the state(@interBuf) is initialized in this function. The function name shall be concatenation of udf name and _start suffix
|
||||||
|
// @param interBuf intermediate value to initialize
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t aggfn_start(SUdfInterBuf* interBuf) {
|
||||||
|
// initialize intermediate value in interBuf
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
// aggregate reduce function. This function aggregates the old state(@interBuf) and one data block(inputBlock) and outputs a new state(@newInterBuf).
|
||||||
|
// @param inputBlock input data block
|
||||||
|
// @param interBuf old state
|
||||||
|
// @param newInterBuf new state
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t aggfn(SUdfDataBlock* inputBlock, SUdfInterBuf *interBuf, SUdfInterBuf *newInterBuf) {
|
||||||
|
// read from inputBlock and interBuf and output to newInterBuf
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
// aggregate function finish function. This function transforms the intermediate value(@interBuf) into the final output(@result). The function name must be concatenation of aggfn and _finish suffix.
|
||||||
|
// @interBuf : intermediate value
|
||||||
|
// @result: final result
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t aggfn_finish(SUdfInterBuf* interBuf, SUdfInterBuf *result) {
|
||||||
|
// read data from inputDataBlock and process, then output to result
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
|
||||||
|
// cleanup function. if no cleanup related processing, we can skip definition of it. The destroy function shall be concatenation of the udf name and _destroy suffix.
|
||||||
|
// @return error number defined in taoserror.h
|
||||||
|
int32_t aggfn_destroy() {
|
||||||
|
// clean up
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
```
|
||||||
|
Replace `aggfn` with the name of your function.
|
||||||
|
|
||||||
|
## Interface Functions
|
||||||
|
|
||||||
|
There are strict naming conventions for interface functions. The names of the start, finish, init, and destroy interfaces must be <udf-name\>_start, <udf-name\>_finish, <udf-name\>_init, and <udf-name\>_destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.
|
||||||
|
|
||||||
|
Interface functions return a value that indicates whether the operation was successful. If an operation fails, the interface function returns an error code. Otherwise, it returns TSDB_CODE_SUCCESS. The error codes are defined in `taoserror.h` and in the common API error codes in `taos.h`. For example, TSDB_CODE_UDF_INVALID_INPUT indicates invalid input. TSDB_CODE_OUT_OF_MEMORY indicates insufficient memory.
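For illustration, the fragment below (a sketch, not part of the required template) shows a scalar function rejecting unexpected input with one of these error codes before doing any work:

```c
// Sketch: validate the input column and fail early with a UDF error code.
int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn* resultColumn) {
  if (inputDataBlock->numOfCols != 1 ||
      inputDataBlock->udfCols[0]->colMeta.type != TSDB_DATA_TYPE_INT) {
    return TSDB_CODE_UDF_INVALID_INPUT;   // defined in taoserror.h
  }
  // ... normal processing of inputDataBlock into resultColumn ...
  return TSDB_CODE_SUCCESS;
}
```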
|
||||||
|
|
||||||
|
For information about the parameters for interface functions, see Data Model
|
||||||
|
|
||||||
|
### Interfaces for Scalar Functions
|
||||||
|
|
||||||
|
`int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn *resultColumn)`
|
||||||
|
|
||||||
|
Replace `scalarfn` with the name of your function. This function performs scalar calculations on data blocks. You can configure a value through the parameters in the `resultColumn` structure.
|
||||||
|
|
||||||
|
The parameters in the function are defined as follows:
|
||||||
|
- inputDataBlock: The data block to input.
|
||||||
|
- resultColumn: The column to output.
|
||||||
|
|
||||||
|
### Interfaces for Aggregate Functions
|
||||||
|
|
||||||
|
`int32_t aggfn_start(SUdfInterBuf *interBuf)`
|
||||||
|
|
||||||
|
`int32_t aggfn(SUdfDataBlock* inputBlock, SUdfInterBuf *interBuf, SUdfInterBuf *newInterBuf)`
|
||||||
|
|
||||||
|
`int32_t aggfn_finish(SUdfInterBuf* interBuf, SUdfInterBuf *result)`
|
||||||
|
|
||||||
|
Replace `aggfn` with the name of your function. In the function, aggfn_start is called to generate a result buffer. Data is then divided between multiple blocks, and aggfn is called on each block to update the result. Finally, aggfn_finish is called to generate final results from the intermediate results. The final result contains only one or zero data points.
|
||||||
|
|
||||||
|
The parameters in the function are defined as follows:
|
||||||
|
- interBuf: The intermediate result buffer.
|
||||||
|
- inputBlock: The data block to input.
|
||||||
|
- newInterBuf: The new intermediate result buffer.
|
||||||
|
- result: The final result.
|
||||||
|
|
||||||
|
|
||||||
## Types of UDF
|
### Initializing and Terminating User-Defined Functions
|
||||||
|
`int32_t udf_init()`
|
||||||
|
|
||||||
Two kinds of functions can be implemented by UDF: scalar functions and aggregate functions.
|
`int32_t udf_destroy()`
|
||||||
|
|
||||||
Scalar functions return multiple rows and aggregate functions return either 0 or 1 row.
|
Replace `udf` with the name of your function. `udf_init` initializes the function and `udf_destroy` terminates it. If your function does not require initialization, you can omit `udf_init`; if it does not require cleanup, you can omit `udf_destroy`.
|
||||||
|
|
||||||
In the case of a scalar function you only have to implement the "normal" function template.
|
|
||||||
|
|
||||||
In the case of an aggregate function, in addition to the "normal" function, you also need to implement the "merge" and "finalize" function templates even if the implementation is empty. This will become clear in the sections below.
|
## Data Structure of User-Defined Functions
|
||||||
|
```c
|
||||||
|
typedef struct SUdfColumnMeta {
|
||||||
|
int16_t type;
|
||||||
|
int32_t bytes;
|
||||||
|
uint8_t precision;
|
||||||
|
uint8_t scale;
|
||||||
|
} SUdfColumnMeta;
|
||||||
|
|
||||||
### Scalar Function
|
typedef struct SUdfColumnData {
|
||||||
|
int32_t numOfRows;
|
||||||
|
int32_t rowsAlloc;
|
||||||
|
union {
|
||||||
|
struct {
|
||||||
|
int32_t nullBitmapLen;
|
||||||
|
char *nullBitmap;
|
||||||
|
int32_t dataLen;
|
||||||
|
char *data;
|
||||||
|
} fixLenCol;
|
||||||
|
|
||||||
As mentioned earlier, a scalar UDF only has to implement the "normal" function template. The function template below can be used to define your own scalar function.
|
struct {
|
||||||
|
int32_t varOffsetsLen;
|
||||||
|
int32_t *varOffsets;
|
||||||
|
int32_t payloadLen;
|
||||||
|
char *payload;
|
||||||
|
int32_t payloadAllocLen;
|
||||||
|
} varLenCol;
|
||||||
|
};
|
||||||
|
} SUdfColumnData;
|
||||||
|
|
||||||
`void udfNormalFunc(char* data, short itype, short ibytes, int numOfRows, long long* ts, char* dataOutput, char* interBuf, char* tsOutput, int* numOfOutput, short otype, short obytes, SUdfInit* buf)`
|
typedef struct SUdfColumn {
|
||||||
|
SUdfColumnMeta colMeta;
|
||||||
|
bool hasNull;
|
||||||
|
SUdfColumnData colData;
|
||||||
|
} SUdfColumn;
|
||||||
|
|
||||||
`udfNormalFunc` is the place holder for a function name. A function implemented based on the above template can be used to perform scalar computation on data rows. The parameters are fixed to control the data exchange between UDF and TDengine.
|
typedef struct SUdfDataBlock {
|
||||||
|
int32_t numOfRows;
|
||||||
|
int32_t numOfCols;
|
||||||
|
SUdfColumn **udfCols;
|
||||||
|
} SUdfDataBlock;
|
||||||
|
|
||||||
- Definitions of the parameters:
|
typedef struct SUdfInterBuf {
|
||||||
|
int32_t bufLen;
|
||||||
|
char* buf;
|
||||||
|
int8_t numOfResult; //zero or one
|
||||||
|
} SUdfInterBuf;
|
||||||
|
```
|
||||||
|
The data structure is described as follows:
|
||||||
|
|
||||||
- data:input data
|
- The SUdfDataBlock block includes the number of rows (numOfRows) and number of columns (numCols). udfCols[i] (0 <= i <= numCols-1) indicates that each column is of type SUdfColumn.
|
||||||
- itype:the type of input data, for details please refer to [type definition in column_meta](/reference/rest-api/), for example 4 represents INT
|
- SUdfColumn includes the definition of the data type of the column (colMeta) and the data in the column (colData).
|
||||||
- iBytes:the number of bytes consumed by each value in the input data
|
- The member definitions of SUdfColumnMeta are the same as the data type definitions in `taos.h`.
|
||||||
- oType:the type of output data, similar to iType
|
- The data in SUdfColumnData can become longer. varLenCol indicates variable-length data, and fixLenCol indicates fixed-length data.
|
||||||
- oBytes:the number of bytes consumed by each value in the output data
|
- SUdfInterBuf defines the intermediate structure `buffer` and the number of results in the buffer `numOfResult`.
|
||||||
- numOfRows:the number of rows in the input data
|
|
||||||
- ts: the column of timestamp corresponding to the input data
|
|
||||||
- dataOutput:the buffer for output data, total size is `oBytes * numberOfRows`
|
|
||||||
- interBuf:the buffer for an intermediate result. Its size is specified by the `BUFSIZE` parameter when creating a UDF. It's normally used when the intermediate result is not same as the final result. This buffer is allocated and freed by TDengine.
|
|
||||||
- tsOutput:the column of timestamps corresponding to the output data; it can be used to output timestamp together with the output data if it's not NULL
|
|
||||||
- numOfOutput:the number of rows in output data
|
|
||||||
- buf:for the state exchange between UDF and TDengine
|
|
||||||
|
|
||||||
[add_one.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c) is one example of a very simple UDF implementation, i.e. one instance of the above `udfNormalFunc` template. It adds one to each value of a passed in column, which can be filtered using the `where` clause, and outputs the result.
|
Additional functions are defined in `taosudf.h` to make it easier to work with these structures.
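As an illustration of the layout described above, the sketch below reads a fixed-length INT column directly through these structures without any helper functions. NULL handling via `fixLenCol.nullBitmap` is omitted for brevity, and the function name is hypothetical.

```c
#include "taosudf.h"

// Sum the first column of a data block, assuming it is an INT column with no NULLs.
// Fixed-length values are stored back to back in colData.fixLenCol.data.
static int64_t sumFirstIntColumn(SUdfDataBlock *block) {
  SUdfColumn *col = block->udfCols[0];
  if (col->colMeta.type != TSDB_DATA_TYPE_INT) {
    return 0;  // a real UDF would return TSDB_CODE_UDF_INVALID_INPUT here
  }
  int32_t *values = (int32_t *)col->colData.fixLenCol.data;
  int64_t sum = 0;
  for (int32_t i = 0; i < block->numOfRows; ++i) {
    sum += values[i];
  }
  return sum;
}
```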
|
||||||
|
|
||||||
### Aggregate Function
|
|
||||||
|
|
||||||
For aggregate UDF, as mentioned earlier you must implement a "normal" function template (described above) and also implement the "merge" and "finalize" templates.
|
|
||||||
|
|
||||||
#### Merge Function Template
|
|
||||||
|
|
||||||
The function template below can be used to define your own merge function for an aggregate UDF.
|
|
||||||
|
|
||||||
`void udfMergeFunc(char* data, int32_t numOfRows, char* dataOutput, int32_t* numOfOutput, SUdfInit* buf)`
|
|
||||||
|
|
||||||
`udfMergeFunc` is the place holder for a function name. The function implemented with the above template is used to aggregate intermediate results and can only be used in the aggregate query for STable.
|
|
||||||
|
|
||||||
Definitions of the parameters:
|
|
||||||
|
|
||||||
- data:array of output data, if interBuf is used it's an array of interBuf
|
|
||||||
- numOfRows:number of rows in `data`
|
|
||||||
- dataOutput:the buffer for output data, the size is same as that of the final result; If the result is not final, it can be put in the interBuf, i.e. `data`.
|
|
||||||
- numOfOutput:number of rows in the output data
|
|
||||||
- buf:for the state exchange between UDF and TDengine
|
|
||||||
|
|
||||||
#### Finalize Function Template
|
|
||||||
|
|
||||||
The function template below can be used to finalize the result of your own UDF, normally used when interBuf is used.
|
|
||||||
|
|
||||||
`void udfFinalizeFunc(char* dataOutput, char* interBuf, int* numOfOutput, SUdfInit* buf)`
|
|
||||||
|
|
||||||
`udfFinalizeFunc` is the place holder of function name, definitions of the parameter are as below:
|
|
||||||
|
|
||||||
- dataOutput:buffer for output data
|
|
||||||
- interBuf:buffer for intermediate result, can be used as input for next processing step
|
|
||||||
- numOfOutput:number of output data, can only be 0 or 1 for aggregate function
|
|
||||||
- buf:for state exchange between UDF and TDengine
|
|
||||||
|
|
||||||
### Example abs_max.c
|
|
||||||
|
|
||||||
[abs_max.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c) is an example of a user defined aggregate function to get the maximum from the absolute values of a column.
|
|
||||||
|
|
||||||
The internal processing happens as follows. The results of the select statement are divided into multiple row blocks and `udfNormalFunc`, i.e. `abs_max` in this case, is performed on each row block to generate the intermediate results for each sub table. Then `udfMergeFunc`, i.e. `abs_max_merge` in this case, is performed on the intermediate result of sub tables to aggregate and generate the final or intermediate result of STable. The intermediate result of STable is finally processed by `udfFinalizeFunc`, i.e. `abs_max_finalize` in this example, to generate the final result, which contains either 0 or 1 row.
|
|
||||||
|
|
||||||
Other typical aggregation functions such as covariance, can also be implemented using aggregate UDF.
|
|
||||||
|
|
||||||
## UDF Naming Conventions
|
|
||||||
|
|
||||||
The naming convention for the 3 kinds of function templates required by UDF is as follows:
|
|
||||||
- udfNormalFunc, udfMergeFunc, and udfFinalizeFunc are required to have same prefix, i.e. the actual name of udfNormalFunc. The udfNormalFunc doesn't need a suffix following the function name.
|
|
||||||
- udfMergeFunc should be udfNormalFunc followed by `_merge`
|
|
||||||
- udfFinalizeFunc should be udfNormalFunc followed by `_finalize`.
|
|
||||||
|
|
||||||
The naming convention is part of TDengine's UDF framework. TDengine follows this convention to invoke the corresponding actual functions.
|
|
||||||
|
|
||||||
Depending on whether you are creating a scalar UDF or aggregate UDF, the functions that you need to implement are different.
|
|
||||||
|
|
||||||
- Scalar function:udfNormalFunc is required.
|
|
||||||
- Aggregate function:udfNormalFunc, udfMergeFunc (if query on STable) and udfFinalizeFunc are required.
|
|
||||||
|
|
||||||
For clarity, assuming we want to implement a UDF named "foo":
|
|
||||||
- If the function is a scalar function, we only need to implement the "normal" function template and it should be named simply `foo`.
|
|
||||||
- If the function is an aggregate function, we need to implement `foo`, `foo_merge`, and `foo_finalize`. Note that for aggregate UDF, even though one of the three functions is not necessary, there must be an empty implementation.
|
|
||||||
|
|
||||||
## Compile UDF
|
## Compile UDF
|
||||||
|
|
||||||
The source code of UDF in C can't be utilized by TDengine directly. UDF can only be loaded into TDengine after compiling to dynamically linked library (DLL).
|
To use your user-defined function in TDengine, first compile it to a dynamically linked library (DLL).
|
||||||
|
|
||||||
For example, the example UDF `add_one.c` mentioned earlier, can be compiled into DLL using the command below, in a Linux Shell.
|
For example, the sample UDF `add_one.c` can be compiled into a DLL as follows:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
|
gcc -g -O0 -fPIC -shared add_one.c -o add_one.so
|
||||||
```
|
```
|
||||||
|
|
||||||
The generated DLL file `add_one.so` can be used later when creating a UDF. It's recommended to use GCC not older than 7.5.
|
The generated DLL file `add_one.so` can now be used to implement your function. Note: GCC 7.5 or later is required.
|
||||||
|
|
||||||
## Create and Use UDF
|
## Manage and Use User-Defined Functions
|
||||||
|
After compiling your function into a DLL, you add it to TDengine. For more information, see [User-Defined Functions](../12-taos-sql/26-udf.md).
|
||||||
|
|
||||||
When a UDF is created in a TDengine instance, it is available across the databases in that instance.
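For example, a function registered once can be invoked from any database in the instance (the library path, database, table, and column names below are illustrative):

```sql
-- Register the function once.
CREATE FUNCTION plus_one AS "/home/taos/udf_example/plus_one.so" OUTPUTTYPE INT;

-- Use it from different databases without re-creating it.
USE db1;
SELECT plus_one(c1) FROM t1;

USE db2;
SELECT plus_one(c1) FROM t2;
```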
|
## Sample Code
|
||||||
|
|
||||||
### Create UDF
|
### Sample scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c)
|
||||||
|
|
||||||
SQL command can be executed on the host where the generated UDF DLL resides to load the UDF DLL into TDengine. This operation cannot be done through REST interface or web console. Once created, any client of the current TDengine can use these UDF functions in their SQL commands. UDF are stored in the management node of TDengine. The UDFs loaded in TDengine would be still available after TDengine is restarted.
|
The bit_and function implements a bitwise AND across multiple input columns. If there is only one input column, that column is returned unchanged. The bit_and function ignores null values.
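A hedged example of registering and invoking it follows (the library path, table, and column names are illustrative):

```sql
CREATE FUNCTION bit_and AS "/home/taos/udf_example/libbitand.so" OUTPUTTYPE INT;
SELECT bit_and(flag1, flag2) FROM t1;
```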
|
||||||
|
|
||||||
When creating UDF, the type of UDF, i.e. a scalar function or aggregate function must be specified. If the specified type is wrong, the SQL statements using the function would fail with errors. The input type and output type don't need to be the same in UDF, but the input data type and output data type must be consistent with the UDF definition.
|
|
||||||
|
|
||||||
- Create Scalar Function
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE <supported TDengine type> [BUFSIZE B];
|
|
||||||
```
|
|
||||||
|
|
||||||
- userDefinedFunctionName:The function name to be used in SQL statement which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
|
|
||||||
- path:The absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
|
|
||||||
- outputtype:The output data type, the value is the literal string of the supported TDengine data type.
|
|
||||||
- B:the size of intermediate buffer, in bytes; it is an optional parameter and the range is [0,512].
|
|
||||||
|
|
||||||
For example, below SQL statement can be used to create a UDF from `add_one.so`.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE FUNCTION add_one AS "/home/taos/udf_example/add_one.so" OUTPUTTYPE INT;
|
|
||||||
```
|
|
||||||
|
|
||||||
- Create Aggregate Function
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE AGGREGATE FUNCTION userDefinedFunctionName AS "/absolute/path/to/userDefinedFunctionName.so" OUTPUTTYPE <supported TDengine data type> [ BUFSIZE B ];
|
|
||||||
```
|
|
||||||
|
|
||||||
- userDefinedFunctionName:the function name to be used in SQL statement which must be consistent with the function name defined by `udfNormalFunc` and is also the name of the compiled DLL (.so file).
|
|
||||||
- path:the absolute path of the DLL file including the name of the shared object file (.so). The path needs to be quoted by single or double quotes.
|
|
||||||
- OUTPUTTYPE:the output data type, the value is the literal string of the type
|
|
||||||
- B:the size of intermediate buffer, in bytes; it's an optional parameter and the range is [0,512]
|
|
||||||
|
|
||||||
For details about how to use intermediate result, please refer to example program [demo.c](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c).
|
|
||||||
|
|
||||||
For example, below SQL statement can be used to create a UDF from `demo.so`.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE AGGREGATE FUNCTION demo AS "/home/taos/udf_example/demo.so" OUTPUTTYPE DOUBLE bufsize 14;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Manage UDF
|
|
||||||
|
|
||||||
- Delete UDF
|
|
||||||
|
|
||||||
```
|
|
||||||
DROP FUNCTION ids(X);
|
|
||||||
```
|
|
||||||
|
|
||||||
- ids(X):same as that in `CREATE FUNCTION` statement
|
|
||||||
|
|
||||||
```sql
|
|
||||||
DROP FUNCTION add_one;
|
|
||||||
```
|
|
||||||
|
|
||||||
- Show Available UDF
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW FUNCTIONS;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Use UDF
|
|
||||||
|
|
||||||
The function name specified when creating UDF can be used directly in SQL statements, just like builtin functions.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT X(c) FROM table/STable;
|
|
||||||
```
|
|
||||||
|
|
||||||
The above SQL statement invokes function X for column c.
|
|
||||||
|
|
||||||
## Restrictions for UDF
|
|
||||||
|
|
||||||
In the current version, there are some restrictions on UDF:
|
|
||||||
|
|
||||||
1. Only Linux is supported when creating and invoking UDF for both client side and server side
|
|
||||||
2. UDF can't be mixed with builtin functions
|
|
||||||
3. Only one UDF can be used in a SQL statement
|
|
||||||
4. Only a single column is supported as input for UDF
|
|
||||||
5. Once created successfully, a UDF is persisted in the MNode of TDengine
|
|
||||||
6. UDF can't be created through REST interface
|
|
||||||
7. The function name used when creating UDF in SQL must be consistent with the function name defined in the DLL, i.e. the name defined by `udfNormalFunc`
|
|
||||||
8. The name of a UDF should not conflict with any of TDengine's built-in functions
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
### Scalar function example [add_one](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/add_one.c)
|
|
||||||
|
|
||||||
<details>
|
<details>
|
||||||
<summary>add_one.c</summary>
|
<summary>bit_and.c</summary>
|
||||||
|
|
||||||
```c
|
```c
|
||||||
{{#include tests/script/sh/add_one.c}}
|
{{#include tests/script/sh/bit_and.c}}
|
||||||
```
|
```
|
||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
|
||||||
### Aggregate function example [abs_max](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/abs_max.c)
|
### Sample aggregate function: [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c)
|
||||||
|
|
||||||
|
The l2norm function computes the L2 (Euclidean) norm of all data in the input column: it squares each value, sums the squares, and takes the square root of the sum.
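A hedged example of registering and invoking it follows; the library path, table, and column names are illustrative, and BUFSIZE is sized here for a single double-precision intermediate value:

```sql
CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE BUFSIZE 8;
SELECT l2norm(col1) FROM t1;
```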
|
||||||
|
|
||||||
<details>
|
<details>
|
||||||
<summary>abs_max.c</summary>
|
<summary>l2norm.c</summary>
|
||||||
|
|
||||||
```c
|
```c
|
||||||
{{#include tests/script/sh/abs_max.c}}
|
{{#include tests/script/sh/l2norm.c}}
|
||||||
```
|
|
||||||
|
|
||||||
</details>
|
|
||||||
|
|
||||||
### Example for using intermediate result [demo](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/demo.c)
|
|
||||||
|
|
||||||
<details>
|
|
||||||
<summary>demo.c</summary>
|
|
||||||
|
|
||||||
```c
|
|
||||||
{{#include tests/script/sh/demo.c}}
|
|
||||||
```
|
```
|
||||||
|
|
||||||
</details>
|
</details>
|
||||||
|
|
|
@ -1,3 +1,3 @@
|
||||||
```c
|
```c
|
||||||
{{#include docs/examples/c/subscribe_demo.c}}
|
{{#include docs/examples/c/tmq_example.c}}
|
||||||
```
|
```
|
|
@ -1,7 +1,11 @@
|
||||||
```java
|
```java
|
||||||
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
|
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
|
||||||
|
```
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
|
||||||
|
```
|
||||||
|
```java
|
||||||
|
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
|
||||||
```
|
```
|
||||||
:::note
|
|
||||||
For now Java connector doesn't provide asynchronous subscription, but `TimerTask` can be used to achieve similar purpose.
|
|
||||||
|
|
||||||
:::
|
|
|
@ -1,3 +1,3 @@
|
||||||
```py
|
```py
|
||||||
{{#include docs/examples/python/subscribe_demo.py}}
|
{{#include docs/examples/python/tmq_example.py}}
|
||||||
```
|
```
|
|
@ -1,3 +1,3 @@
|
||||||
```rs
|
```rust
|
||||||
{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
|
{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
|
||||||
```
|
```
|
|
@ -2,13 +2,12 @@
|
||||||
title: Developer Guide
|
title: Developer Guide
|
||||||
---
|
---
|
||||||
|
|
||||||
To develop an application to process time-series data using TDengine, we recommend taking the following steps:
|
Before creating an application to process time-series data with TDengine, consider the following:
|
||||||
|
1. Choose the method to connect to TDengine. TDengine offers a REST API that can be used with any programming language. It also has connectors for a variety of languages.
|
||||||
1. Choose the method to connect to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
|
2. Design the data model based on your own use cases. Consider the main [concepts](/concept/) of TDengine, including "one table per data collection point" and the supertable. Learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you decide to create one or more databases and design a supertable schema that fit your data.
|
||||||
2. Design the data model based on your own use cases. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" (STable) concept; learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases, and you should design the STable schema to fit your data.
|
|
||||||
3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
|
3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
|
||||||
4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL.
|
4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL.
|
||||||
5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink.
|
5. If you want to run real-time analysis based on time series data, including various dashboards, use the TDengine stream processing component instead of deploying complex systems such as Spark or Flink.
|
||||||
6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
|
6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
|
||||||
7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
|
7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
|
||||||
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
|
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
|
||||||
|
|
|
@ -1,126 +0,0 @@
|
||||||
---
|
|
||||||
title: Deployment
|
|
||||||
---
|
|
||||||
|
|
||||||
## Prerequisites
|
|
||||||
|
|
||||||
### Step 1
|
|
||||||
|
|
||||||
The FQDN of all hosts must be setup properly. All FQDNs need to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host, you can do this by using the `ping` command.
|
|
||||||
|
|
||||||
The command `hostname -f` can be executed to get the hostname on any host. `ping <FQDN>` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised, to make any two hosts accessible to each other.
|
|
||||||
|
|
||||||
:::note
|
|
||||||
|
|
||||||
- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
|
|
||||||
|
|
||||||
- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
### Step 2
|
|
||||||
|
|
||||||
If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/\*` assuming the `dataDir` is configured as `/var/lib/taos`.
|
|
||||||
|
|
||||||
:::note
|
|
||||||
|
|
||||||
As a best practice, before cleaning up any data files or directories, please ensure that your data has been backed up correctly, if required by your data integrity, backup, security, or other standard operating protocols (SOP).
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
### Step 3
|
|
||||||
|
|
||||||
Now it's time to install TDengine on all hosts but without starting `taosd`. Note that the versions on all hosts should be same. If you are prompted to input the existing TDengine cluster, simply press carriage return to ignore the prompt. `install.sh -e no` can also be used to disable this prompt. For details please refer to [Install and Uninstall](/operation/pkg-install).
|
|
||||||
|
|
||||||
### Step 4
|
|
||||||
|
|
||||||
Now each physical node (referred to, hereinafter, as `dnode` which is an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host. Multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
|
|
||||||
|
|
||||||
```c
|
|
||||||
// firstEp is the end point to connect to when any dnode starts
|
|
||||||
firstEp h1.taosdata.com:6030
|
|
||||||
|
|
||||||
// must be configured to the FQDN of the host where the dnode is launched
|
|
||||||
fqdn h1.taosdata.com
|
|
||||||
|
|
||||||
// the port used by the dnode, default is 6030
|
|
||||||
serverPort 6030
|
|
||||||
|
|
||||||
// only necessary when replica is configured to an even number
|
|
||||||
#arbitrator ha.taosdata.com:6042
|
|
||||||
```
|
|
||||||
|
|
||||||
`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting.
|
|
||||||
|
|
||||||
For all the dnodes in a TDengine cluster, the below parameters must be configured exactly the same, any node whose configuration is different from dnodes already in the cluster can't join the cluster.
|
|
||||||
|
|
||||||
| **#** | **Parameter** | **Definition** |
|
|
||||||
| ----- | -------------- | ------------------------------------------------------------- |
|
|
||||||
| 1 | statusInterval | The time interval for which dnode reports its status to mnode |
|
|
||||||
| 2 | timezone | Time Zone where the server is located |
|
|
||||||
| 3 | locale | Location code of the system |
|
|
||||||
| 4 | charset | Character set of the system |
|
|
||||||
|
|
||||||
## Start Cluster
|
|
||||||
|
|
||||||
In the following example we assume that first dnode has FQDN h1.taosdata.com and the second dnode has FQDN h2.taosdata.com.
|
|
||||||
|
|
||||||
### Start The First DNODE
|
|
||||||
|
|
||||||
Start the first dnode following the instructions in [Get Started](/get-started/). Then launch TDengine CLI `taos` and execute command `show dnodes`, the output is as following for example:
|
|
||||||
|
|
||||||
```
|
|
||||||
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
|
|
||||||
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
|
|
||||||
|
|
||||||
Server is Enterprise trial Edition, ver:3.0.0.0 and will never expire.
|
|
||||||
|
|
||||||
taos> show dnodes;
|
|
||||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
|
||||||
============================================================================================================================================
|
|
||||||
1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
|
|
||||||
Query OK, 1 rows affected (0.007984s)
|
|
||||||
|
|
||||||
taos>
|
|
||||||
```
|
|
||||||
|
|
||||||
From the above output, it is shown that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster.
|
|
||||||
|
|
||||||
### Start Other DNODEs
|
|
||||||
|
|
||||||
There are a few steps necessary to add other dnodes in the cluster.
|
|
||||||
|
|
||||||
Let's assume we are starting the second dnode with FQDN, h2.taosdata.com. Firstly we make sure the configuration is correct.
|
|
||||||
|
|
||||||
```c
|
|
||||||
// firstEp is the end point to connect to when any dnode starts
|
|
||||||
firstEp h1.taosdata.com:6030
|
|
||||||
|
|
||||||
// must be configured to the FQDN of the host where the dnode is launched
|
|
||||||
fqdn h2.taosdata.com
|
|
||||||
|
|
||||||
// the port used by the dnode, default is 6030
|
|
||||||
serverPort 6030
|
|
||||||
|
|
||||||
```
|
|
||||||
|
|
||||||
Secondly, we can start `taosd` as instructed in [Get Started](/get-started/).
|
|
||||||
|
|
||||||
Then, on the first dnode i.e. h1.taosdata.com in our example, use TDengine CLI `taos` to execute the following command to add the end point of the dnode in the cluster. In the command "fqdn:port" should be quoted using double quotes.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE DNODE "h2.taos.com:6030";
|
|
||||||
```
|
|
||||||
|
|
||||||
Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos` to show whether the second dnode has been added in the cluster successfully or not.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW DNODES;
|
|
||||||
```
|
|
||||||
|
|
||||||
If the status of the newly added dnode is offline, please check:
|
|
||||||
|
|
||||||
- Whether the `taosd` process is running properly or not
|
|
||||||
- In the log file `taosdlog.0` to see whether the fqdn and port are correct
|
|
||||||
|
|
||||||
The above process can be repeated to add more dnodes in the cluster.
|
|
|
@ -1,138 +0,0 @@
|
||||||
---
|
|
||||||
sidebar_label: Operation
|
|
||||||
title: Manage DNODEs
|
|
||||||
---
|
|
||||||
|
|
||||||
The previous section, [Deployment],(/cluster/deploy) showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnode(s) in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnode(s) can be added to scale out the cluster, an existing dnode can be removed and you can even perform load balancing manually, if necessary.
|
|
||||||
|
|
||||||
:::note
|
|
||||||
All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
## Show DNODEs
|
|
||||||
|
|
||||||
The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW DNODES;
|
|
||||||
```
|
|
||||||
|
|
||||||
Below is the example output of this command.
|
|
||||||
|
|
||||||
```
|
|
||||||
taos> show dnodes;
|
|
||||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
|
||||||
======================================================================================================================================
|
|
||||||
1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
|
|
||||||
Query OK, 1 row(s) in set (0.008298s)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Show VGROUPs
|
|
||||||
|
|
||||||
To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes.
|
|
||||||
|
|
||||||
Launch TDengine CLI `taos` and execute below command:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
USE SOME_DATABASE;
|
|
||||||
SHOW VGROUPS;
|
|
||||||
```
|
|
||||||
|
|
||||||
The output is like below:
|
|
||||||
|
|
||||||
taos> use db;
|
|
||||||
Database changed.
|
|
||||||
|
|
||||||
taos> show vgroups;
|
|
||||||
vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
|
|
||||||
==========================================================================================
|
|
||||||
14 | 38000 | ready | 1 | 1 | leader | 0 |
|
|
||||||
15 | 38000 | ready | 1 | 1 | leader | 0 |
|
|
||||||
16 | 38000 | ready | 1 | 1 | leader | 0 |
|
|
||||||
17 | 38000 | ready | 1 | 1 | leader | 0 |
|
|
||||||
18 | 37001 | ready | 1 | 1 | leader | 0 |
|
|
||||||
19 | 37000 | ready | 1 | 1 | leader | 0 |
|
|
||||||
20 | 37000 | ready | 1 | 1 | leader | 0 |
|
|
||||||
21 | 37000 | ready | 1 | 1 | leader | 0 |
|
|
||||||
Query OK, 8 row(s) in set (0.001154s)
|
|
||||||
|
|
||||||
````
|
|
||||||
|
|
||||||
## Add DNODE
|
|
||||||
|
|
||||||
Launch TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EPI (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE DNODE "fqdn:port";
|
|
||||||
````
|
|
||||||
|
|
||||||
The example output is as below:
|
|
||||||
|
|
||||||
```
|
|
||||||
taos> create dnode "localhost:7030";
|
|
||||||
Query OK, 0 of 0 row(s) in database (0.008203s)
|
|
||||||
|
|
||||||
taos> show dnodes;
|
|
||||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
|
||||||
======================================================================================================================================
|
|
||||||
1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
|
|
||||||
2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
|
|
||||||
Query OK, 2 row(s) in set (0.001017s)
|
|
||||||
```
|
|
||||||
|
|
||||||
It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.
|
|
||||||
|
|
||||||
```
|
|
||||||
taos> show dnodes;
|
|
||||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
|
||||||
======================================================================================================================================
|
|
||||||
1 | localhost:6030 | 3 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
|
|
||||||
2 | localhost:7030 | 6 | 8 | ready | any | 2022-04-19 08:14:59.165 | |
|
|
||||||
Query OK, 2 row(s) in set (0.001316s)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Drop DNODE
|
|
||||||
|
|
||||||
Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
DROP DNODE "fqdn:port";
|
|
||||||
```
|
|
||||||
|
|
||||||
or
|
|
||||||
|
|
||||||
```sql
|
|
||||||
DROP DNODE dnodeId;
|
|
||||||
```
|
|
||||||
|
|
||||||
The example output is below:
|
|
||||||
|
|
||||||
```
|
|
||||||
taos> show dnodes;
|
|
||||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
|
||||||
======================================================================================================================================
|
|
||||||
1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
|
|
||||||
2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
|
|
||||||
Query OK, 2 row(s) in set (0.001017s)
|
|
||||||
|
|
||||||
taos> drop dnode 2;
|
|
||||||
Query OK, 0 of 0 row(s) in database (0.000518s)
|
|
||||||
|
|
||||||
taos> show dnodes;
|
|
||||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
|
||||||
======================================================================================================================================
|
|
||||||
1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
|
|
||||||
Query OK, 1 row(s) in set (0.001137s)
|
|
||||||
```
|
|
||||||
|
|
||||||
In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.
|
|
||||||
|
|
||||||
:::note
|
|
||||||
|
|
||||||
- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
|
|
||||||
- Please note that `drop dnode` is different from stopping `taosd` process. `drop dnode` just removes the dnode out of TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
|
|
||||||
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
|
|
||||||
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
|
|
||||||
|
|
||||||
:::
|
|
|
@ -1 +0,0 @@
|
||||||
label: Cluster
|
|
|
@ -0,0 +1,193 @@
|
||||||
|
---
|
||||||
|
sidebar_label: Manual Deployment
|
||||||
|
title: Manual Deployment and Management
|
||||||
|
---
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
|
||||||
|
### Step 0
|
||||||
|
|
||||||
|
The FQDN of all hosts must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host; for example, you can verify this by using the `ping` command. If you have a DNS server on your network, contact your network administrator for assistance.
|
||||||
|
|
||||||
|
### Step 1
|
||||||
|
|
||||||
|
If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling, please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/*` assuming the `dataDir` is configured as `/var/lib/taos`.
|
||||||
|
|
||||||
|
:::note
|
||||||
|
FQDN information is written to file. If you have started TDengine without configuring or changing the FQDN, ensure that data is backed up or no longer needed before running the `rm -rf /var/lib/taos/*` command.
|
||||||
|
:::
|
||||||
|
|
||||||
|
:::note
|
||||||
|
- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
|
||||||
|
:::
|
||||||
|
|
||||||
|
### Step 2
|
||||||
|
|
||||||
|
- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.
|
||||||
|
|
||||||
|
### Step 3
|
||||||
|
|
||||||
|
Now it's time to install TDengine on all hosts but without starting `taosd`. Note that the versions on all hosts should be same. If you are prompted to input the existing TDengine cluster, simply press carriage return to ignore the prompt.
|
||||||
|
|
||||||
|
### Step 4
|
||||||
|
|
||||||
|
Now each physical node (referred to, hereinafter, as `dnode` which is an abbreviation for "data node") of TDengine needs to be configured properly.
|
||||||
|
|
||||||
|
To get the hostname on any host, the command `hostname -f` can be executed.
|
||||||
|
|
||||||
|
`ping <FQDN>` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised, to make any two hosts accessible to each other. Hosts that are not accessible to each other cannot form a cluster.
|
||||||
|
|
||||||
|
On the physical machine running the application, ping the dnode that is running taosd. If the dnode is not accessible, the application cannot connect to taosd. In this case, verify the DNS and hosts settings on the physical node running the application.
|
||||||
|
|
||||||
|
The end point of each dnode is the output hostname and port, such as h1.tdengine.com:6030.
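For example, assuming a first host h1.tdengine.com and a second host h2.tdengine.com, the checks might look like this:

```bash
# On each host, confirm its own FQDN
hostname -f

# From h1.tdengine.com, confirm that h2.tdengine.com is reachable (and vice versa)
ping -c 3 h2.tdengine.com
```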
|
||||||
|
|
||||||
|
### Step 5
|
||||||
|
|
||||||
|
Modify the TDengine configuration file `/etc/taos/taos.cfg` on each node. Assuming the first dnode of TDengine cluster is "h1.tdengine.com:6030", its `taos.cfg` is configured as following.
|
||||||
|
|
||||||
|
```c
|
||||||
|
// firstEp is the end point to connect to when any dnode starts
|
||||||
|
firstEp h1.tdengine.com:6030
|
||||||
|
|
||||||
|
// must be configured to the FQDN of the host where the dnode is launched
|
||||||
|
fqdn h1.tdengine.com
|
||||||
|
|
||||||
|
// the port used by the dnode, default is 6030
|
||||||
|
serverPort 6030
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. Retain the default values for other parameters.
|
||||||
|
|
||||||
|
For all the dnodes in a TDengine cluster, the below parameters must be configured exactly the same, any node whose configuration is different from dnodes already in the cluster can't join the cluster.
|
||||||
|
|
||||||
|
| **#** | **Parameter** | **Definition** |
|
||||||
|
| ----- | ------------------ | ------------------------------------------- |
|
||||||
|
| 1 | statusInterval | The interval by which dnode reports its status to mnode |
|
||||||
|
| 2 | timezone | Timezone |
|
||||||
|
| 3 | locale | System region and encoding |
|
||||||
|
| 4 | charset | Character set |
|
||||||
|
|
||||||
|
## Start Cluster
|
||||||
|
|
||||||
|
The first dnode can be started following the instructions in [Get Started](/get-started/). Then TDengine CLI `taos` can be launched to execute command `show dnodes`, the output is as following for example:
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> show dnodes;
|
||||||
|
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||||
|
============================================================================================================================================
|
||||||
|
1 | h1.tdengine.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
|
||||||
|
Query OK, 1 rows affected (0.007984s)
|
||||||
|
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
From the above output, it is shown that the end point of the started dnode is "h1.tdengine.com:6030", which is the `firstEp` of the cluster.
|
||||||
|
|
||||||
|
## Add DNODE
|
||||||
|
|
||||||
|
There are a few steps necessary to add other dnodes in the cluster.
|
||||||
|
|
||||||
|
First, start `taosd` on the new dnode as instructed in [Get Started](/get-started/).
|
||||||
|
|
||||||
|
Then, on the first dnode i.e. h1.tdengine.com in our example, use TDengine CLI `taos` to execute the following command:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE DNODE "h2.taos.com:6030";
|
||||||
|
```
|
||||||
|
|
||||||
|
This adds the end point of the new dnode (from Step 4) into the end point list of the cluster. In the command "fqdn:port" should be quoted using double quotes. Change `"h2.taos.com:6030"` to the end point of your new dnode.
|
||||||
|
|
||||||
|
Then on the first dnode h1.tdengine.com, execute `show dnodes` in `taos`
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SHOW DNODES;
|
||||||
|
```
|
||||||
|
|
||||||
|
to show whether the second dnode has been added in the cluster successfully or not. If the status of the newly added dnode is offline, please check:
|
||||||
|
|
||||||
|
- Whether the `taosd` process is running properly or not
|
||||||
|
- Check the log file `taosdlog.0` to see whether the fqdn and port are correct, and add the correct end point if not.
|
||||||
|
The above process can be repeated to add more dnodes in the cluster.
|
||||||
|
|
||||||
|
:::tip
|
||||||
|
|
||||||
|
Any node that is in the cluster and online can be the firstEp of new nodes.
|
||||||
|
Nodes use the firstEp parameter only when joining a cluster for the first time. After a node has joined the cluster, it stores the latest mnode in its end point list and no longer makes use of firstEp.
|
||||||
|
|
||||||
|
However, firstEp is used by clients that connect to the cluster. For example, if you run TDengine CLI `taos` without arguments, it connects to the firstEp by default.
|
||||||
|
|
||||||
|
Two dnodes that are launched without a firstEp value operate independently of each other. It is not possible to add one dnode to the other dnode and form a cluster. It is also not possible to form two independent clusters into a new cluster.
|
||||||
|
|
||||||
|
:::
|
||||||
|
|
||||||
|
## Show DNODEs
|
||||||
|
|
||||||
|
The below command can be executed in TDengine CLI `taos`
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SHOW DNODES;
|
||||||
|
```
|
||||||
|
|
||||||
|
to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.
|
||||||
|
|
||||||
|
Below is the example output of this command.
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> show dnodes;
|
||||||
|
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||||
|
============================================================================================================================================
|
||||||
|
1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
|
||||||
|
Query OK, 1 rows affected (0.006684s)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Show VGROUPs
|
||||||
|
|
||||||
|
To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnode is scheduled automatically by mnode based on system resources of the dnodes.
|
||||||
|
|
||||||
|
Launch TDengine CLI `taos` and execute below command:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
USE SOME_DATABASE;
|
||||||
|
SHOW VGROUPS;
|
||||||
|
```
|
||||||
|
|
||||||
|
The example output is below:
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> use db;
|
||||||
|
Database changed.
|
||||||
|
|
||||||
|
taos> show vgroups;
|
||||||
|
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | status | nfiles | file_size | tsma |
|
||||||
|
================================================================================================================================================================================================
|
||||||
|
2 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
|
||||||
|
3 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
|
||||||
|
4 | db | 0 | 1 | leader | NULL | NULL | NULL | NULL | NULL | NULL | NULL | 0 |
|
||||||
|
Query OK, 8 row(s) in set (0.001154s)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Drop DNODE
|
||||||
|
|
||||||
|
Before running the TDengine CLI, ensure that the taosd process has been stopped on the dnode that you want to delete.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
DROP DNODE "fqdn:port";
|
||||||
|
```
|
||||||
|
|
||||||
|
or
|
||||||
|
|
||||||
|
```sql
|
||||||
|
DROP DNODE dnodeId;
|
||||||
|
```
|
||||||
|
|
||||||
|
to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`.
|
||||||
|
|
||||||
|
:::warning
|
||||||
|
|
||||||
|
- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
|
||||||
|
- Please note that `drop dnode` is different from stopping `taosd` process. `drop dnode` just removes the dnode out of TDengine cluster. Only after a dnode is dropped, can the corresponding `taosd` process be stopped.
|
||||||
|
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
|
||||||
|
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
|
||||||
|
|
||||||
|
:::
|
|
@ -0,0 +1,393 @@
|
||||||
|
---
|
||||||
|
sidebar_label: Kubernetes
|
||||||
|
title: Deploying a TDengine Cluster in Kubernetes
|
||||||
|
---
|
||||||
|
|
||||||
|
TDengine is a cloud-native time-series database that can be deployed on Kubernetes. This document gives a step-by-step description of how you can use YAML files to create a TDengine cluster and introduces common operations for TDengine in a Kubernetes environment.
|
||||||
|
|
||||||
|
## Prerequisites
|
||||||
|
|
||||||
|
Before deploying TDengine on Kubernetes, perform the following:
|
||||||
|
|
||||||
|
* These steps are compatible with Kubernetes v1.5 and later versions.
|
||||||
|
* Install and configure minikube, kubectl, and helm.
|
||||||
|
* Install and deploy Kubernetes and ensure that it can be accessed and used normally. Update any container registries or other services as necessary.
|
||||||
|
|
||||||
|
You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
|
||||||
|
|
||||||
|
## Configure the service
|
||||||
|
|
||||||
|
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taosd`) for use in the next step. Add the ports required by TDengine:
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
---
|
||||||
|
apiVersion: v1
|
||||||
|
kind: Service
|
||||||
|
metadata:
|
||||||
|
name: "taosd"
|
||||||
|
labels:
|
||||||
|
app: "tdengine"
|
||||||
|
spec:
|
||||||
|
ports:
|
||||||
|
- name: tcp6030
|
||||||
|
- protocol: "TCP"
|
||||||
|
port: 6030
|
||||||
|
- name: tcp6041
|
||||||
|
- protocol: "TCP"
|
||||||
|
port: 6041
|
||||||
|
selector:
|
||||||
|
app: "tdengine"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Configure the service as StatefulSet
|
||||||
|
|
||||||
|
Configure the TDengine service as a StatefulSet.
|
||||||
|
Create the `tdengine.yaml` file and set `replicas` to 3. In this example, the region is set to Asia/Shanghai and 10 GB of standard storage are allocated per node. You can change the configuration based on your environment and business requirements.
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
---
|
||||||
|
apiVersion: apps/v1
|
||||||
|
kind: StatefulSet
|
||||||
|
metadata:
|
||||||
|
name: "tdengine"
|
||||||
|
labels:
|
||||||
|
app: "tdengine"
|
||||||
|
spec:
|
||||||
|
serviceName: "taosd"
|
||||||
|
replicas: 3
|
||||||
|
updateStrategy:
|
||||||
|
type: RollingUpdate
|
||||||
|
selector:
|
||||||
|
matchLabels:
|
||||||
|
app: "tdengine"
|
||||||
|
template:
|
||||||
|
metadata:
|
||||||
|
name: "tdengine"
|
||||||
|
labels:
|
||||||
|
app: "tdengine"
|
||||||
|
spec:
|
||||||
|
containers:
|
||||||
|
- name: "tdengine"
|
||||||
|
image: "tdengine/tdengine:3.0.0.0"
|
||||||
|
imagePullPolicy: "IfNotPresent"
|
||||||
|
ports:
|
||||||
|
- name: tcp6030
|
||||||
|
protocol: "TCP"
|
||||||
|
containerPort: 6030
|
||||||
|
- name: tcp6041
|
||||||
|
protocol: "TCP"
|
||||||
|
containerPort: 6041
|
||||||
|
env:
|
||||||
|
# POD_NAME for FQDN config
|
||||||
|
- name: POD_NAME
|
||||||
|
valueFrom:
|
||||||
|
fieldRef:
|
||||||
|
fieldPath: metadata.name
|
||||||
|
# SERVICE_NAME and NAMESPACE for fqdn resolve
|
||||||
|
- name: SERVICE_NAME
|
||||||
|
value: "taosd"
|
||||||
|
- name: STS_NAME
|
||||||
|
value: "tdengine"
|
||||||
|
- name: STS_NAMESPACE
|
||||||
|
valueFrom:
|
||||||
|
fieldRef:
|
||||||
|
fieldPath: metadata.namespace
|
||||||
|
# TZ for timezone settings; we recommend always setting it.
|
||||||
|
- name: TZ
|
||||||
|
value: "Asia/Shanghai"
|
||||||
|
# Environment variables with the TAOS_ prefix are written to taos.cfg: the prefix is stripped and the rest is converted to camelCase.
|
||||||
|
- name: TAOS_SERVER_PORT
|
||||||
|
value: "6030"
|
||||||
|
# Must be set if you want a cluster.
|
||||||
|
- name: TAOS_FIRST_EP
|
||||||
|
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
|
||||||
|
# TAOS_FQDN should always be set in k8s env.
|
||||||
|
- name: TAOS_FQDN
|
||||||
|
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
|
||||||
|
volumeMounts:
|
||||||
|
- name: taosdata
|
||||||
|
mountPath: /var/lib/taos
|
||||||
|
readinessProbe:
|
||||||
|
exec:
|
||||||
|
command:
|
||||||
|
- taos-check
|
||||||
|
initialDelaySeconds: 5
|
||||||
|
timeoutSeconds: 5000
|
||||||
|
livenessProbe:
|
||||||
|
exec:
|
||||||
|
command:
|
||||||
|
- taos-check
|
||||||
|
initialDelaySeconds: 15
|
||||||
|
periodSeconds: 20
|
||||||
|
volumeClaimTemplates:
|
||||||
|
- metadata:
|
||||||
|
name: taosdata
|
||||||
|
spec:
|
||||||
|
accessModes:
|
||||||
|
- "ReadWriteOnce"
|
||||||
|
storageClassName: "standard"
|
||||||
|
resources:
|
||||||
|
requests:
|
||||||
|
storage: "10Gi"
|
||||||
|
```
|
||||||
|
|
||||||
|
## Use kubectl to deploy TDengine
|
||||||
|
|
||||||
|
Run the following commands:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
kubectl apply -f taosd-service.yaml
|
||||||
|
kubectl apply -f tdengine.yaml
|
||||||
|
```
|
||||||
|
|
||||||
|
The preceding configuration generates a TDengine cluster with three nodes in which dnodes are automatically configured. You can run the `show dnodes` command to query the nodes in the cluster:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
|
||||||
|
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
|
||||||
|
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
|
||||||
|
```
|
||||||
|
|
||||||
|
The output is as follows:
|
||||||
|
|
||||||
|
```
|
||||||
|
taos> show dnodes
|
||||||
|
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||||
|
============================================================================================================================================
|
||||||
|
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
|
||||||
|
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
|
||||||
|
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
|
||||||
|
Query OK, 3 rows in database (0.003655s)
|
||||||
|
```
|
||||||
|
|
||||||
|
## Enable port forwarding
|
||||||
|
|
||||||
|
The kubectl port forwarding feature allows applications to access the TDengine cluster running on Kubernetes.
|
||||||
|
|
||||||
|
```
|
||||||
|
kubectl port-forward tdengine-0 6041:6041 &
|
||||||
|
```
|
||||||
|
|
||||||
|
Use curl to verify that the TDengine REST API is working on port 6041:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
|
||||||
|
Handling connection for 6041
|
||||||
|
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8],["wal_roll_period","INT",4],["wal_segment_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Enable the dashboard for visualization

The minikube dashboard command enables visualized cluster management.

```
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```

In some public clouds, minikube cannot be remotely accessed if it is bound to 127.0.0.1. In this case, use the kubectl proxy command to map the port to 0.0.0.0. Then, you can access the dashboard by using a web browser to open the dashboard URL above on the public IP address and port of the virtual machine.

```
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
```

## Scaling Out Your Cluster

A TDengine cluster deployed with a StatefulSet can be scaled out simply by increasing the number of replicas; the new dnodes are configured and join the cluster automatically:

```bash
kubectl scale statefulsets tdengine --replicas=4
```

The preceding command increases the number of replicas to 4. After running this command, query the pod status:

```bash
kubectl get pods -l app=tdengine
```

The output is as follows:

```
NAME         READY   STATUS    RESTARTS   AGE
tdengine-0   1/1     Running   0          161m
tdengine-1   1/1     Running   0          161m
tdengine-2   1/1     Running   0          32m
tdengine-3   1/1     Running   0          32m
```

The status of all pods is Running. Once the pod status changes to Ready, you can check the dnode status:

```bash
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
```

The following output shows that the TDengine cluster has been expanded to 4 replicas:

```
taos> show dnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
  1 | tdengine-0.taosd.default.sv... |      0 |            256 | ready  | 2022-08-10 13:14:57.285 |  |
  2 | tdengine-1.taosd.default.sv... |      0 |            256 | ready  | 2022-08-10 13:15:11.302 |  |
  3 | tdengine-2.taosd.default.sv... |      0 |            256 | ready  | 2022-08-10 13:15:23.290 |  |
  4 | tdengine-3.taosd.default.sv... |      0 |            256 | ready  | 2022-08-10 13:33:16.039 |  |
Query OK, 4 rows in database (0.008377s)
```

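As noted above, the new dnode should only be checked once its pod is Ready. One way to block until that happens, offered here purely as a convenience sketch rather than part of the original procedure, is `kubectl wait`:

```bash
# Wait (up to 5 minutes) for all TDengine pods to report Ready, then check the dnodes
kubectl wait --for=condition=Ready pod -l app=tdengine --timeout=300s
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
```
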
## Scaling In Your Cluster

When you scale in a TDengine cluster, your data is migrated to different nodes. You must run the `drop dnode` command in TDengine to remove dnodes before scaling in your Kubernetes environment.

Note: In a Kubernetes StatefulSet service, the newest pods are always removed first. For this reason, when you scale in your TDengine cluster, ensure that you drop the newest dnodes.

```
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
```

```bash
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"

taos> show dnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
  1 | tdengine-0.taosd.default.sv... |      0 |            256 | ready  | 2022-08-10 13:14:57.285 |  |
  2 | tdengine-1.taosd.default.sv... |      0 |            256 | ready  | 2022-08-10 13:15:11.302 |  |
  3 | tdengine-2.taosd.default.sv... |      0 |            256 | ready  | 2022-08-10 13:15:23.290 |  |
Query OK, 3 rows in database (0.004861s)
```

Verify that the dnode has been successfully removed by running the `kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"` command. Then run the following command to remove the pod:

```
kubectl scale statefulsets tdengine --replicas=3
```

The newest pod in the deployment is removed. Run the `kubectl get pods -l app=tdengine` command to query the pod status:

```
$ kubectl get pods -l app=tdengine
NAME         READY   STATUS    RESTARTS   AGE
tdengine-0   1/1     Running   0          4m7s
tdengine-1   1/1     Running   0          3m55s
tdengine-2   1/1     Running   0          2m28s
```

After the pod has been removed, manually delete the PersistentVolumeClaim (PVC). Otherwise, future scale-outs will attempt to use existing data.

```bash
$ kubectl delete pvc taosdata-tdengine-3
```

Your cluster has now been safely scaled in, and you can scale it out again as necessary.

```bash
$ kubectl scale statefulsets tdengine --replicas=4
statefulset.apps/tdengine scaled
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME         READY   STATUS              RESTARTS   AGE
tdengine-0   1/1     Running             0          35m
tdengine-1   1/1     Running             0          34m
tdengine-2   1/1     Running             0          12m
tdengine-3   0/1     ContainerCreating   0          4s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME         READY   STATUS    RESTARTS   AGE
tdengine-0   1/1     Running   0          35m
tdengine-1   1/1     Running   0          34m
tdengine-2   1/1     Running   0          12m
tdengine-3   0/1     Running   0          7s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"

taos> show dnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
  1 | tdengine-0.taosd.default.sv... |      0 |              4 | ready   | 2022-07-25 17:38:49.012 |  |
  2 | tdengine-1.taosd.default.sv... |      1 |              4 | ready   | 2022-07-25 17:39:01.517 |  |
  5 | tdengine-2.taosd.default.sv... |      0 |              4 | ready   | 2022-07-25 18:01:36.479 |  |
  6 | tdengine-3.taosd.default.sv... |      0 |              4 | ready   | 2022-07-25 18:13:54.411 |  |
Query OK, 4 row(s) in set (0.001348s)
```

## Remove a TDengine Cluster

To fully remove a TDengine cluster, you must delete its statefulset, svc, configmap, and pvc entries:

```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
```

## Troubleshooting

### Error 1

If you remove a pod without first running `drop dnode`, some TDengine nodes will go offline.

```
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"

taos> show dnodes
 id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
  1 | tdengine-0.taosd.default.sv... |      0 |              4 | ready   | 2022-07-25 17:38:49.012 |  |
  2 | tdengine-1.taosd.default.sv... |      1 |              4 | ready   | 2022-07-25 17:39:01.517 |  |
  5 | tdengine-2.taosd.default.sv... |      0 |              4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
  6 | tdengine-3.taosd.default.sv... |      0 |              4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
Query OK, 4 row(s) in set (0.001323s)
```

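If the pods disappeared because the StatefulSet was scaled in without `drop dnode`, one possible recovery path, assuming the PVCs for the removed pods still exist, is simply to scale back out so the offline dnodes rejoin the cluster. This is a sketch, not part of the original guide:

```bash
# Recreate the removed pods; with their PVCs intact the dnodes come back online
kubectl scale statefulsets tdengine --replicas=4
kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```
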
### Error 2

If the number of nodes after a scale-in is less than the value of the replica parameter, the cluster will go down:

Create a database with replica set to 2 and add data.

```bash
kubectl exec -i -t tdengine-0 -- \
  taos -s \
  "create database if not exists test replica 2;
   use test;
   create table if not exists t1(ts timestamp, n int);
   insert into t1 values(now, 1)(now+1s, 2);"
```

Scale in to one node:

```bash
kubectl scale statefulsets tdengine --replicas=1
```

In the TDengine CLI, you can see that no database operations succeed:

```
taos> show dnodes;
 id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
  1 | tdengine-0.taosd.default.sv... |      2 |    40 | ready   | any  | 2021-06-01 15:55:52.562 |  |
  2 | tdengine-1.taosd.default.sv... |      1 |    40 | offline | any  | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)

taos> show dnodes;
 id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
  1 | tdengine-0.taosd.default.sv... |      2 |    40 | ready   | any  | 2021-06-01 15:55:52.562 |  |
  2 | tdengine-1.taosd.default.sv... |      1 |    40 | offline | any  | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)

taos> use test;
Database changed.

taos> insert into t1 values(now, 3);

DB error: Unable to resolve FQDN (0.013874s)
```

@ -0,0 +1,298 @@
---
sidebar_label: Helm
title: Use Helm to deploy TDengine
---

Helm is a package manager for Kubernetes that can provide more capabilities in deploying on Kubernetes.

## Install Helm

```bash
curl -fsSL -o get_helm.sh \
  https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod +x get_helm.sh
./get_helm.sh
```

Helm uses the kubectl and kubeconfig configurations to perform Kubernetes operations. For more information, see the Rancher configuration for Kubernetes installation.

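Before installing the chart, it can be worth confirming that Helm and kubectl are pointed at the intended cluster. This check is not part of the original instructions; it is just a common sanity step:

```bash
# Confirm the Helm client version and the Kubernetes context that kubectl (and therefore Helm) will use
helm version
kubectl config current-context
```
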
## Install TDengine Chart

To use TDengine Chart, download it from GitHub:

```bash
wget https://github.com/taosdata/TDengine-Operator/raw/3.0/helm/tdengine-3.0.0.tgz
```

Query the storageclass of your Kubernetes deployment:

```bash
kubectl get storageclass
```

With minikube, the default value is standard.

Use Helm commands to install TDengine:

```bash
helm install tdengine tdengine-3.0.0.tgz \
  --set storage.className=<your storage class name>
```

You can configure a small storage size in minikube to ensure that your deployment does not exceed your available disk space.

```bash
helm install tdengine tdengine-3.0.0.tgz \
  --set storage.className=standard \
  --set storage.dataSize=2Gi \
  --set storage.logSize=10Mi
```

After TDengine is deployed, TDengine Chart outputs information about how to use TDengine:

```bash
export POD_NAME=$(kubectl get pods --namespace default \
  -l "app.kubernetes.io/name=tdengine,app.kubernetes.io/instance=tdengine" \
  -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
kubectl --namespace default exec -it $POD_NAME -- taos
```

You can test the deployment by creating a table:

```bash
kubectl --namespace default exec $POD_NAME -- \
  taos -s "create database test;
           use test;
           create table t1 (ts timestamp, n int);
           insert into t1 values(now, 1)(now + 1s, 2);
           select * from t1;"
```

## Configuring Values

You can configure custom parameters in TDengine with the `values.yaml` file.

Run the `helm show values` command to see all parameters supported by TDengine Chart.

```bash
helm show values tdengine-3.0.0.tgz
```

Save the output of this command as `values.yaml`. Then you can modify this file with your desired values and use it to deploy a TDengine cluster:

```bash
helm install tdengine tdengine-3.0.0.tgz -f values.yaml
```

The parameters are described as follows:

```yaml
# Default values for tdengine.
# This is a YAML-formatted file.
# Declare variables to be passed into helm templates.

replicaCount: 1

image:
  prefix: tdengine/tdengine
  #pullPolicy: Always
  # Overrides the image tag whose default is the chart appVersion.
  # tag: "3.0.0.0"

service:
  # ClusterIP is the default service type, use NodeIP only if you know what you are doing.
  type: ClusterIP
  ports:
    # TCP range required
    tcp: [6030, 6041, 6042, 6043, 6044, 6046, 6047, 6048, 6049, 6060]
    # UDP range
    udp: [6044, 6045]

# Set timezone here, not in taoscfg
timezone: "Asia/Shanghai"

resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

storage:
  # Set storageClassName for pvc. K8s uses the default storage class if not set.
  #
  className: ""
  dataSize: "100Gi"
  logSize: "10Gi"

nodeSelectors:
  taosd:
    # node selectors

clusterDomainSuffix: ""
# Config settings in taos.cfg file.
#
# The helm/k8s support will use environment variables for taos.cfg,
# converting an upper-snake-cased variable like `TAOS_DEBUG_FLAG`,
# to a camelCase taos config variable `debugFlag`.
#
# See the [Configuration Variables](../../reference/config)
#
# Note:
# 1. firstEp/secondEp: should not be set here; they are auto-generated at scale-up.
# 2. serverPort: should not be set; the default 6030 is used in many places.
# 3. fqdn: is auto-generated in Kubernetes; users do not need to set it.
# 4. role: currently role is not supported - every node is able to be mnode and vnode.
#
# Keep quotes "" around the value as shown below, whether or not the value is a number.
taoscfg:
  # Starts as cluster or not, must be 0 or 1.
  #   0: all pods will start as a separate TDengine server
  #   1: pods will start as TDengine server cluster. [default]
  CLUSTER: "1"

  # number of replications, for cluster only
  TAOS_REPLICA: "1"

  #
  # TAOS_NUM_OF_RPC_THREADS: number of threads for RPC
  #TAOS_NUM_OF_RPC_THREADS: "2"

  #
  # TAOS_NUM_OF_COMMIT_THREADS: number of threads to commit cache data
  #TAOS_NUM_OF_COMMIT_THREADS: "4"

  # enable/disable installation / usage report
  #TAOS_TELEMETRY_REPORTING: "1"

  # time interval of system monitor, seconds
  #TAOS_MONITOR_INTERVAL: "30"

  # time interval of dnode status reporting to mnode, seconds, for cluster only
  #TAOS_STATUS_INTERVAL: "1"

  # time interval of heart beat from shell to dnode, seconds
  #TAOS_SHELL_ACTIVITY_TIMER: "3"

  # minimum sliding window time, milli-second
  #TAOS_MIN_SLIDING_TIME: "10"

  # minimum time window, milli-second
  #TAOS_MIN_INTERVAL_TIME: "1"

  # the compressed rpc message, option:
  #  -1 (no compression)
  #   0 (all messages compressed)
  # > 0 (rpc message body larger than this value will be compressed)
  #TAOS_COMPRESS_MSG_SIZE: "-1"

  # max number of connections allowed in dnode
  #TAOS_MAX_SHELL_CONNS: "50000"

  # stop writing logs when the disk size of the log folder is less than this value
  #TAOS_MINIMAL_LOG_DIR_G_B: "0.1"

  # stop writing temporary files when the disk size of the tmp folder is less than this value
  #TAOS_MINIMAL_TMP_DIR_G_B: "0.1"

  # if disk free space is less than this value, the taosd service exits directly within the startup process
  #TAOS_MINIMAL_DATA_DIR_G_B: "0.1"

  # One mnode is equal to the number of vnode consumed
  #TAOS_MNODE_EQUAL_VNODE_NUM: "4"

  # enable/disable http service
  #TAOS_HTTP: "1"

  # enable/disable system monitor
  #TAOS_MONITOR: "1"

  # enable/disable async log
  #TAOS_ASYNC_LOG: "1"

  #
  # time of keeping log files, days
  #TAOS_LOG_KEEP_DAYS: "0"

  # The following parameters are used for debug purpose only.
  # debugFlag 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
  # 131: output warning and error
  # 135: output debug, warning and error
  # 143: output trace, debug, warning and error to log
  # 199: output debug, warning and error to both screen and file
  # 207: output trace, debug, warning and error to both screen and file
  #
  # debug flag for all log types, takes effect when set to a non-zero value
  #TAOS_DEBUG_FLAG: "143"

  # generate core file when the service crashes
  #TAOS_ENABLE_CORE_FILE: "1"
```

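For a quick start it is usually enough to override only a handful of these values. The following `values.yaml` sketch is illustrative only (the field names come from the listing above; the concrete values are merely examples) and leaves everything else at the chart defaults:

```yaml
# Minimal illustrative override file; pass it with: helm install tdengine tdengine-3.0.0.tgz -f values.yaml
replicaCount: 3

timezone: "Asia/Shanghai"

storage:
  className: "standard"
  dataSize: "10Gi"
  logSize: "1Gi"

taoscfg:
  CLUSTER: "1"
  TAOS_REPLICA: "3"
```
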
## Scaling Out

For information about scaling out your deployment, see the Kubernetes deployment documentation. Additional Helm-specific steps are described as follows.

First, obtain the name of the StatefulSet service for your deployment.

```bash
export STS_NAME=$(kubectl get statefulset \
  -l "app.kubernetes.io/name=tdengine" \
  -o jsonpath="{.items[0].metadata.name}")
```

You can scale out your deployment by adding replicas. The following command scales a deployment to three nodes:

```bash
kubectl scale --replicas 3 statefulset/$STS_NAME
```

Run the `show dnodes` and `show mnodes` commands to check whether the scale-out was successful.

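For example, the check can be run non-interactively with kubectl, reusing the `$POD_NAME` variable obtained from the chart output above:

```bash
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes; show mnodes"
```
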
## Scaling In

:::warning
Exercise caution when scaling in a cluster.

:::

Determine which dnodes you want to remove and drop them manually.

```bash
kubectl --namespace default exec $POD_NAME -- \
  cat /var/lib/taos/dnode/dnodeEps.json \
  | jq '.dnodeInfos[1:] |map(.dnodeFqdn + ":" + (.dnodePort|tostring)) | .[]' -r
kubectl --namespace default exec $POD_NAME -- taos -s "show dnodes"
kubectl --namespace default exec $POD_NAME -- taos -s 'drop dnode "<your dnode in list>"'
```

## Remove a TDengine Cluster

You can use Helm to remove your cluster:

```bash
helm uninstall tdengine
```

However, Helm does not remove PVCs automatically. After you remove your cluster, manually remove all PVCs.

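A sketch of that cleanup follows. The label selector is an assumption based on the pod labels used earlier on this page; verify the actual PVC names and labels with `kubectl get pvc` and adjust as needed:

```bash
# List the PVCs left behind by the chart, then delete them once you are sure the data is no longer needed
kubectl get pvc
kubectl delete pvc -l app.kubernetes.io/name=tdengine
```
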
@ -0,0 +1 @@
label: Deployment

@ -1,11 +1,10 @@
---
title: Deployment
---

TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.

This document describes how to manually deploy a cluster on a host as well as how to deploy on Kubernetes and by using Helm.

```mdx-code-block
import DocCardList from '@theme/DocCardList';
@ -1,16 +1,17 @@
---
sidebar_label: Data Types
title: Data Types
description: "TDengine supports a variety of data types including timestamp, float, JSON and many others."
---

## Timestamp

When using TDengine to store and query data, the most important part of the data is timestamp. Timestamp must be specified when creating and inserting data rows. Timestamp must follow the rules below:

- The format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
- Internal function `now` can be used to get the current timestamp on the client side
- The current timestamp of the client side is applied when `now` is used to insert data
- Epoch Time: timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, from UTC 1970-01-01 00:00:00.
- Add/subtract operations can be carried out on timestamps. For example `now-2h` means 2 hours prior to the time at which the query is executed. The units of time in operations can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operations.

Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`. The default time precision is millisecond. In the statement below, the precision is set to nanoseconds.

@ -18,52 +19,54 @@ Time precision in TDengine can be set by the `PRECISION` parameter when executin
```sql
CREATE DATABASE db_name PRECISION 'ns';
```

## Data Types

In TDengine, the data types below can be used when specifying a column or tag.

| #   | **type**          | **Bytes**    | **Description** |
| --- | :---------------: | ------------ | --------------- |
| 1   | TIMESTAMP         | 8            | Default precision is millisecond, microsecond and nanosecond are also supported |
| 2   | INT               | 4            | Integer, the value range is [-2^31, 2^31-1] |
| 3   | INT UNSIGNED      | 4            | Unsigned integer, the value range is [0, 2^32-1] |
| 4   | BIGINT            | 8            | Long integer, the value range is [-2^63, 2^63-1] |
| 5   | BIGINT UNSIGNED   | 8            | Unsigned long integer, the value range is [0, 2^64-1] |
| 6   | FLOAT             | 4            | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38] |
| 7   | DOUBLE            | 8            | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
| 8   | BINARY            | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. |
| 9   | SMALLINT          | 2            | Short integer, the value range is [-32768, 32767] |
| 10  | SMALLINT UNSIGNED | 2            | Unsigned short integer, the value range is [0, 65535] |
| 11  | TINYINT           | 1            | Single-byte integer, the value range is [-128, 127] |
| 12  | TINYINT UNSIGNED  | 1            | Unsigned single-byte integer, the value range is [0, 255] |
| 13  | BOOL              | 1            | Bool, the value range is {true, false} |
| 14  | NCHAR             | User Defined | Multi-byte string that can include multi byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
| 15  | JSON              |              | JSON type can only be used on tags. A tag of json type is excluded with any other tags of any other type |
| 16  | VARCHAR           | User-defined | Alias of BINARY |

:::note

- TDengine is case insensitive and treats any characters in the sql command as lower case by default, case sensitive strings must be quoted with single quotes.
- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
- The length of BINARY can be up to 16374 bytes. The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. The literal single quote inside the string must be preceded with back slash like `\'`
- Numeric values in SQL statements will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.

:::

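To make the table concrete, here is a small illustrative table definition that uses several of the types above. The table and column names are invented for this example; per the rules above, the first column must be the TIMESTAMP primary key, and BINARY/NCHAR columns must specify a length:

```sql
CREATE TABLE sensor_readings (
  ts       TIMESTAMP,
  val      DOUBLE,
  cnt      INT UNSIGNED,
  ok       BOOL,
  dev_id   BINARY(20),
  dev_name NCHAR(10)
);
```
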
## Constants

TDengine supports a variety of constants:

| # | **Syntax** | **Type** | **Description** |
| --- | :-------: | --------- | -------------------------------------- |
| 1 | [{+ \| -}]123 | BIGINT | Integer literals are of type BIGINT. Data that exceeds the length of the BIGINT type is truncated. |
| 2 | 123.45 | DOUBLE | Floating-point literals are of type DOUBLE. Numeric values will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used. |
| 3 | 1.2E3 | DOUBLE | Literals in scientific notation are of type DOUBLE. |
| 4 | 'abc' | BINARY | Content enclosed in single quotation marks is of type BINARY. The size of a BINARY is the size of the string in bytes. A literal single quote inside the string must be escaped with a backslash (\'). |
| 5 | "abc" | BINARY | Content enclosed in double quotation marks is of type BINARY. The size of a BINARY is the size of the string in bytes. A literal double quote inside the string must be escaped with a backslash (\"). |
| 6 | TIMESTAMP {'literal' \| "literal"} | TIMESTAMP | The TIMESTAMP keyword indicates that the following string literal is interpreted as a timestamp. The string must be in YYYY-MM-DD HH:mm:ss.MS format. The precision is inherited from the database configuration. |
| 7 | {TRUE \| FALSE} | BOOL | Boolean literals are of type BOOL. |
| 8 | {'' \| "" \| '\t' \| "\t" \| ' ' \| " " \| NULL } | -- | The preceding characters indicate null literals. These can be used with any data type. |

:::note
Numeric values will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.

:::

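As a quick illustration of the TIMESTAMP literal form in row 6 (the table name and the timestamp value are made up for this example):

```sql
SELECT * FROM t1 WHERE ts >= TIMESTAMP '2022-08-10 13:14:57.285';
```
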
@ -4,123 +4,153 @@ title: Database
description: "create and drop database, show or change database parameters"
---

## Create a Database

```sql
CREATE DATABASE [IF NOT EXISTS] db_name [database_options]

database_options:
    database_option ...

database_option: {
    BUFFER value
  | CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
  | CACHESIZE value
  | COMP {0 | 1 | 2}
  | DURATION value
  | WAL_FSYNC_PERIOD value
  | MAXROWS value
  | MINROWS value
  | KEEP value
  | PAGES value
  | PAGESIZE value
  | PRECISION {'ms' | 'us' | 'ns'}
  | REPLICA value
  | RETENTIONS ingestion_duration:keep_duration ...
  | STRICT {'off' | 'on'}
  | WAL_LEVEL {1 | 2}
  | VGROUPS value
  | SINGLE_STABLE {0 | 1}
  | WAL_RETENTION_PERIOD value
  | WAL_ROLL_PERIOD value
  | WAL_RETENTION_SIZE value
  | WAL_SEGMENT_SIZE value
}
```

## Parameters

- BUFFER: specifies the size (in MB) of the write buffer for each vnode. Enter a value between 3 and 16384. The default value is 96.
- CACHEMODEL: specifies how the latest data in subtables is stored in the cache. The default value is none.
  - none: The latest data is not cached.
  - last_row: The last row of each subtable is cached. This option significantly improves the performance of the LAST_ROW function.
  - last_value: The last non-null value of each column in each subtable is cached. This option significantly improves the performance of the LAST function under normal circumstances, such as statements including the WHERE, ORDER BY, GROUP BY, and INTERVAL keywords.
  - both: The last row of each subtable and the last non-null value of each column in each subtable are cached.
- CACHESIZE: specifies the amount (in MB) of memory used for subtable caching on each vnode. Enter a value between 1 and 65536. The default value is 1.
- COMP: specifies how databases are compressed. The default value is 2.
  - 0: Compression is disabled.
  - 1: One-pass compression is enabled.
  - 2: Two-pass compression is enabled.
- DURATION: specifies the time period contained in each data file. After the time specified by this parameter has elapsed, TDengine creates a new data file to store incoming data. You can use m (minutes), h (hours), and d (days) as the unit, for example DURATION 100h or DURATION 10d. If you do not include a unit, d is used by default.
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default.
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
- REPLICA: specifies the number of replicas that are made of the database. Enter 1 or 3. The default value is 1. The value of the REPLICA parameter cannot exceed the number of dnodes in the cluster.
- RETENTIONS: specifies the retention period for data aggregated at various intervals. For example, RETENTIONS 15s:7d,1m:21d,15m:50d indicates that data aggregated every 15 seconds is retained for 7 days, data aggregated every 1 minute is retained for 21 days, and data aggregated every 15 minutes is retained for 50 days. You must enter three aggregation intervals and corresponding retention periods.
- STRICT: specifies whether strong data consistency is enabled. The default value is off.
  - on: Strong consistency is enabled and implemented through the Raft consensus algorithm. In this mode, an operation is considered successful once it is confirmed by half of the nodes in the cluster.
  - off: Strong consistency is disabled. In this mode, an operation is considered successful when it is initiated by the local node.
- WAL_LEVEL: specifies whether fsync is enabled. The default value is 1.
  - 1: WAL is enabled but fsync is disabled.
  - 2: WAL and fsync are both enabled.
- VGROUPS: specifies the initial number of vgroups when a database is created.
- SINGLE_STABLE: specifies whether the database can contain more than one supertable.
  - 0: The database can contain multiple supertables.
  - 1: The database can contain only one supertable.
- WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted.
- WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted.
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk.
- WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. After the current WAL file reaches this size, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk.

### Example Statement

```sql
create database if not exists db vgroups 10 buffer 10
```

The preceding SQL statement creates a database named db that has 10 vgroups and whose vnodes have a 10 MB cache.

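A second sketch, purely illustrative, combines several of the retention-related options from the list above (the database name and values are invented for this example):

```sql
create database if not exists sensor_db duration 10d keep 3650 precision 'ms' replica 1 vgroups 4
```
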
### Specify the Database in Use

```
USE db_name;
```

The preceding SQL statement switches to the specified database. (If you connect to TDengine over the REST API, this statement does not take effect.)

## Drop a Database

```
DROP DATABASE [IF EXISTS] db_name
```

The preceding SQL statement deletes the specified database. This statement will delete all tables in the database and destroy all vgroups associated with it. Exercise caution when using this statement.

## Change Database Configuration

```sql
ALTER DATABASE db_name [alter_database_options]

alter_database_options:
    alter_database_option ...

alter_database_option: {
    CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
  | CACHESIZE value
  | WAL_LEVEL value
  | WAL_FSYNC_PERIOD value
  | KEEP value
}
```

:::note
Other parameters cannot be modified after the database has been created.

:::

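For illustration, two ALTER statements built from the options above (the 365-day KEEP value also appeared in earlier revisions of this page; both values here are only examples):

```sql
ALTER DATABASE db_name KEEP 365;
ALTER DATABASE db_name CACHEMODEL 'last_row';
```
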
## View Databases

### View All Databases

```
SHOW DATABASES;
```

### View the CREATE Statement for a Database

```
SHOW CREATE DATABASE db_name;
```

The preceding SQL statement can be used in migration scenarios. This command can be used to get the CREATE statement, which can be used in another TDengine instance to create the exact same database.

### View Database Configuration

```sql
SHOW DATABASES \G;
```

The preceding SQL statement shows the value of each parameter for the specified database. One value is displayed per line.

## Delete Expired Data

```sql
TRIM DATABASE db_name;
```

The preceding SQL statement deletes data that has expired and orders the remaining data in accordance with the storage configuration.

@ -1,122 +1,163 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Table
|
|
||||||
title: Table
|
title: Table
|
||||||
description: create super table, normal table and sub table, drop tables and change tables
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Create Table
|
## Create Table
|
||||||
|
|
||||||
```
|
You create standard tables and subtables with the `CREATE TABLE` statement.
|
||||||
CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]);
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...) [table_options]
|
||||||
|
|
||||||
|
CREATE TABLE create_subtable_clause
|
||||||
|
|
||||||
|
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...)
|
||||||
|
[TAGS (create_definition [, create_definitionn] ...)]
|
||||||
|
[table_options]
|
||||||
|
|
||||||
|
create_subtable_clause: {
|
||||||
|
create_subtable_clause [create_subtable_clause] ...
|
||||||
|
| [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
|
||||||
|
}
|
||||||
|
|
||||||
|
create_definition:
|
||||||
|
col_name column_definition
|
||||||
|
|
||||||
|
column_definition:
|
||||||
|
type_name [comment 'string_value']
|
||||||
|
|
||||||
|
table_options:
|
||||||
|
table_option ...
|
||||||
|
|
||||||
|
table_option: {
|
||||||
|
COMMENT 'string_value'
|
||||||
|
| WATERMARK duration[,duration]
|
||||||
|
| MAX_DELAY duration[,duration]
|
||||||
|
| ROLLUP(func_name [, func_name] ...)
|
||||||
|
| SMA(col_name [, col_name] ...)
|
||||||
|
| TTL value
|
||||||
|
}
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
:::info
|
**More explanations**
|
||||||
|
|
||||||
1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
|
1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
|
||||||
2. The maximum length of the table name is 192 bytes.
|
2. The maximum length of the table name is 192 bytes.
|
||||||
3. The maximum length of each row is 48k bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
|
3. The maximum length of each row is 48k bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
|
||||||
4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
|
4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
|
||||||
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
|
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
|
||||||
6. Escape character "\`" can be used to avoid the conflict between table names and reserved keywords, above rules will be bypassed when using escape character on table names, but the upper limit for the name length is still valid. The table names specified using escape character are case sensitive. Only ASCII visible characters can be used with escape character.
|
6. Escape character "\`" can be used to avoid the conflict between table names and reserved keywords, above rules will be bypassed when using escape character on table names, but the upper limit for the name length is still valid. The table names specified using escape character are case sensitive.
|
||||||
For example \`aBc\` and \`abc\` are different table names but `abc` and `aBc` are same table names because they are both converted to `abc` internally.
|
For example \`aBc\` and \`abc\` are different table names but `abc` and `aBc` are same table names because they are both converted to `abc` internally.
|
||||||
|
Only ASCII visible characters can be used with escape character.
|
||||||
|
|
||||||
:::
|
**Parameter description**
|
||||||
|
1. COMMENT: specifies comments for the table. This parameter can be used with supertables, standard tables, and subtables.
|
||||||
|
2. WATERMARK: specifies the time after which the window is closed. The default value is 5 seconds. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
|
||||||
|
3. MAX_DELAY: specifies the maximum latency for pushing computation results. The default value is 15 minutes or the value of the INTERVAL parameter, whichever is smaller. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
|
||||||
|
4. ROLLUP: specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database. You can specify only one function to roll up. The rollup takes effect on all columns except TS. Enter one of the following values: avg, sum, min, max, last, or first.
|
||||||
|
5. SMA: specifies functions on which to enable small materialized aggregates (SMA). SMA is user-defined precomputation of aggregates based on data blocks. Enter one of the following values: max, min, or sum. This parameter can be used with supertables and standard tables.
|
||||||
|
6. TTL: specifies the time to live (TTL) for the table. If TTL is specified when creating a table, TDengine automatically deletes the table once the TTL period has elapsed. Note that the system may not delete the table at the exact moment the TTL expires, but it guarantees that the table will eventually be deleted. The unit of TTL is days. The default value is 0, which means the table never expires. A combined example of these table options is shown after this list.
|
||||||
|
|
||||||
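The following is a minimal sketch of how several of these table options can be combined in a single statement; the table name, column definitions, COMMENT string, and TTL value are illustrative assumptions only.

```sql
CREATE TABLE IF NOT EXISTS d1001
    (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT)
    COMMENT 'smart meter 1001' TTL 90;
```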
### Create Subtable Using STable As Template
|
## Create Subtables
|
||||||
|
|
||||||
```
|
### Create a Subtable
|
||||||
|
|
||||||
|
```sql
|
||||||
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
|
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
|
||||||
```
|
```
|
||||||
|
|
||||||
The above command creates a subtable using the specified super table as a template and the specified tag values.
|
### Create a Subtable with Specified Tags
|
||||||
|
|
||||||
### Create Subtable Using STable As Template With A Subset of Tags
|
```sql
|
||||||
|
|
||||||
```
|
|
||||||
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
|
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
|
||||||
```
|
```
|
||||||
|
|
||||||
The tags for which no value is specified will be set to NULL.
|
The preceding SQL statement creates a subtable based on a supertable but specifies a subset of tags to use. Tags that are not included in this subset are assigned a null value.
|
||||||
|
|
||||||
### Create Tables in Batch
|
### Create Multiple Subtables
|
||||||
|
|
||||||
```
|
```sql
|
||||||
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
|
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
|
||||||
```
|
```
|
||||||
|
|
||||||
This can be used to create a lot of tables in a single SQL statement while making table creation much faster.
|
You can create multiple subtables in a single SQL statement provided that all subtables use the same supertable. For performance reasons, do not create more than 3000 tables per statement.
|
||||||
|
|
||||||
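As an illustrative sketch, the statement below creates two subtables at once; it assumes the meters supertable used elsewhere in this document, with location and groupId as its tags.

```sql
CREATE TABLE IF NOT EXISTS d2001 USING meters TAGS ('California.SanFrancisco', 2)
             IF NOT EXISTS d2002 USING meters TAGS ('California.LosAngeles', 3);
```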
:::info
|
## Modify a Table
|
||||||
|
|
||||||
- Creating tables in batch must use a super table as a template.
|
```sql
|
||||||
- The length of single statement is suggested to be between 1,000 and 3,000 bytes for best performance.
|
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||||
|
|
||||||
:::
|
alter_table_clause: {
|
||||||
|
alter_table_options
|
||||||
|
| ADD COLUMN col_name column_type
|
||||||
|
| DROP COLUMN col_name
|
||||||
|
| MODIFY COLUMN col_name column_type
|
||||||
|
| RENAME COLUMN old_col_name new_col_name
|
||||||
|
}
|
||||||
|
|
||||||
## Drop Tables
|
alter_table_options:
|
||||||
|
alter_table_option ...
|
||||||
|
|
||||||
|
alter_table_option: {
|
||||||
|
TTL value
|
||||||
|
| COMMENT 'string_value'
|
||||||
|
}
|
||||||
|
|
||||||
```
|
|
||||||
DROP TABLE [IF EXISTS] tb_name;
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## Show All Tables In Current Database
|
**More explanations**
|
||||||
|
You can perform the following modifications on existing tables:
|
||||||
|
1. ADD COLUMN: adds a column to the table.
|
||||||
|
2. DROP COLUMN: deletes a column from the table.
|
||||||
|
3. MODIFY COLUMN: changes the length of the data type specified for the column. Note that you can only specify a length greater than the current length.
|
||||||
|
4. RENAME COLUMN: renames a specified column in the table.
|
||||||
|
|
||||||
```
|
### Add a Column
|
||||||
SHOW TABLES [LIKE tb_name_wildcard];
|
|
||||||
```
|
|
||||||
|
|
||||||
## Show Create Statement of A Table
|
```sql
|
||||||
|
|
||||||
```
|
|
||||||
SHOW CREATE TABLE tb_name;
|
|
||||||
```
|
|
||||||
|
|
||||||
This is useful when migrating the data in one TDengine cluster to another one because it can be used to create the exact same tables in the target database.
|
|
||||||
|
|
||||||
## Show Table Definition
|
|
||||||
|
|
||||||
```
|
|
||||||
DESCRIBE tb_name;
|
|
||||||
```
|
|
||||||
|
|
||||||
## Change Table Definition
|
|
||||||
|
|
||||||
### Add A Column
|
|
||||||
|
|
||||||
```
|
|
||||||
ALTER TABLE tb_name ADD COLUMN field_name data_type;
|
ALTER TABLE tb_name ADD COLUMN field_name data_type;
|
||||||
```
|
```
|
||||||
|
|
||||||
:::info
|
### Delete a Column
|
||||||
|
|
||||||
1. The maximum number of columns is 4096, the minimum number of columns is 2.
|
```sql
|
||||||
2. The maximum length of a column name is 64 bytes.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
### Remove A Column
|
|
||||||
|
|
||||||
```
|
|
||||||
ALTER TABLE tb_name DROP COLUMN field_name;
|
ALTER TABLE tb_name DROP COLUMN field_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
:::note
|
### Modify the Data Length
|
||||||
If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.
|
|
||||||
|
|
||||||
:::
|
```sql
|
||||||
|
|
||||||
### Change Column Length
|
|
||||||
|
|
||||||
```
|
|
||||||
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
|
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
|
||||||
```
|
```
|
||||||
|
|
||||||
If the type of a column is variable length, like BINARY or NCHAR, this command can be used to change the length of the column.
|
### Rename a Column
|
||||||
|
|
||||||
:::note
|
```sql
|
||||||
If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.
|
ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
|
||||||
|
```
|
||||||
|
|
||||||
:::
|
## Modify a Subtable
|
||||||
|
|
||||||
|
```sql
|
||||||
|
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||||
|
|
||||||
|
alter_table_clause: {
|
||||||
|
alter_table_options
|
||||||
|
| SET TAG tag_name = new_tag_value
|
||||||
|
}
|
||||||
|
|
||||||
|
alter_table_options:
|
||||||
|
alter_table_option ...
|
||||||
|
|
||||||
|
alter_table_option: {
|
||||||
|
TTL value
|
||||||
|
| COMMENT 'string_value'
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
**More explanations**
|
||||||
|
1. Only the value of a tag can be modified directly. For all other modifications, you must modify the supertable from which the subtable was created.
|
||||||
|
|
||||||
### Change Tag Value Of Sub Table
|
### Change Tag Value Of Sub Table
|
||||||
|
|
||||||
|
@ -124,4 +165,34 @@ If a table is created using a super table as template, the table definition can
|
||||||
ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
|
ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
|
||||||
```
|
```
|
||||||
|
|
||||||
This command can be used to change the tag value if the table is created using a super table as template.
|
## Delete a Table
|
||||||
|
|
||||||
|
The following SQL statement deletes one or more tables.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
|
||||||
|
```
|
||||||
|
|
||||||
|
## View Tables
|
||||||
|
|
||||||
|
### View All Tables
|
||||||
|
|
||||||
|
The following SQL statement shows all tables in the current database.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SHOW TABLES [LIKE tb_name_wildchar];
|
||||||
|
```
|
||||||
|
|
||||||
|
### View the CREATE Statement for a Table
|
||||||
|
|
||||||
|
```
|
||||||
|
SHOW CREATE TABLE tb_name;
|
||||||
|
```
|
||||||
|
|
||||||
|
This command is useful in migrating data from one TDengine cluster to another because it can be used to create the exact same tables in the target database.
|
||||||
|
|
||||||
|
## View the Table Schema
|
||||||
|
|
||||||
|
```
|
||||||
|
DESCRIBE [db_name.]tb_name;
|
||||||
|
```
|
|
@ -1,118 +1,159 @@
|
||||||
---
|
---
|
||||||
sidebar_label: STable
|
sidebar_label: Supertable
|
||||||
title: Super Table
|
title: Supertable
|
||||||
---
|
---
|
||||||
|
|
||||||
:::note
|
## Create a Supertable
|
||||||
|
|
||||||
Keyword `STable`, abbreviated for super table, is supported since version 2.0.15.
|
```sql
|
||||||
|
CREATE STABLE [IF NOT EXISTS] stb_name (create_definition [, create_definition] ...) TAGS (create_definition [, create_definition] ...) [table_options]
|
||||||
|
|
||||||
:::
|
create_definition:
|
||||||
|
col_name column_definition
|
||||||
|
|
||||||
## Create STable
|
column_definition:
|
||||||
|
type_name [COMMENT 'string_value']
|
||||||
```
|
|
||||||
CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
|
|
||||||
```
|
```
|
||||||
|
|
||||||
The SQL statement of creating a STable is similar to that of creating a table, but a special column set named `TAGS` must be specified with the names and types of the tags.
|
**More explanations**
|
||||||
|
- Each supertable can have a maximum of 4096 columns, including tags. The minimum number of columns is 3: a timestamp column used as the key, one tag column, and one data column.
|
||||||
|
- When you create a supertable, you can add comments to columns and tags.
|
||||||
|
- The TAGS keyword defines the tag columns for the supertable. The following restrictions apply to tag columns:
|
||||||
|
- A tag column can use the TIMESTAMP data type, but the values in the column must be fixed numbers. Timestamps including formulae, such as "now + 10s", cannot be stored in a tag column.
|
||||||
|
- The name of a tag column cannot be the same as the name of any other column.
|
||||||
|
- The name of a tag column cannot be a reserved keyword.
|
||||||
|
- Each supertable must contain between 1 and 128 tags. The total length of the TAGS keyword cannot exceed 16 KB.
|
||||||
|
- For more information about table parameters, see Create a Table.
|
||||||
|
|
||||||
:::info
|
## View a Supertable
|
||||||
|
|
||||||
1. A tag can be of type timestamp, since version 2.1.3.0, but its value must be fixed and arithmetic operations cannot be performed on it. Prior to version 2.1.3.0, tag types specified in TAGS could not be of type timestamp.
|
### View All Supertables
|
||||||
2. The tag names specified in TAGS should NOT be the same as other columns.
|
|
||||||
3. The tag names specified in TAGS should NOT be the same as any reserved keywords.(Please refer to [keywords](/taos-sql/keywords/)
|
|
||||||
4. The maximum number of tags specified in TAGS is 128, there must be at least one tag, and the total length of all tag columns should NOT exceed 16KB.
|
|
||||||
|
|
||||||
:::
|
```
|
||||||
|
SHOW STABLES [LIKE tb_name_wildcard];
|
||||||
|
```
|
||||||
|
|
||||||
|
The preceding SQL statement shows all supertables in the current TDengine database, including the name, creation time, number of columns, number of tags, and number of subtables for each supertable.
|
||||||
|
|
||||||
|
### View the CREATE Statement for a Supertable
|
||||||
|
|
||||||
|
```
|
||||||
|
SHOW CREATE STABLE stb_name;
|
||||||
|
```
|
||||||
|
|
||||||
|
The preceding SQL statement can be used in migration scenarios. It returns the CREATE statement that was used to create the specified supertable. You can then use the returned statement to create an identical supertable on another TDengine database.
|
||||||
|
|
||||||
|
## View the Supertable Schema
|
||||||
|
|
||||||
|
```
|
||||||
|
DESCRIBE [db_name.]stb_name;
|
||||||
|
```
|
||||||
|
|
||||||
## Drop STable
|
## Drop STable
|
||||||
|
|
||||||
```
|
```
|
||||||
DROP STable [IF EXISTS] stb_name;
|
DROP STABLE [IF EXISTS] [db_name.]stb_name
|
||||||
```
|
```
|
||||||
|
|
||||||
All the subtables created using the deleted STable will be deleted automatically.
|
Note: Deleting a supertable will delete all subtables created from the supertable, including all data within those subtables.
|
||||||
|
|
||||||
## Show All STables
|
## Modify a Supertable
|
||||||
|
|
||||||
|
```sql
|
||||||
|
ALTER STABLE [db_name.]tb_name alter_table_clause
|
||||||
|
|
||||||
|
alter_table_clause: {
|
||||||
|
alter_table_options
|
||||||
|
| ADD COLUMN col_name column_type
|
||||||
|
| DROP COLUMN col_name
|
||||||
|
| MODIFY COLUMN col_name column_type
|
||||||
|
| ADD TAG tag_name tag_type
|
||||||
|
| DROP TAG tag_name
|
||||||
|
| MODIFY TAG tag_name tag_type
|
||||||
|
| RENAME TAG old_tag_name new_tag_name
|
||||||
|
}
|
||||||
|
|
||||||
|
alter_table_options:
|
||||||
|
alter_table_option ...
|
||||||
|
|
||||||
|
alter_table_option: {
|
||||||
|
COMMENT 'string_value'
|
||||||
|
}
|
||||||
|
|
||||||
```
|
|
||||||
SHOW STableS [LIKE tb_name_wildcard];
|
|
||||||
```
|
```
|
||||||
|
|
||||||
This command can be used to display the information of all STables in the current database, including name, creation time, number of columns, number of tags, and number of tables created using this STable.
|
**More explanations**
|
||||||
|
|
||||||
## Show The Create Statement of A STable
|
Modifications to the table schema of a supertable take effect on all subtables within the supertable. You cannot modify the table schema of subtables individually. When you modify the tag schema of a supertable, the modifications automatically take effect on all of its subtables.
|
||||||
|
|
||||||
|
- ADD COLUMN: adds a column to the supertable.
|
||||||
|
- DROP COLUMN: deletes a column from the supertable.
|
||||||
|
- MODIFY COLUMN: changes the length of a BINARY or NCHAR column. Note that you can only specify a length greater than the current length.
|
||||||
|
- ADD TAG: adds a tag to the supertable.
|
||||||
|
- DROP TAG: deletes a tag from the supertable. When you delete a tag from a supertable, it is automatically deleted from all subtables within the supertable.
|
||||||
|
- MODIFY TAG: modifies the definition of a tag in the supertable. You can use this keyword to change the length of a BINARY or NCHAR tag column. Note that you can only specify a length greater than the current length.
|
||||||
|
- RENAME TAG: renames a specified tag in the supertable. When you rename a tag in a supertable, it is automatically renamed in all subtables within the supertable.
|
||||||
|
|
||||||
|
### Add a Column
|
||||||
|
|
||||||
```
|
```
|
||||||
SHOW CREATE STable stb_name;
|
ALTER STABLE stb_name ADD COLUMN col_name column_type;
|
||||||
```
|
```
|
||||||
|
|
||||||
This command is useful in migrating data from one TDengine cluster to another because it can be used to create the exact same STable in the target database.
|
### Delete a Column
|
||||||
|
|
||||||
## Get STable Definition
|
|
||||||
|
|
||||||
```
|
```
|
||||||
DESCRIBE stb_name;
|
ALTER STABLE stb_name DROP COLUMN col_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
## Change Columns Of STable
|
### Modify the Data Length
|
||||||
|
|
||||||
### Add A Column
|
|
||||||
|
|
||||||
```
|
```
|
||||||
ALTER STable stb_name ADD COLUMN field_name data_type;
|
ALTER STABLE stb_name MODIFY COLUMN col_name data_type(length);
|
||||||
```
|
```
|
||||||
|
|
||||||
### Remove A Column
|
The preceding SQL statement changes the length of a BINARY or NCHAR data column. Note that you can only specify a length greater than the current length.
|
||||||
|
|
||||||
```
|
|
||||||
ALTER STable stb_name DROP COLUMN field_name;
|
|
||||||
```
|
|
||||||
|
|
||||||
### Change Column Length
|
|
||||||
|
|
||||||
```
|
|
||||||
ALTER STable stb_name MODIFY COLUMN field_name data_type(length);
|
|
||||||
```
|
|
||||||
|
|
||||||
This command can be used to change (or more specifically, increase) the length of a column of variable length types, like BINARY or NCHAR.
|
|
||||||
|
|
||||||
## Change Tags of A STable
|
|
||||||
|
|
||||||
### Add A Tag
|
### Add A Tag
|
||||||
|
|
||||||
```
|
```
|
||||||
ALTER STable stb_name ADD TAG new_tag_name tag_type;
|
ALTER STABLE stb_name ADD TAG tag_name tag_type;
|
||||||
```
|
```
|
||||||
|
|
||||||
This command is used to add a new tag for a STable and specify the tag type.
|
The preceding SQL statement adds a tag of the specified type to the supertable. A supertable cannot contain more than 128 tags. The total length of all tags in a supertable cannot exceed 16 KB.
|
||||||
|
|
||||||
### Remove A Tag
|
### Remove A Tag
|
||||||
|
|
||||||
```
|
```
|
||||||
ALTER STable stb_name DROP TAG tag_name;
|
ALTER STABLE stb_name DROP TAG tag_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
The tag will be removed automatically from all the subtables, created using the super table as template, once a tag is removed from a super table.
|
The preceding SQL statement deletes a tag from the supertable. When you delete a tag from a supertable, it is automatically deleted from all subtables within the supertable.
|
||||||
|
|
||||||
### Change A Tag
|
### Change A Tag
|
||||||
|
|
||||||
```
|
```
|
||||||
ALTER STable stb_name CHANGE TAG old_tag_name new_tag_name;
|
ALTER STABLE stb_name RENAME TAG old_tag_name new_tag_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
The tag name will be changed automatically for all the subtables, created using the super table as template, once a tag name is changed for a super table.
|
The preceding SQL statement renames a tag in the supertable. When you rename a tag in a supertable, it is automatically renamed in all subtables within the supertable.
|
||||||
|
|
||||||
### Change Tag Length
|
### Change Tag Length
|
||||||
|
|
||||||
```
|
```
|
||||||
ALTER STable stb_name MODIFY TAG tag_name data_type(length);
|
ALTER STABLE stb_name MODIFY TAG tag_name data_type(length);
|
||||||
```
|
```
|
||||||
|
|
||||||
This command can be used to change (or more specifically, increase) the length of a tag of variable length types, like BINARY or NCHAR.
|
The preceding SQL statement changes the length of a BINARY or NCHAR tag column. Note that you can only specify a length greater than the current length. (Available in 2.1.3.0 and later versions)
|
||||||
|
|
||||||
|
### Query a Supertable
|
||||||
|
You can run projection and aggregate SELECT queries on supertables, and you can filter by tag or column by using the WHERE keyword.
|
||||||
|
|
||||||
|
If you do not include an ORDER BY clause, results are returned by subtable. These results are not ordered. You can include an ORDER BY clause in your query to strictly order the results.
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
Changing tag values can be applied to only subtables. All other tag operations, like add tag, remove tag, however, can be applied to only STable. If a new tag is added for a STable, the tag will be added with NULL value for all its subtables.
|
All tag operations except for updating the value of a tag must be performed on the supertable and not on individual subtables. If you add a tag to an existing supertable, the tag is automatically added with a null value to all subtables within the supertable.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -1,4 +1,5 @@
|
||||||
---
|
---
|
||||||
|
sidebar_label: Insert
|
||||||
title: Insert
|
title: Insert
|
||||||
---
|
---
|
||||||
|
|
||||||
|
@ -17,47 +18,62 @@ INSERT INTO
|
||||||
...];
|
...];
|
||||||
```
|
```
|
||||||
|
|
||||||
## Insert Single or Multiple Rows
|
**Timestamps**
|
||||||
|
|
||||||
Single row or multiple rows specified with VALUES can be inserted into a specific table. For example:
|
1. All data writes must include a timestamp. With regard to timestamps, note the following:
|
||||||
|
|
||||||
A single row is inserted using the below statement.
|
2. The precision of a timestamp depends on its format. The precision configured for the database affects only timestamps that are inserted as long integers (UNIX time). Timestamps inserted as date and time strings are not affected. As an example, the timestamp 2021-07-13 16:16:48 is equivalent to 1626164208 in UNIX time. This UNIX time is modified to 1626164208000 for databases with millisecond precision, 1626164208000000 for databases with microsecond precision, and 1626164208000000000 for databases with nanosecond precision.
|
||||||
|
|
||||||
```sql
|
3. If you want to insert multiple rows simultaneously, do not use the NOW function in the timestamp. Using the NOW function in this situation will cause multiple rows to have the same timestamp and prevent them from being stored correctly. This is because the NOW function obtains the current time on the client, and multiple instances of NOW in a single statement will return the same time.
|
||||||
|
The earliest timestamp that you can use when inserting data is equal to the current time on the server minus the value of the KEEP parameter. The latest timestamp that you can use when inserting data is equal to the current time on the server plus the value of the DURATION parameter. You can configure the KEEP and DURATION parameters when you create a database. The default values are 3650 days for the KEEP parameter and 10 days for the DURATION parameter.
|
||||||
|
|
||||||
|
**Syntax**
|
||||||
|
|
||||||
|
1. The USING clause automatically creates the specified subtable if it does not exist. If it is unknown whether the table already exists, the table can be created automatically while inserting by using this clause. To use this functionality, a supertable must be used as a template and tag values must be provided. Any tags that you do not specify will be assigned a null value.
|
||||||
|
|
||||||
|
2. You can insert data into specified columns. Any columns in which you do not insert data will be assigned a null value.
|
||||||
|
|
||||||
|
3. The VALUES clause inserts one or more rows of data into a table.
|
||||||
|
|
||||||
|
4. The FILE clause inserts tags or data from a comma-separated values (CSV) file. Do not include headers in your CSV files.
|
||||||
|
|
||||||
|
5. A single INSERT statement can write data to multiple tables.
|
||||||
|
|
||||||
|
6. The INSERT statement is fully parsed before being executed, so that if any element of the statement fails, the entire statement will fail. For example, the following statement will not create a table because the latter part of the statement is invalid:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
INSERT INTO d1001 USING meters TAGS('Beijing.Chaoyang', 2) VALUES('a');
|
||||||
|
```
|
||||||
|
|
||||||
|
7. However, an INSERT statement that writes data to multiple subtables can succeed for some tables and fail for others. This is because vnodes perform write operations independently of each other. One vnode failing to write data does not affect the ability of other vnodes to write successfully.
|
||||||
|
|
||||||
|
## Insert a Record
|
||||||
|
|
||||||
|
Single row or multiple rows specified with VALUES can be inserted into a specific table. A single row is inserted using the below statement.
|
||||||
|
|
||||||
|
```sql
|
||||||
INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
|
INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## Insert Multiple Records
|
||||||
|
|
||||||
Two rows are inserted using the statement below.
|
Two rows are inserted using the statement below.
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (1626164208000, 10.15, 217, 0.33);
|
INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (1626164208000, 10.15, 217, 0.33);
|
||||||
```
|
```
|
||||||
|
|
||||||
:::note
|
## Write to a Specified Column
|
||||||
|
|
||||||
1. In the second example above, different formats are used in the two rows to be inserted. In the first row, the timestamp format is a date and time string, which is interpreted from the string value only. In the second row, the timestamp format is a long integer, which will be interpreted based on the database time precision.
|
Data can be inserted into specific columns, either for a single row or for multiple rows, while the other columns will be set to NULL. The primary key (timestamp) cannot be null. For example:
|
||||||
2. When trying to insert multiple rows in a single statement, only the timestamp of one row can be set as NOW, otherwise there will be duplicate timestamps among the rows and the result may be out of expectation because NOW will be interpreted as the time when the statement is executed.
|
|
||||||
3. The oldest timestamp that is allowed is subtracting the KEEP parameter from current time.
|
|
||||||
4. The newest timestamp that is allowed is adding the DAYS parameter to current time.
|
|
||||||
|
|
||||||
:::
|
```sql
|
||||||
|
|
||||||
## Insert Into Specific Columns
|
|
||||||
|
|
||||||
Data can be inserted into specific columns, either single row or multiple row, while other columns will be inserted as NULL value.
|
|
||||||
|
|
||||||
```
|
|
||||||
INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27, 0.31);
|
INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27, 0.31);
|
||||||
```
|
```
|
||||||
|
|
||||||
:::info
|
|
||||||
If no columns are explicitly specified, all the columns must be provided with values, this is called "all column mode". The insert performance of all column mode is much better than specifying a subset of columns, so it's encouraged to use "all column mode" while providing NULL value explicitly for the columns for which no actual value can be provided.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
## Insert Into Multiple Tables
|
## Insert Into Multiple Tables
|
||||||
|
|
||||||
One or multiple rows can be inserted into multiple tables in a single SQL statement, with or without specifying specific columns.
|
One or multiple rows can be inserted into multiple tables in a single SQL statement, with or without specifying specific columns. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
|
INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
|
||||||
|
@ -66,19 +82,19 @@ INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-
|
||||||
|
|
||||||
## Automatically Create Table When Inserting
|
## Automatically Create Table When Inserting
|
||||||
|
|
||||||
If it's unknown whether the table already exists, the table can be created automatically while inserting using the SQL statement below. To use this functionality, a STable must be used as template and tag values must be provided.
|
If it is unknown whether the table already exists, the table can be created automatically while inserting, as shown in the SQL statement below. To use this functionality, a supertable must be used as a template and tag values must be provided. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
|
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
|
||||||
```
|
```
|
||||||
|
|
||||||
It's not necessary to provide values for all tags when creating tables automatically, the tags without values provided will be set to NULL.
|
It is not necessary to provide values for all tags when creating tables automatically; any tags for which no value is provided will be set to NULL. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
|
INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
|
||||||
```
|
```
|
||||||
|
|
||||||
Multiple rows can also be inserted into the same table in a single SQL statement.
|
Multiple rows can also be inserted into the same table in a single SQL statement. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
|
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
|
||||||
|
@ -86,10 +102,6 @@ INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('202
|
||||||
d21003 USING meters (groupId) TAGS (2) (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
|
d21003 USING meters (groupId) TAGS (2) (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
|
||||||
```
|
```
|
||||||
|
|
||||||
:::info
|
|
||||||
Prior to version 2.0.20.5, when using `INSERT` to create tables automatically and specifying the columns, the column names must follow the table name immediately. From version 2.0.20.5, the column names can follow the table name immediately, also can be put between `TAGS` and `VALUES`. In the same SQL statement, however, these two ways of specifying column names can't be mixed.
|
|
||||||
:::
|
|
||||||
|
|
||||||
## Insert Rows From A File
|
## Insert Rows From A File
|
||||||
|
|
||||||
Besides using `VALUES` to insert one or multiple rows, the data to be inserted can also be prepared in a CSV file with comma as separator and each field value quoted by single quotes. Table definition is not required in the CSV file. For example, if file "/tmp/csvfile.csv" contains the below data:
|
Besides using `VALUES` to insert one or multiple rows, the data to be inserted can also be prepared in a CSV file with comma as separator and each field value quoted by single quotes. Table definition is not required in the CSV file. For example, if file "/tmp/csvfile.csv" contains the below data:
|
||||||
|
@ -107,58 +119,13 @@ INSERT INTO d1001 FILE '/tmp/csvfile.csv';
|
||||||
|
|
||||||
## Create Tables Automatically and Insert Rows From File
|
## Create Tables Automatically and Insert Rows From File
|
||||||
|
|
||||||
From version 2.1.5.0, tables can be automatically created using a super table as template when inserting data from a CSV file, like below:
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile.csv';
|
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile.csv';
|
||||||
```
|
```
|
||||||
|
|
||||||
Multiple tables can be automatically created and inserted in a single SQL statement, like below:
|
When writing data from a file, you can automatically create the specified subtable if it does not exist. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
|
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
|
||||||
d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
|
d21002 USING meters (groupId) TAGS (2) FILE '/tmp/csvfile_21002.csv';
|
||||||
```
|
```
|
||||||
|
|
||||||
## More About Insert
|
|
||||||
|
|
||||||
For SQL statement like `insert`, a stream parsing strategy is applied. That means before an error is found and the execution is aborted, the part prior to the error point has already been executed. Below is an experiment to help understand the behavior.
|
|
||||||
|
|
||||||
First, a super table is created.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE TABLE meters(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS(location BINARY(30), groupId INT);
|
|
||||||
```
|
|
||||||
|
|
||||||
It can be proven that the super table has been created by `SHOW STableS`, but no table exists using `SHOW TABLES`.
|
|
||||||
|
|
||||||
```
|
|
||||||
taos> SHOW STableS;
|
|
||||||
name | created_time | columns | tags | tables |
|
|
||||||
============================================================================================
|
|
||||||
meters | 2020-08-06 17:50:27.831 | 4 | 2 | 0 |
|
|
||||||
Query OK, 1 row(s) in set (0.001029s)
|
|
||||||
|
|
||||||
taos> SHOW TABLES;
|
|
||||||
Query OK, 0 row(s) in set (0.000946s)
|
|
||||||
```
|
|
||||||
|
|
||||||
Then, try to create table d1001 automatically when inserting data into it.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('a');
|
|
||||||
```
|
|
||||||
|
|
||||||
The output shows the value to be inserted is invalid. But `SHOW TABLES` proves that the table has been created automatically by the `INSERT` statement.
|
|
||||||
|
|
||||||
```
|
|
||||||
DB error: invalid SQL: 'a' (invalid timestamp) (0.039494s)
|
|
||||||
|
|
||||||
taos> SHOW TABLES;
|
|
||||||
table_name | created_time | columns | STable_name |
|
|
||||||
======================================================================================================
|
|
||||||
d1001 | 2020-08-06 17:52:02.097 | 4 | meters |
|
|
||||||
Query OK, 1 row(s) in set (0.001091s)
|
|
||||||
```
|
|
||||||
|
|
||||||
From the above experiment, we can see that while the value to be inserted is invalid the table is still created.
|
|
||||||
|
|
|
@ -1,118 +1,124 @@
|
||||||
---
|
---
|
||||||
|
sidebar_label: Select
|
||||||
title: Select
|
title: Select
|
||||||
---
|
---
|
||||||
|
|
||||||
## Syntax
|
## Syntax
|
||||||
|
|
||||||
```SQL
|
```sql
|
||||||
SELECT select_expr [, select_expr ...]
|
SELECT {DATABASE() | CLIENT_VERSION() | SERVER_VERSION() | SERVER_STATUS() | NOW() | TODAY() | TIMEZONE()}
|
||||||
FROM {tb_name_list}
|
|
||||||
[WHERE where_condition]
|
SELECT [DISTINCT] select_list
|
||||||
[SESSION(ts_col, tol_val)]
|
from_clause
|
||||||
[STATE_WINDOW(col)]
|
[WHERE condition]
|
||||||
[INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
|
[PARTITION BY tag_list]
|
||||||
[FILL(fill_mod_and_val)]
|
[window_clause]
|
||||||
[GROUP BY col_list]
|
[group_by_clause]
|
||||||
[ORDER BY col_list { DESC | ASC }]
|
[order_by_clause]
|
||||||
[SLIMIT limit_val [SOFFSET offset_val]]
|
[SLIMIT limit_val [SOFFSET offset_val]]
|
||||||
[LIMIT limit_val [OFFSET offset_val]]
|
[LIMIT limit_val [OFFSET offset_val]]
|
||||||
[>> export_file];
|
[>> export_file]
|
||||||
|
|
||||||
|
select_list:
|
||||||
|
select_expr [, select_expr] ...
|
||||||
|
|
||||||
|
select_expr: {
|
||||||
|
*
|
||||||
|
| query_name.*
|
||||||
|
| [schema_name.] {table_name | view_name} .*
|
||||||
|
| t_alias.*
|
||||||
|
| expr [[AS] c_alias]
|
||||||
|
}
|
||||||
|
|
||||||
|
from_clause: {
|
||||||
|
table_reference [, table_reference] ...
|
||||||
|
| join_clause [, join_clause] ...
|
||||||
|
}
|
||||||
|
|
||||||
|
table_reference:
|
||||||
|
table_expr t_alias
|
||||||
|
|
||||||
|
table_expr: {
|
||||||
|
table_name
|
||||||
|
| view_name
|
||||||
|
| ( subquery )
|
||||||
|
}
|
||||||
|
|
||||||
|
join_clause:
|
||||||
|
table_reference [INNER] JOIN table_reference ON condition
|
||||||
|
|
||||||
|
window_clause: {
|
||||||
|
SESSION(ts_col, tol_val)
|
||||||
|
| STATE_WINDOW(col)
|
||||||
|
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
|
||||||
|
|
||||||
|
changes_option: {
|
||||||
|
DURATION duration_val
|
||||||
|
| ROWS rows_val
|
||||||
|
}
|
||||||
|
|
||||||
|
group_by_clause:
|
||||||
|
GROUP BY expr [, expr] ... HAVING condition
|
||||||
|
|
||||||
|
order_by_clause:
|
||||||
|
ORDER BY order_expr [, order_expr] ...
|
||||||
|
|
||||||
|
order_expr:
|
||||||
|
{expr | position | c_alias} [DESC | ASC] [NULLS FIRST | NULLS LAST]
|
||||||
```
|
```
|
||||||
|
|
||||||
## Wildcard
|
## Lists
|
||||||
|
|
||||||
Wildcard \* can be used to specify all columns. The result includes only data columns for normal tables.
|
A query can be performed on some or all columns. Data and tag columns can all be included in the SELECT list.
|
||||||
|
|
||||||
```
|
## Wildcards
|
||||||
taos> SELECT * FROM d1001;
|
|
||||||
ts | current | voltage | phase |
|
You can use an asterisk (\*) as a wildcard character to indicate all columns. For standard tables, the asterisk indicates only data columns. For supertables and subtables, tag columns are also included.
|
||||||
======================================================================================
|
|
||||||
2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
|
```sql
|
||||||
2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 |
|
SELECT * FROM d1001;
|
||||||
2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 |
|
|
||||||
Query OK, 3 row(s) in set (0.001165s)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
The result includes both data columns and tag columns for super table.
|
You can use a table name as a prefix before an asterisk. For example, the following SQL statements both return all columns from the d1001 table:
|
||||||
|
|
||||||
```
|
```sql
|
||||||
taos> SELECT * FROM meters;
|
|
||||||
ts | current | voltage | phase | location | groupid |
|
|
||||||
=====================================================================================================================================
|
|
||||||
2018-10-03 14:38:05.500 | 11.80000 | 221 | 0.28000 | California.LoSangeles | 2 |
|
|
||||||
2018-10-03 14:38:16.600 | 13.40000 | 223 | 0.29000 | California.LoSangeles | 2 |
|
|
||||||
2018-10-03 14:38:05.000 | 10.80000 | 223 | 0.29000 | California.LoSangeles | 3 |
|
|
||||||
2018-10-03 14:38:06.500 | 11.50000 | 221 | 0.35000 | California.LoSangeles | 3 |
|
|
||||||
2018-10-03 14:38:04.000 | 10.20000 | 220 | 0.23000 | California.SanFrancisco | 3 |
|
|
||||||
2018-10-03 14:38:16.650 | 10.30000 | 218 | 0.25000 | California.SanFrancisco | 3 |
|
|
||||||
2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 | California.SanFrancisco | 2 |
|
|
||||||
2018-10-03 14:38:15.000 | 12.60000 | 218 | 0.33000 | California.SanFrancisco | 2 |
|
|
||||||
2018-10-03 14:38:16.800 | 12.30000 | 221 | 0.31000 | California.SanFrancisco | 2 |
|
|
||||||
Query OK, 9 row(s) in set (0.002022s)
|
|
||||||
```
|
|
||||||
|
|
||||||
Wildcard can be used with table name as prefix. Both SQL statements below have the same effect and return all columns.
|
|
||||||
|
|
||||||
```SQL
|
|
||||||
SELECT * FROM d1001;
|
SELECT * FROM d1001;
|
||||||
SELECT d1001.* FROM d1001;
|
SELECT d1001.* FROM d1001;
|
||||||
```
|
```
|
||||||
|
|
||||||
In a JOIN query, however, the results are different with or without a table name prefix. \* without table prefix will return all the columns of both tables, but \* with table name as prefix will return only the columns of that table.
|
However, in a JOIN query, using a table name prefix with an asterisk returns different results. In this case, querying * returns all data in all columns in all tables (not including tags), whereas using a table name prefix returns all data in all columns in the specified table only.
|
||||||
|
|
||||||
```
|
```sql
|
||||||
taos> SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
|
SELECT * FROM d1001, d1003 WHERE d1001.ts=d1003.ts;
|
||||||
ts | current | voltage | phase | ts | current | voltage | phase |
|
SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
|
||||||
==================================================================================================================================
|
|
||||||
2018-10-03 14:38:05.000 | 10.30000| 219 | 0.31000 | 2018-10-03 14:38:05.000 | 10.80000| 223 | 0.29000 |
|
|
||||||
Query OK, 1 row(s) in set (0.017385s)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
```
|
The first of the preceding SQL statements returns all columns from the d1001 and d1003 tables, but the second of the preceding SQL statements returns all columns from the d1001 table only.
|
||||||
taos> SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
|
|
||||||
ts | current | voltage | phase |
|
With regard to the other SQL functions that support wildcards, the differences are as follows:
|
||||||
======================================================================================
|
`count(*)` only returns one column. `first`, `last`, and `last_row` return all columns.
|
||||||
2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
|
|
||||||
Query OK, 1 row(s) in set (0.020443s)
|
### Tag Columns
|
||||||
|
|
||||||
|
You can query tag columns in supertables and subtables and receive results in the same way as querying data columns.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT location, groupid, current FROM d1001 LIMIT 2;
|
||||||
```
|
```
|
||||||
|
|
||||||
Wildcard \* can be used with some functions, but the result may be different depending on the function being used. For example, `count(*)` returns only one column, i.e. the number of rows; `first`, `last` and `last_row` return all columns of the selected row.
|
### Distinct Values
|
||||||
|
|
||||||
```
|
The DISTINCT keyword returns only values that are different over one or more columns. You can use the DISTINCT keyword with tag columns and data columns.
|
||||||
taos> SELECT COUNT(*) FROM d1001;
|
|
||||||
count(*) |
|
|
||||||
========================
|
|
||||||
3 |
|
|
||||||
Query OK, 1 row(s) in set (0.001035s)
|
|
||||||
```
|
|
||||||
|
|
||||||
```
|
The following SQL statement returns distinct values from a tag column:
|
||||||
taos> SELECT FIRST(*) FROM d1001;
|
|
||||||
first(ts) | first(current) | first(voltage) | first(phase) |
|
|
||||||
=========================================================================================
|
|
||||||
2018-10-03 14:38:05.000 | 10.30000 | 219 | 0.31000 |
|
|
||||||
Query OK, 1 row(s) in set (0.000849s)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Tags
|
|
||||||
|
|
||||||
Starting from version 2.0.14, tag columns can be selected together with data columns when querying sub tables. Please note however, that, wildcard \* cannot be used to represent any tag column. This means that tag columns must be specified explicitly like the example below.
|
|
||||||
|
|
||||||
```
|
|
||||||
taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
|
|
||||||
location | groupid | current |
|
|
||||||
======================================================================
|
|
||||||
California.SanFrancisco | 2 | 10.30000 |
|
|
||||||
California.SanFrancisco | 2 | 12.60000 |
|
|
||||||
Query OK, 2 row(s) in set (0.003112s)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Get distinct values
|
|
||||||
|
|
||||||
`DISTINCT` keyword can be used to get all the unique values of tag columns from a super table. It can also be used to get all the unique values of data columns from a table or subtable.
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
|
SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
|
||||||
|
```
|
||||||
|
|
||||||
|
The following SQL statement returns distinct values from a data column:
|
||||||
|
|
||||||
|
```sql
|
||||||
SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
|
SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -124,231 +130,188 @@ SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## Columns Names of Result Set
|
### Column Names
|
||||||
|
|
||||||
When using `SELECT`, the column names in the result set will be the same as that in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example
|
When using `SELECT`, the column names in the result set will be the same as those in the select clause if `AS` is not used. `AS` can be used to rename the column names in the result set. For example:
|
||||||
|
|
||||||
```
|
```sql
|
||||||
taos> SELECT ts, ts AS primary_key_ts FROM d1001;
|
taos> SELECT ts, ts AS primary_key_ts FROM d1001;
|
||||||
ts | primary_key_ts |
|
|
||||||
====================================================
|
|
||||||
2018-10-03 14:38:05.000 | 2018-10-03 14:38:05.000 |
|
|
||||||
2018-10-03 14:38:15.000 | 2018-10-03 14:38:15.000 |
|
|
||||||
2018-10-03 14:38:16.800 | 2018-10-03 14:38:16.800 |
|
|
||||||
Query OK, 3 row(s) in set (0.001191s)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
`AS` can't be used together with `first(*)`, `last(*)`, or `last_row(*)`.
|
`AS` can't be used together with `first(*)`, `last(*)`, or `last_row(*)`.
|
||||||
|
|
||||||
## Implicit Columns
|
### Pseudocolumns
|
||||||
|
|
||||||
`Select_exprs` can be column names of a table, or function expression or arithmetic expression on columns. The maximum number of allowed column names and expressions is 256. Timestamp and the corresponding tag names will be returned in the result set if `interval` or `group by tags` are used, and timestamp will always be the first column in the result set.
|
**TBNAME**
|
||||||
|
The TBNAME pseudocolumn in a supertable contains the names of subtables within the supertable.
|
||||||
|
|
||||||
## Table List
|
The following SQL statement returns all unique subtable names and locations within the meters supertable:
|
||||||
|
|
||||||
`FROM` can be followed by a number of tables or super tables, or can be followed by a sub-query. If no database is specified as current database in use, table names must be preceded with database name, like `power.d1001`.
|
```sql
|
||||||
|
SELECT DISTINCT TBNAME, location FROM meters;
|
||||||
```SQL
|
|
||||||
SELECT * FROM power.d1001;
|
|
||||||
```
|
```
|
||||||
|
|
||||||
has same effect as
|
Use the `INS_TAGS` system table in `INFORMATION_SCHEMA` to query the information for subtables in a supertable. For example, the following statement returns the name and tag values for each subtable in the `meters` supertable.
|
||||||
|
|
||||||
```SQL
|
```sql
|
||||||
USE power;
|
SELECT table_name, tag_name, tag_type, tag_value FROM information_schema.ins_tags WHERE stable_name='meters';
|
||||||
SELECT * FROM d1001;
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
The following SQL statement returns the number of subtables within the meters supertable.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT COUNT(*) FROM (SELECT DISTINCT TBNAME FROM meters);
|
||||||
|
```
|
||||||
|
|
||||||
|
In the preceding two statements, only tags can be used as filtering conditions in the WHERE clause.
|
||||||
|
|
||||||
|
**\_QSTART and \_QEND**
|
||||||
|
|
||||||
|
The \_QSTART and \_QEND pseudocolumns contain the beginning and end of the time range of a query. If the WHERE clause in a statement does not contain valid timestamps, the time range is equal to [-2^63, 2^63 - 1].
|
||||||
|
|
||||||
|
The \_QSTART and \_QEND pseudocolumns cannot be used in a WHERE clause.
|
||||||
|
|
||||||
|
**\_WSTART, \_WEND, and \_WDURATION**
|
||||||
|
|
||||||
|
The \_WSTART, \_WEND, and \_WDURATION pseudocolumns indicate the beginning, end, and duration of a window.
|
||||||
|
|
||||||
|
These pseudocolumns can be used only in time window-based aggregations and must occur after the aggregation clause.
|
||||||
|
|
||||||
|
**\_c0 and \_ROWTS**
|
||||||
|
|
||||||
|
In TDengine, the first column of all tables must be a timestamp. This column is the primary key of the table. The \_c0 and \_ROWTS pseudocolumns both represent the values of this column. These pseudocolumns enable greater flexibility and standardization. For example, you can use functions such as MAX and MIN with these pseudocolumns.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
select _rowts, max(current) from meters;
|
||||||
|
```
|
||||||
|
|
||||||
|
## Query Objects
|
||||||
|
|
||||||
|
`FROM` can be followed by a number of tables or super tables, or can be followed by a sub-query.
|
||||||
|
If no database is specified as the current database in use, table names must be preceded by the database name, for example, `power.d1001`.
|
||||||
|
|
||||||
|
You can perform INNER JOIN statements based on the primary key. The following conditions apply:
|
||||||
|
|
||||||
|
1. You can use FROM table list or an explicit JOIN clause.
|
||||||
|
2. For standard tables and subtables, you must specify an ON condition and the condition must be equivalent to the primary key.
|
||||||
|
3. For supertables, the ON condition must be equivalent to the primary key. In addition, the tag columns of the tables on which the INNER JOIN is performed must have a one-to-one relationship. You cannot specify an OR condition.
|
||||||
|
4. The tables that are included in a JOIN clause must be of the same type (supertable, standard table, or subtable).
|
||||||
|
5. You can include subqueries before and after the JOIN keyword.
|
||||||
|
6. You cannot include more than ten tables in a JOIN clause.
|
||||||
|
7. You cannot include a FILL clause and a JOIN clause in the same statement.
|
||||||
|
|
||||||
|
## GROUP BY
|
||||||
|
|
||||||
|
If you use a GROUP BY clause, the SELECT list can only include the following items:
|
||||||
|
|
||||||
|
1. Constants
|
||||||
|
2. Aggregate functions
|
||||||
|
3. Expressions that are consistent with the expression following the GROUP BY clause
|
||||||
|
4. Expressions that include the preceding expression
|
||||||
|
|
||||||
|
The GROUP BY clause groups each row of data by the value of the expression following the clause and returns a combined result for each group.
|
||||||
|
|
||||||
|
The expressions in a GROUP BY clause can include any column in any table or view. It is not necessary that the expressions appear in the SELECT list.
|
||||||
|
|
||||||
|
The GROUP BY clause does not guarantee that the results are ordered. If you want to ensure that grouped data is ordered, use the ORDER BY clause.
|
||||||
|
|
||||||
|
|
||||||
|
## PARTITION BY
|
||||||
|
|
||||||
|
The PARTITION BY clause is a TDengine-specific extension to standard SQL. This clause partitions data based on the part_list and performs computations per partition.
|
||||||
|
|
||||||
|
For more information, see TDengine Extensions.
|
||||||
|
|
||||||
|
## ORDER BY
|
||||||
|
|
||||||
|
The ORDER BY keyword orders query results. If you do not include an ORDER BY clause in a query, the order of the results can be inconsistent.
|
||||||
|
|
||||||
|
You can specify integers after ORDER BY to indicate the order in which you want the items in the SELECT list to be displayed. For example, 1 indicates the first item in the select list.
|
||||||
|
|
||||||
|
You can specify ASC for ascending order or DESC for descending order.
|
||||||
|
|
||||||
|
You can also use the NULLS keyword to specify the position of null values. Ascending order uses NULLS LAST by default. Descending order uses NULLS FIRST by default.
|
||||||
|
|
||||||
|
## LIMIT
|
||||||
|
|
||||||
|
The LIMIT keyword controls the number of results that are displayed. You can also use the OFFSET keyword to specify the result to display first. `LIMIT` and `OFFSET` are executed after `ORDER BY` in the query execution. You can include an offset in a LIMIT clause. For example, LIMIT 5 OFFSET 2 can also be written LIMIT 2, 5. Both of these clauses display the third through the seventh results.
|
||||||
|
|
||||||
|
In a statement that includes a PARTITION BY clause, the LIMIT keyword is applied to each partition, not to the entire set of results.
|
||||||
|
|
||||||
|
## SLIMIT
|
||||||
|
|
||||||
|
The SLIMIT keyword is used with a PARTITION BY clause to control the number of partitions that are displayed. You can include an offset in a SLIMIT clause. For example, SLIMIT 5 SOFFSET 2 can also be written SLIMIT 2, 5. Both of these clauses display the third through the seventh partitions.
|
||||||
|
|
||||||
|
Note: If you include an ORDER BY clause, only one partition can be displayed.
|
||||||
|
|
||||||
## Special Query
|
## Special Query
|
||||||
|
|
||||||
Some special query functions can be invoked without `FROM` sub-clause. For example, the statement below can be used to get the current database in use.
|
Some special query functions can be invoked without a `FROM` sub-clause.
|
||||||
|
|
||||||
```
|
### Obtain Current Database
|
||||||
taos> SELECT DATABASE();
|
|
||||||
database() |
|
The following SQL statement returns the current database. If a database has not been specified on login or with the `USE` command, a null value is returned.
|
||||||
=================================
|
|
||||||
power |
|
```sql
|
||||||
Query OK, 1 row(s) in set (0.000079s)
|
SELECT DATABASE();
|
||||||
```
|
```
|
||||||
|
|
||||||
If no database is specified upon logging in and no database is specified with `USE` after login, NULL will be returned by `select database()`.
|
### Obtain Current Version
|
||||||
|
|
||||||
```
|
```sql
|
||||||
taos> SELECT DATABASE();
|
SELECT CLIENT_VERSION();
|
||||||
database() |
|
SELECT SERVER_VERSION();
|
||||||
=================================
|
|
||||||
NULL |
|
|
||||||
Query OK, 1 row(s) in set (0.000184s)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
The statement below can be used to get the version of client or server.
|
### Obtain Server Status
|
||||||
|
|
||||||
```
|
The following SQL statement returns the status of the TDengine server. An integer indicates that the server is running normally. An error code indicates that an error has occurred. This statement can also detect whether a connection pool or third-party tool is connected to TDengine properly. By using this statement, you can ensure that connections in a pool are not lost due to an incorrect heartbeat detection statement.
|
||||||
taos> SELECT CLIENT_VERSION();
|
|
||||||
client_version() |
|
|
||||||
===================
|
|
||||||
2.0.0.0 |
|
|
||||||
Query OK, 1 row(s) in set (0.000070s)
|
|
||||||
|
|
||||||
taos> SELECT SERVER_VERSION();
|
```sql
|
||||||
server_version() |
|
SELECT SERVER_STATUS();
|
||||||
===================
|
|
||||||
2.0.0.0 |
|
|
||||||
Query OK, 1 row(s) in set (0.000077s)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
The statement below is used to check the server status. An integer, like `1`, is returned if the server status is OK, otherwise an error code is returned. This is compatible with the status check for TDengine from connection pool or 3rd party tools, and can avoid the problem of losing the connection from a connection pool when using the wrong heartbeat checking SQL statement.
|
### Obtain Current Time
|
||||||
|
|
||||||
```
|
```sql
|
||||||
taos> SELECT SERVER_STATUS();
|
SELECT NOW();
|
||||||
server_status() |
|
|
||||||
==================
|
|
||||||
1 |
|
|
||||||
Query OK, 1 row(s) in set (0.000074s)
|
|
||||||
|
|
||||||
taos> SELECT SERVER_STATUS() AS status;
|
|
||||||
status |
|
|
||||||
==============
|
|
||||||
1 |
|
|
||||||
Query OK, 1 row(s) in set (0.000081s)
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## \_block_dist
|
### Obtain Current Date
|
||||||
|
|
||||||
**Description**: Get the data block distribution of a table or STable.
|
```sql
|
||||||
|
SELECT TODAY();
|
||||||
```SQL title="Syntax"
|
|
||||||
SELECT _block_dist() FROM { tb_name | stb_name }
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**Restrictions**:No argument is allowed, where clause is not allowed
|
### Obtain Current Time Zone
|
||||||
|
|
||||||
**Sub Query**:Sub query or nested query are not supported
|
```sql
|
||||||
|
SELECT TIMEZONE();
|
||||||
**Return value**: A string which includes the data block distribution of the specified table or STable, i.e. the histogram of rows stored in the data blocks of the table or STable.
|
|
||||||
|
|
||||||
```text title="Result"
|
|
||||||
summary:
|
|
||||||
5th=[392], 10th=[392], 20th=[392], 30th=[392], 40th=[792], 50th=[792] 60th=[792], 70th=[792], 80th=[792], 90th=[792], 95th=[792], 99th=[792] Min=[392(Rows)] Max=[800(Rows)] Avg=[666(Rows)] Stddev=[2.17] Rows=[2000], Blocks=[3], Size=[5.440(Kb)] Comp=[0.23] RowsInMem=[0] SeekHeaderTime=[1(us)]
|
|
||||||
```
|
```
|
||||||
|
|
||||||
**More explanation about above example**:
|
|
||||||
|
|
||||||
- Histogram about the rows stored in the data blocks of the table or STable: the value of rows for 5%, 10%, 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, 95%, and 99%
|
|
||||||
- Minimum number of rows stored in a data block, i.e. Min=[392(Rows)]
|
|
||||||
- Maximum number of rows stored in a data block, i.e. Max=[800(Rows)]
|
|
||||||
- Average number of rows stored in a data block, i.e. Avg=[666(Rows)]
|
|
||||||
- stddev of number of rows, i.e. Stddev=[2.17]
|
|
||||||
- Total number of rows, i.e. Rows[2000]
|
|
||||||
- Total number of data blocks, i.e. Blocks=[3]
|
|
||||||
- Total disk size consumed, i.e. Size=[5.440(Kb)]
|
|
||||||
- Compression ratio, which means the compressed size divided by original size, i.e. Comp=[0.23]
|
|
||||||
- Total number of rows in memory, i.e. RowsInMem=[0], which means no rows in memory
|
|
||||||
- The time spent on reading head file (to retrieve data block information), i.e. SeekHeaderTime=[1(us)], which means 1 microsecond.
|
|
||||||
|
|
||||||
## Special Keywords in TAOS SQL
|
|
||||||
|
|
||||||
- `TBNAME`: it is treated as a special tag when selecting on a super table, representing the name of subtables in that super table.
|
|
||||||
- `_c0`: represents the first column of a table or super table.
|
|
||||||
|
|
||||||
## Tips
|
|
||||||
|
|
||||||
To get all the subtables and corresponding tag values from a super table:
|
|
||||||
|
|
||||||
```SQL
|
|
||||||
SELECT TBNAME, location FROM meters;
|
|
||||||
```
|
|
||||||
|
|
||||||
To get the number of sub tables in a super table:
|
|
||||||
|
|
||||||
```SQL
|
|
||||||
SELECT COUNT(TBNAME) FROM meters;
|
|
||||||
```
|
|
||||||
|
|
||||||
Only filters on tags are allowed in the `WHERE` clause of the above two query statements. For example:
|
|
||||||
|
|
||||||
```
|
|
||||||
taos> SELECT TBNAME, location FROM meters;
|
|
||||||
tbname | location |
|
|
||||||
==================================================================
|
|
||||||
d1004 | California.LosAngeles |
|
|
||||||
d1003 | California.LosAngeles |
|
|
||||||
d1002 | California.SanFrancisco |
|
|
||||||
d1001 | California.SanFrancisco |
|
|
||||||
Query OK, 4 row(s) in set (0.000881s)
|
|
||||||
|
|
||||||
taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
|
|
||||||
count(tbname) |
|
|
||||||
========================
|
|
||||||
2 |
|
|
||||||
Query OK, 1 row(s) in set (0.001091s)
|
|
||||||
```
|
|
||||||
|
|
||||||
- The wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operations can be performed on columns of numeric types, and columns can be renamed in the result set.
|
|
||||||
- Arithmetic operations on columns can't be used in the `WHERE` clause. For example, `where a*2>6;` is not allowed, but `where a>6/2;` can be used instead for the same purpose.
|
|
||||||
- Arithmetic operation on columns can't be used as the objectives of select statement. For example, `select min(2*a) from t;` is not allowed but `select 2*min(a) from t;` can be used instead.
|
|
||||||
- Logical operations can be used in the `WHERE` clause to filter numeric values, and wildcards can be used to filter string values.
|
|
||||||
- Result sets are arranged in ascending order of the first column, i.e. the timestamp, but they can also be output in descending order of timestamp. If `ORDER BY` is used on other columns, the result may not be as expected. Note that \_c0 is used to represent the first column, i.e. the timestamp.
|
|
||||||
- The `LIMIT` parameter is used to control the number of rows to output, and the `OFFSET` parameter specifies from which row output starts. `LIMIT` and `OFFSET` are executed after `ORDER BY` during query execution. A simple tip is that `LIMIT 5 OFFSET 2` can be abbreviated as `LIMIT 2, 5` (see the sketch after this list).
|
|
||||||
- What is controlled by `LIMIT` is the number of rows in each group when `GROUP BY` is used.
|
|
||||||
- `SLIMIT` parameter is used to control the number of groups when `GROUP BY` is used. Similar to `LIMIT`, `SLIMIT 5 OFFSET 2` can be abbreviated as `SLIMIT 2, 5`.
|
|
||||||
- ">>" can be used to output the result set of `select` statement to the specified file.
|
|
||||||
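As referenced in the list above, here is a minimal hedged sketch of the `LIMIT`/`OFFSET` abbreviation on the `meters` table used elsewhere in this document:

```sql
-- Skip the first 2 rows, then return the next 5 rows; equivalent to LIMIT 5 OFFSET 2
SELECT * FROM meters LIMIT 2, 5;
```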
|
|
||||||
## Where
|
|
||||||
|
|
||||||
Logical operations in below table can be used in the `where` clause to filter the resulting rows.
|
|
||||||
|
|
||||||
| **Operation** | **Note** | **Applicable Data Types** |
|
|
||||||
| ------------- | ------------------------ | ----------------------------------------- |
|
|
||||||
| > | larger than | all types except bool |
|
|
||||||
| < | smaller than | all types except bool |
|
|
||||||
| >= | larger than or equal to | all types except bool |
|
|
||||||
| <= | smaller than or equal to | all types except bool |
|
|
||||||
| = | equal to | all types |
|
|
||||||
| <\> | not equal to | all types |
|
|
||||||
| is [not] null | is null or is not null | all types |
|
|
||||||
| between and | within a certain range | all types except bool |
|
|
||||||
| in | match any value in a set | all types except first column `timestamp` |
|
|
||||||
| like | match a wildcard string | **`binary`** **`nchar`** |
|
|
||||||
| match/nmatch | filter regex | **`binary`** **`nchar`** |
|
|
||||||
|
|
||||||
**Explanations**:
|
|
||||||
|
|
||||||
- Operator `<\>` is equivalent to `!=`. Please note that this operator can't be used on the first column of any table, i.e. the timestamp column.
|
|
||||||
- Operator `like` is used together with wildcards to match strings
|
|
||||||
- '%' matches 0 or any number of characters, '\_' matches any single ASCII character.
|
|
||||||
- `\_` is used to match the \_ in the string.
|
|
||||||
- The maximum length of a wildcard string is 100 bytes from version 2.1.6.1 (before that, the maximum length was 20 bytes). `maxWildCardsLength` in `taos.cfg` can be used to control this threshold. A very long wildcard string may slow down the execution of the `LIKE` operator.
|
|
||||||
- `AND` keyword can be used to filter multiple columns simultaneously. AND/OR operation can be performed on single or multiple columns from version 2.3.0.0. However, before 2.3.0.0 `OR` can't be used on multiple columns.
|
|
||||||
- For timestamp column, only one condition can be used; for other columns or tags, `OR` keyword can be used to combine multiple logical operators. For example, `((value > 20 AND value < 30) OR (value < 12))`.
|
|
||||||
- From version 2.3.0.0, multiple conditions can be used on timestamp column, but the result set can only contain single time range.
|
|
||||||
- From version 2.0.17.0, operator `BETWEEN AND` can be used in the where clause, for example `WHERE col2 BETWEEN 1.5 AND 3.25` means the filter condition is equal to "1.5 ≤ col2 ≤ 3.25" (a combined example is sketched after this list).
|
|
||||||
- From version 2.1.4.0, operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not allowed. FLOAT and DOUBLE types are impacted by floating point precision errors. Only values that match the condition within the tolerance will be selected. Non-primary key column of timestamp type can be used with `IN`.
|
|
||||||
- From version 2.3.0.0, regular expression is supported in the where clause with keyword `match` or `nmatch`. The regular expression is case insensitive.
|
|
||||||
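As mentioned in the list above, the following is a hedged sketch that combines several of these operators on the `meters` table used elsewhere in this document; the literal values are only illustrative.

```sql
SELECT tbname, current, voltage
FROM meters
WHERE voltage BETWEEN 215 AND 225
  AND groupId IN (1, 2, 3)
  AND location LIKE 'California.%';
```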
|
|
||||||
## Regular Expression
|
## Regular Expression
|
||||||
|
|
||||||
### Syntax
|
### Syntax
|
||||||
|
|
||||||
```SQL
|
```txt
|
||||||
WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
|
WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
|
||||||
```
|
```
|
||||||
|
|
||||||
### Specification
|
### Specification
|
||||||
|
|
||||||
The regular expression being used must be compliant with POSIX specification, please refer to [Regular Expressions](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html).
|
TDengine supports POSIX regular expression syntax. For more information, see [Regular Expressions](https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap09.html).
|
||||||
|
|
||||||
### Restrictions
|
### Restrictions
|
||||||
|
|
||||||
Regular expression can be used against only table names, i.e. `tbname`, and tags of binary/nchar types, but can't be used against data columns.
|
Regular expression filtering is supported only on table names (TBNAME), BINARY tags, and NCHAR tags. Regular expression filtering cannot be performed on data columns.
|
||||||
|
|
||||||
The maximum length of regular expression string is 128 bytes. Configuration parameter `maxRegexStringLen` can be used to set the maximum allowed regular expression. It's a configuration parameter on the client side, and will take effect after restarting the client.
|
A regular expression string cannot exceed 128 bytes. You can configure this value by modifying the maxRegexStringLen parameter on the TDengine Client. The modified value takes effect when the client is restarted.
|
||||||
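As an illustration of the restrictions above, the following hedged sketch filters the `meters` supertable by table name and by a binary tag using POSIX regular expressions; the patterns are only examples.

```sql
-- Match subtable names that start with "d10" followed by digits
SELECT tbname, location FROM meters WHERE tbname MATCH '^d10[0-9]+$';

-- Exclude rows whose location tag contains "LosAngeles"
SELECT COUNT(*) FROM meters WHERE location NMATCH 'LosAngeles';
```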
|
|
||||||
## JOIN
|
## JOIN
|
||||||
|
|
||||||
From version 2.2.0.0, inner join is fully supported in TDengine. More specifically, the inner join between table and table, between STable and STable, and between sub query and sub query are supported.
|
TDengine supports natural joins between supertables, between standard tables, and between subqueries. The difference between natural joins and inner joins is that natural joins require that the fields being joined in the supertables or standard tables must have the same name. Data or tag columns must be joined with the equivalent column in another table.
|
||||||
|
|
||||||
Only primary key, i.e. timestamp, can be used in the join operation between table and table. For example:
|
For standard tables, only the timestamp (primary key) can be used in join operations. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT *
|
SELECT *
|
||||||
|
@ -356,25 +319,26 @@ FROM temp_tb_1 t1, pressure_tb_1 t2
|
||||||
WHERE t1.ts = t2.ts
|
WHERE t1.ts = t2.ts
|
||||||
```
|
```
|
||||||
|
|
||||||
In the join operation between STable and STable, besides the primary key, i.e. timestamp, tags can also be used. For example:
|
For supertables, tags as well as timestamps can be used in join operations. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT *
|
SELECT *
|
||||||
FROM temp_STable t1, temp_STable t2
|
FROM temp_stable t1, temp_stable t2
|
||||||
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
WHERE t1.ts = t2.ts AND t1.deviceid = t2.deviceid AND t1.status=0;
|
||||||
```
|
```
|
||||||
|
|
||||||
Similarly, join operations can be performed on the result set of multiple sub queries.
|
Similarly, join operations can be performed on the result sets of multiple subqueries.
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
Restrictions on join operation:
|
|
||||||
|
|
||||||
- The number of tables or STables in a single join operation can't exceed 10.
|
The following restrictions apply to JOIN statements:
|
||||||
- `FILL` is not allowed in the query statement that includes JOIN operation.
|
|
||||||
- Arithmetic operation is not allowed on the result set of join operation.
|
- The number of tables or supertables in a single join operation cannot exceed 10.
|
||||||
- `GROUP BY` is not allowed on a part of tables that participate in join operation.
|
- `FILL` cannot be used in a JOIN statement.
|
||||||
- `OR` can't be used in the conditions for join operation
|
- Arithmetic operations cannot be performed on the result sets of join operation.
|
||||||
- join operation can't be performed on data columns, i.e. can only be performed on tags or primary key, i.e. timestamp
|
- `GROUP BY` is not allowed on a segment of the tables that participate in a join operation.
|
||||||
|
- `OR` cannot be used in the conditions for join operation
|
||||||
|
- Join operation can be performed only on tags or timestamps. You cannot perform a join operation on data columns.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
|
@ -384,7 +348,7 @@ Nested query is also called sub query. This means that in a single SQL statement
|
||||||
|
|
||||||
From 2.2.0.0, unassociated sub query can be used in the `FROM` clause. Unassociated means the sub query doesn't use the parameters in the parent query. More specifically, in the `tb_name_list` of `SELECT` statement, an independent SELECT statement can be used. So a complete nested query looks like:
|
From 2.2.0.0, unassociated sub query can be used in the `FROM` clause. Unassociated means the sub query doesn't use the parameters in the parent query. More specifically, in the `tb_name_list` of `SELECT` statement, an independent SELECT statement can be used. So a complete nested query looks like:
|
||||||
|
|
||||||
```SQL
|
```
|
||||||
SELECT ... FROM (SELECT ... FROM ...) ...;
|
SELECT ... FROM (SELECT ... FROM ...) ...;
|
||||||
```
|
```
|
||||||
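For example, a hedged sketch of such a nested query on the `meters` table: the inner query downsamples voltage into 10-minute averages, and the outer query takes the maximum of those averages. Column and table names follow the examples used elsewhere in this document.

```sql
SELECT MAX(avg_voltage)
FROM (
  SELECT AVG(voltage) AS avg_voltage
  FROM meters
  INTERVAL(10m)
);
```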
|
|
||||||
|
@ -408,42 +372,42 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
|
||||||
|
|
||||||
## UNION ALL
|
## UNION ALL
|
||||||
|
|
||||||
```SQL title=Syntax
|
```txt title=Syntax
|
||||||
SELECT ...
|
SELECT ...
|
||||||
UNION ALL SELECT ...
|
UNION ALL SELECT ...
|
||||||
[UNION ALL SELECT ...]
|
[UNION ALL SELECT ...]
|
||||||
```
|
```
|
||||||
|
|
||||||
`UNION ALL` operator can be used to combine the result set from multiple select statements as long as the result set of these select statements have exactly the same columns. `UNION ALL` doesn't remove redundant rows from multiple result sets. In a single SQL statement, at most 100 `UNION ALL` can be supported.
|
TDengine supports the `UNION ALL` operation. `UNION ALL` operator can be used to combine the result set from multiple select statements as long as the result set of these select statements have exactly the same columns. `UNION ALL` doesn't remove redundant rows from multiple result sets. In a single SQL statement, at most 100 `UNION ALL` can be supported.
|
||||||
|
|
||||||
### Examples
|
### Examples
|
||||||
|
|
||||||
table `tb1` is created using below SQL statement:
|
table `tb1` is created using below SQL statement:
|
||||||
|
|
||||||
```SQL
|
```
|
||||||
CREATE TABLE tb1 (ts TIMESTAMP, col1 INT, col2 FLOAT, col3 BINARY(50));
|
CREATE TABLE tb1 (ts TIMESTAMP, col1 INT, col2 FLOAT, col3 BINARY(50));
|
||||||
```
|
```
|
||||||
|
|
||||||
The rows in the past one hour in `tb1` can be selected using below SQL statement:
|
The rows in the past one hour in `tb1` can be selected using below SQL statement:
|
||||||
|
|
||||||
```SQL
|
```
|
||||||
SELECT * FROM tb1 WHERE ts >= NOW - 1h;
|
SELECT * FROM tb1 WHERE ts >= NOW - 1h;
|
||||||
```
|
```
|
||||||
|
|
||||||
The rows between 2018-06-01 08:00:00.000 and 2018-06-02 08:00:00.000 and col3 ends with 'nny' can be selected in the descending order of timestamp using below SQL statement:
|
The rows between 2018-06-01 08:00:00.000 and 2018-06-02 08:00:00.000 and col3 ends with 'nny' can be selected in the descending order of timestamp using below SQL statement:
|
||||||
|
|
||||||
```SQL
|
```
|
||||||
SELECT * FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND ts <= '2018-06-02 08:00:00.000' AND col3 LIKE '%nny' ORDER BY ts DESC;
|
SELECT * FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND ts <= '2018-06-02 08:00:00.000' AND col3 LIKE '%nny' ORDER BY ts DESC;
|
||||||
```
|
```
|
||||||
|
|
||||||
The sum of col1 and col2 for rows later than 2018-06-01 08:00:00.000 and whose col2 is bigger than 1.2 can be selected and renamed as "complex", while only 10 rows are output from the 5th row, by below SQL statement:
|
The sum of col1 and col2 for rows later than 2018-06-01 08:00:00.000 and whose col2 is bigger than 1.2 can be selected and renamed as "complex", while only 10 rows are output from the 5th row, by below SQL statement:
|
||||||
|
|
||||||
```SQL
|
```
|
||||||
SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5;
|
SELECT (col1 + col2) AS 'complex' FROM tb1 WHERE ts > '2018-06-01 08:00:00.000' AND col2 > 1.2 LIMIT 10 OFFSET 5;
|
||||||
```
|
```
|
||||||
|
|
||||||
The rows in the past 10 minutes and whose col2 is bigger than 3.14 are selected and output to the result file `/home/testoutput.csv` with below SQL statement:
|
The rows in the past 10 minutes and whose col2 is bigger than 3.14 are selected and output to the result file `/home/testoutput.csv` with below SQL statement:
|
||||||
|
|
||||||
```SQL
|
```
|
||||||
SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutput.csv;
|
SELECT COUNT(*) FROM tb1 WHERE ts >= NOW - 10m AND col2 > 3.14 >> /home/testoutput.csv;
|
||||||
```
|
```
|
||||||
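The examples above do not actually combine result sets, so here is an additional hedged sketch of `UNION ALL` over the same `tb1` table; the time ranges are arbitrary.

```sql
SELECT ts, col1 FROM tb1 WHERE ts >= NOW - 1h
UNION ALL
SELECT ts, col1 FROM tb1 WHERE ts >= NOW - 2h AND ts < NOW - 1h;
```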
|
|
|
@ -4,8 +4,7 @@ description: "Delete data from table or Stable"
|
||||||
title: Delete Data
|
title: Delete Data
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine provides the functionality of deleting data from a table or STable according to specified time range, it can be used to cleanup abnormal data generated due to device failure. Please be noted that this functionality is only available in Enterprise version, please refer to [TDengine Enterprise Edition](https://tdengine.com/products#enterprise-edition-link)
|
TDengine provides the functionality of deleting data from a table or STable according to a specified time range; it can be used to clean up abnormal data generated due to device failure.
|
||||||
|
|
||||||
|
|
||||||
**Syntax:**
|
**Syntax:**
|
||||||
|
|
||||||
|
@ -19,7 +18,7 @@ DELETE FROM [ db_name. ] tb_name [WHERE condition];
|
||||||
|
|
||||||
- `db_name`: Optional parameter, specifies the database in which the table exists; if not specified, the current database will be used.
|
- `db_name`: Optional parameter, specifies the database in which the table exists; if not specified, the current database will be used.
|
||||||
- `tb_name`: Mandatory parameter, specifies the table name from which data will be deleted, it can be normal table, subtable or STable.
|
- `tb_name`: Mandatory parameter, specifies the table name from which data will be deleted, it can be normal table, subtable or STable.
|
||||||
- `condition`: Optional parameter, specifies the data filter condition. If no condition is specified all data will be deleted, so please be cautions to delete data without any condition. The condition used here is only applicable to the first column, i.e. the timestamp column. If the table is a STable, the condition is also applicable to tag columns.
|
- `condition`: Optional parameter, specifies the data filter condition. If no condition is specified, all data will be deleted, so please be cautious when deleting data without any condition. The condition used here is only applicable to the first column, i.e. the timestamp column.
|
||||||
|
|
||||||
**More Explanations:**
|
**More Explanations:**
|
||||||
|
|
||||||
|
@ -27,10 +26,10 @@ The data can't be recovered once deleted, so please be cautious to use the funct
|
||||||
|
|
||||||
**Example:**
|
**Example:**
|
||||||
|
|
||||||
`meters` is a STable, in which `groupid` is a tag column of int type. Now we want to delete the data older than 2021-10-01 10:40:00.100 and `groupid` is 1. The SQL for this purpose is like below:
|
`meters` is a STable, in which `groupid` is a tag column of int type. Now we want to delete the data older than 2021-10-01 10:40:00.100. You can perform this action by running the following SQL statement:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
delete from meters where ts < '2021-10-01 10:40:00.100' and groupid=1 ;
|
delete from meters where ts < '2021-10-01 10:40:00.100' ;
|
||||||
```
|
```
|
||||||
|
|
||||||
The output is:
|
The output is:
|
||||||
|
|
File diff suppressed because it is too large
Load Diff
|
@ -1,12 +1,94 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Distinguished
|
sidebar_label: Time-Series Extensions
|
||||||
title: Distinguished Query for Time Series Database
|
title: Time-Series Extensions
|
||||||
---
|
---
|
||||||
|
|
||||||
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every seconds, the average temperature for every 10 minutes can be retrieved by performing a query with a time window.
|
As a purpose-built database for storing and processing time-series data, TDengine provides time-series-specific extensions to standard SQL.
|
||||||
Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding window and flip time/tumbling window.
|
|
||||||
|
|
||||||
## Time Window
|
These extensions include tag-partitioned queries and windowed queries.
|
||||||
|
|
||||||
|
## Tag-Partitioned Queries
|
||||||
|
|
||||||
|
When you query a supertable, you may need to partition the supertable by tag and perform additional operations on a specific partition. In this case, you can use the following SQL clause:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
PARTITION BY part_list
|
||||||
|
```
|
||||||
|
|
||||||
|
part_list can be any scalar expression, such as a column, constant, scalar function, or a combination of the preceding items.
|
||||||
|
|
||||||
|
A PARTITION BY clause with a tag is processed as follows:
|
||||||
|
|
||||||
|
- The PARTITION BY clause must occur after the WHERE clause and cannot be used with a JOIN clause.
|
||||||
|
- The PARTITION BY clause partitions the super table by the specified tag group, and the specified calculation is performed on each partition. The calculation performed is determined by the rest of the statement - a window clause, GROUP BY clause, or SELECT clause.
|
||||||
|
- You can use PARTITION BY together with a window clause or GROUP BY clause. In this case, the window or GROUP BY clause takes effect on every partition. For example, the following statement partitions the table by the location tag, performs downsampling over a 10 minute window, and returns the maximum value:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
select max(current) from meters partition by location interval(10m)
|
||||||
|
```
|
||||||
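As noted above, PARTITION BY can also be combined with a GROUP BY clause instead of a window clause. The following hedged sketch groups each location partition by voltage value; the column names are only illustrative.

```sql
-- Within each location partition, count rows per distinct voltage value
SELECT voltage, COUNT(*) FROM meters PARTITION BY location GROUP BY voltage;
```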
|
|
||||||
|
## Windowed Queries
|
||||||
|
|
||||||
|
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every second, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window-related clauses are used to divide the data set to be queried into subsets, and then aggregation is performed across the subsets. There are three kinds of windows: time window, status window, and session window. There are two kinds of time windows: sliding windows and flip/tumbling windows. The query syntax is as follows:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT function_list FROM tb_name
|
||||||
|
[WHERE where_condition]
|
||||||
|
[SESSION(ts_col, tol_val)]
|
||||||
|
[STATE_WINDOW(col)]
|
||||||
|
[INTERVAL(interval [, offset]) [SLIDING sliding]]
|
||||||
|
[FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
|
||||||
|
```
|
||||||
|
|
||||||
|
The following restrictions apply:
|
||||||
|
|
||||||
|
### Restricted Functions
|
||||||
|
|
||||||
|
- Aggregate functions and select functions can be used in `function_list`, with each function having only one output. For example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF or arithmetic operations can't be used.
|
||||||
|
- `LAST_ROW` can't be used together with window aggregate.
|
||||||
|
- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
|
||||||
|
|
||||||
|
### Other Rules
|
||||||
|
|
||||||
|
- The window clause must occur after the PARTITION BY clause and before the GROUP BY clause. It cannot be used with a GROUP BY clause.
|
||||||
|
- SELECT clauses on windows can contain only the following expressions:
|
||||||
|
- Constants
|
||||||
|
- Aggregate functions
|
||||||
|
- Expressions that include the preceding expressions.
|
||||||
|
- The window clause cannot be used with a GROUP BY clause.
|
||||||
|
- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
|
||||||
|
|
||||||
|
|
||||||
|
### Window Pseudocolumns
|
||||||
|
|
||||||
|
**\_WSTART, \_WEND, and \_WDURATION**
|
||||||
|
|
||||||
|
The \_WSTART, \_WEND, and \_WDURATION pseudocolumns indicate the beginning, end, and duration of a window.
|
||||||
|
|
||||||
|
These pseudocolumns occur after the aggregation clause.
|
||||||
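A hedged sketch of how these pseudocolumns might be used, assuming the lowercase identifiers `_wstart`, `_wend`, and `_wduration` and the `meters` table from the examples later in this document:

```sql
SELECT _wstart, _wend, _wduration, AVG(current)
FROM meters
INTERVAL(10m);
```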
|
|
||||||
|
### FILL Clause
|
||||||
|
|
||||||
|
`FILL` clause is used to specify how to fill when there is data missing in any window, including:
|
||||||
|
|
||||||
|
1. NONE: No fill (the default fill mode)
|
||||||
|
2. VALUE: Fill with a fixed value, which must be specified along with the keyword, for example `FILL(VALUE, 1.23)`. Note: The value filled depends on the data type. For example, if you run `FILL(VALUE, 1.23)` on an integer column, the value 1 is filled.
|
||||||
|
3. PREV:Fill with the previous non-NULL value, `FILL(PREV)`
|
||||||
|
4. NULL:Fill with NULL, `FILL(NULL)`
|
||||||
|
5. LINEAR:Fill with the closest non-NULL value, `FILL(LINEAR)`
|
||||||
|
6. NEXT:Fill with the next non-NULL value, `FILL(NEXT)`
|
||||||
|
|
||||||
|
:::info
|
||||||
|
|
||||||
|
1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
|
||||||
|
2. The result set is in ascending order of timestamp when you aggregate by time window.
|
||||||
|
3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
|
||||||
|
|
||||||
|
:::
|
||||||
|
|
||||||
|
### Time Window
|
||||||
|
|
||||||
|
There are two kinds of time windows: sliding window and flip time/tumbling window.
|
||||||
|
|
||||||
The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure blow, [t0s, t0e] ,[t1s , t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time/tumbling window.
|
The `INTERVAL` clause is used to generate time windows of the same time interval. The `SLIDING` parameter is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], and [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip/tumbling window.
|
||||||
|
|
||||||
|
@ -24,86 +106,46 @@ The time step specified by `SLIDING` cannot exceed the time interval specified b
|
||||||
SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
|
SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
|
||||||
```
|
```
|
||||||
|
|
||||||
When the time length specified by `SLIDING` is the same as that specified by `INTERVAL`, the sliding window is actually a flip/tumbling window. The minimum time range specified by `INTERVAL` is 10 milliseconds (10a) prior to version 2.1.5.0. Since version 2.1.5.0, the minimum time range by `INTERVAL` can be 1 microsecond (1u). However, if the DB precision is millisecond, the minimum time range is 1 millisecond (1a). Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side.
|
When using time windows, note the following:
|
||||||
|
|
||||||
## Status Window
|
- The window length for aggregation depends on the value of INTERVAL. The minimum interval is 10 ms. You can configure a window as an offset from UTC 0:00. The offset must be smaller than the interval. You can use SLIDING to specify the length of time that the window moves forward.
|
||||||
|
Please note that the `timezone` parameter should be configured to be the same value in the `taos.cfg` configuration file on client side and server side.
|
||||||
|
- The result set is in ascending order of timestamp when you aggregate by time window.
|
||||||
|
|
||||||
|
### Status Window
|
||||||
|
|
||||||
In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.
|
In case of using integer, bool, or string to represent the status of a device at any given moment, continuous rows with the same status belong to a status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07,2019-04-28 14:22:10] and [2019-04-28 14:22:11,2019-04-28 14:22:12]. Status window is not applicable to STable for now.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
`STATE_WINDOW` is used to specify the column on which the status window will be based. For example:
|
`STATE_WINDOW` is used to specify the column on which the status window will be based. For example:
|
||||||
|
|
||||||
```
|
```
|
||||||
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
|
SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
|
||||||
```
|
```
|
||||||
|
|
||||||
## Session Window
|
### Session Window
|
||||||
|
|
||||||
```sql
|
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30] because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
|
||||||
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
|
|
||||||
```
|
|
||||||
|
|
||||||
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
|
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
If the time interval between two continuous rows are within the time interval specified by `tol_value` they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
|
If the time interval between two consecutive rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session windows are not supported on STables for now.
|
||||||
|
|
||||||
## More On Window Aggregate
|
|
||||||
|
|
||||||
### Syntax
|
|
||||||
|
|
||||||
The full syntax of aggregate by window is as follows:
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SELECT function_list FROM tb_name
|
|
||||||
[WHERE where_condition]
|
|
||||||
[SESSION(ts_col, tol_val)]
|
|
||||||
[STATE_WINDOW(col)]
|
|
||||||
[INTERVAL(interval [, offset]) [SLIDING sliding]]
|
|
||||||
[FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
|
|
||||||
|
|
||||||
SELECT function_list FROM stb_name
|
|
||||||
[WHERE where_condition]
|
|
||||||
[INTERVAL(interval [, offset]) [SLIDING sliding]]
|
|
||||||
[FILL({NONE | VALUE | PREV | NULL | LINEAR | NEXT})]
|
|
||||||
[GROUP BY tags]
|
|
||||||
```
|
```
|
||||||
|
|
||||||
### Restrictions
|
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
|
||||||
|
```
|
||||||
|
|
||||||
- Aggregate functions and select functions can be used in `function_list`, with each function having only one output. For example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple outputs, such as DIFF or arithmetic operations can't be used.
|
### Examples
|
||||||
- `LAST_ROW` can't be used together with window aggregate.
|
|
||||||
- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
|
|
||||||
- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
|
|
||||||
- `FILL` clause is used to specify how to fill when there is data missing in any window, including:
|
|
||||||
1. NONE: No fill (the default fill mode)
|
|
||||||
2. VALUE:Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)`
|
|
||||||
3. PREV:Fill with the previous non-NULL value, `FILL(PREV)`
|
|
||||||
4. NULL:Fill with NULL, `FILL(NULL)`
|
|
||||||
5. LINEAR:Fill with the closest non-NULL value, `FILL(LINEAR)`
|
|
||||||
6. NEXT:Fill with the next non-NULL value, `FILL(NEXT)`
|
|
||||||
|
|
||||||
:::info
|
|
||||||
|
|
||||||
1. A huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
|
|
||||||
2. The result set is in ascending order of timestamp when you aggregate by time window.
|
|
||||||
3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
Aggregate by time window is also used in continuous query, please refer to [Continuous Query](/develop/continuous-query).
|
|
||||||
|
|
||||||
## Examples
|
|
||||||
|
|
||||||
A table of intelligent meters can be created by the SQL statement below:
|
A table of intelligent meters can be created by the SQL statement below:
|
||||||
|
|
||||||
```sql
|
```
|
||||||
CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
|
CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
|
||||||
```
|
```
|
||||||
|
|
||||||
The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL values.
|
The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the SQL statement below, with missing values filled with the previous non-NULL values. The query statement is as follows:
|
||||||
|
|
||||||
```
|
```
|
||||||
SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
|
SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
|
||||||
|
|
|
@ -1,41 +1,34 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 消息队列
|
sidebar_label: Data Subscription
|
||||||
title: 消息队列
|
title: Data Subscription
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 3.0.0.0 开始对消息队列做了大幅的优化和增强以简化用户的解决方案。
|
The information in this document is related to the TDengine data subscription feature.
|
||||||
|
|
||||||
## 创建订阅主题
|
## Create a Topic
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE TOPIC [IF NOT EXISTS] topic_name AS {subquery | DATABASE db_name | STABLE stb_name };
|
CREATE TOPIC [IF NOT EXISTS] topic_name AS subquery;
|
||||||
```
|
```
|
||||||
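A minimal sketch, assuming the `meters` supertable used elsewhere in these docs; the topic name and filter are only illustrative.

```sql
CREATE TOPIC IF NOT EXISTS power_topic AS
SELECT ts, current, voltage FROM meters WHERE voltage > 200;
```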
|
|
||||||
There are three types of subscription topics: column subscription, supertable subscription, and database subscription.
|
|
||||||
|
|
||||||
**列订阅是**用 subquery 描述,支持过滤和标量函数和 UDF 标量函数,不支持 JOIN、GROUP BY、窗口切分子句、聚合函数和 UDF 聚合函数。列订阅规则如下:
|
You can use filtering, scalar functions, and user-defined scalar functions with a topic. JOIN, GROUP BY, windows, aggregate functions, and user-defined aggregate functions are not supported. The following rules apply to subscribing to a column:
|
||||||
|
|
||||||
1. TOPIC 一旦创建则返回结果的字段确定
|
1. The returned field is determined when the topic is created.
|
||||||
2. 被订阅或用于计算的列不可被删除、修改
|
2. Columns to which a consumer is subscribed or that are involved in calculations cannot be deleted or modified.
|
||||||
3. 列可以新增,但新增的列不出现在订阅结果字段中
|
3. If you add a column, the new column will not appear in the results for the subscription.
|
||||||
4. 对于 select \*,则订阅展开为创建时所有的列(子表、普通表为数据列,超级表为数据列加标签列)
|
4. If you run `SELECT \*`, all columns in the subscription at the time of its creation are displayed. This includes columns in supertables, standard tables, and subtables. Supertables are shown as data columns plus tag columns.
|
||||||
|
|
||||||
The following rules apply to **supertable and database subscriptions**:
|
|
||||||
|
|
||||||
1. There are no restrictions on schema changes to the subscribed object.
|
## Delete a Topic
|
||||||
2. The schema in returned messages is per block; the schema of each block may differ.
|
|
||||||
3. Data written after a column change that has not yet been flushed to disk is returned with the schema in effect at write time.
|
|
||||||
4. Data written after a column change that has already been flushed to disk is returned with the schema in effect at flush time.
|
|
||||||
|
|
||||||
## 删除订阅主题
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP TOPIC [IF EXISTS] topic_name;
|
DROP TOPIC [IF EXISTS] topic_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
此时如果该订阅主题上存在 consumer,则此 consumer 会收到一个错误。
|
If a consumer is subscribed to the topic that you delete, the consumer will receive an error.
|
||||||
|
|
||||||
## 查看订阅主题
|
## View Topics
|
||||||
|
|
||||||
## SHOW TOPICS
|
## SHOW TOPICS
|
||||||
|
|
||||||
|
@ -43,24 +36,24 @@ DROP TOPIC [IF EXISTS] topic_name;
|
||||||
SHOW TOPICS;
|
SHOW TOPICS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前数据库下的所有主题的信息。
|
The preceding command displays all topics in the current database.
|
||||||
|
|
||||||
## 创建消费组
|
## Create Consumer Group
|
||||||
|
|
||||||
消费组的创建只能通过 TDengine 客户端驱动或者连接器所提供的 API 创建。
|
You can create consumer groups only through the TDengine Client driver or the API provided by a connector.
|
||||||
|
|
||||||
## 删除消费组
|
## Delete Consumer Group
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP CONSUMER GROUP [IF EXISTS] cgroup_name ON topic_name;
|
DROP CONSUMER GROUP [IF EXISTS] cgroup_name ON topic_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
删除主题 topic_name 上的消费组 cgroup_name。
|
This statement deletes the consumer group `cgroup_name` from the topic `topic_name`.
|
||||||
|
|
||||||
## 查看消费组
|
## View Consumer Groups
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW CONSUMERS;
|
SHOW CONSUMERS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前数据库下所有活跃的消费者的信息。
|
The preceding command displays all active consumers in the current database.
|
||||||
|
|
|
@ -1,13 +1,13 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 流式计算
|
sidebar_label: Stream Processing
|
||||||
title: 流式计算
|
title: Stream Processing
|
||||||
---
|
---
|
||||||
|
|
||||||
在时序数据的处理中,经常要对原始数据进行清洗、预处理,再使用时序数据库进行长久的储存。用户通常需要在时序数据库之外再搭建 Kafka、Flink、Spark 等流计算处理引擎,增加了用户的开发成本和维护成本。
|
Raw time-series data is often cleaned and preprocessed before being permanently stored in a database. Stream processing components like Kafka, Flink, and Spark are often deployed alongside a time-series database to handle these operations, increasing system complexity and maintenance costs.
|
||||||
|
|
||||||
使用 TDengine 3.0 的流式计算引擎能够最大限度的减少对这些额外中间件的依赖,真正将数据的写入、预处理、长期存储、复杂分析、实时计算、实时报警触发等功能融为一体,并且,所有这些任务只需要使用 SQL 完成,极大降低了用户的学习成本、使用成本。
|
Because stream processing is built in to TDengine, you are no longer reliant on middleware. TDengine offers a unified platform for writing, preprocessing, permanent storage, complex analysis, and real-time computation and alerting. Additionally, you can use SQL to perform all these tasks.
|
||||||
|
|
||||||
## 创建流式计算
|
## Create a Stream
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
|
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
|
||||||
|
@ -18,7 +18,7 @@ stream_options: {
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
其中 subquery 是 select 普通查询语法的子集:
|
The subquery is a subset of standard SELECT query syntax:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
subquery: SELECT [DISTINCT] select_list
|
subquery: SELECT [DISTINCT] select_list
|
||||||
|
@ -26,97 +26,74 @@ subquery: SELECT [DISTINCT] select_list
|
||||||
[WHERE condition]
|
[WHERE condition]
|
||||||
[PARTITION BY tag_list]
|
[PARTITION BY tag_list]
|
||||||
[window_clause]
|
[window_clause]
|
||||||
[group_by_clause]
|
|
||||||
```
|
```
|
||||||
|
|
||||||
ORDER BY, LIMIT, SLIMIT, and FILL clauses are not supported.
|
Session windows, state windows, and sliding windows are supported. When you configure a session or state window for a supertable, you must use PARTITION BY TBNAME.
|
||||||
|
|
||||||
例如,如下语句创建流式计算,同时自动创建名为 avg_vol 的超级表,此流计算以一分钟为时间窗口、30 秒为前向增量统计这些电表的平均电压,并将来自 meters 表的数据的计算结果写入 avg_vol 表,不同 partition 的数据会分别创建子表并写入不同子表。
|
```sql
|
||||||
|
window_clause: {
|
||||||
|
SESSION(ts_col, tol_val)
|
||||||
|
| STATE_WINDOW(col)
|
||||||
|
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)]
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
`SESSION` indicates a session window, and `tol_val` indicates the maximum range of the time interval. If the time interval between two continuous rows are within the time interval specified by `tol_val` they belong to the same session window; otherwise a new session window is started automatically.
|
||||||
|
|
||||||
|
For example, the following SQL statement creates a stream and automatically creates a supertable named `avg_vol`. The stream has a 1 minute time window that slides forward in 30 second intervals to calculate the average voltage of the meters supertable.
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE STREAM avg_vol_s INTO avg_vol AS
|
CREATE STREAM avg_vol_s INTO avg_vol AS
|
||||||
SELECT _wstartts, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
|
SELECT _wstartts, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
|
||||||
```
|
```
|
||||||
|
|
||||||
## 删除流式计算
|
## Delete a Stream
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP STREAM [IF NOT EXISTS] stream_name
|
DROP STREAM [IF NOT EXISTS] stream_name
|
||||||
```
|
```
|
||||||
|
|
||||||
仅删除流式计算任务,由流式计算写入的数据不会被删除。
|
This statement deletes the stream processing service only. The data generated by the stream is retained.
|
||||||
|
|
||||||
## 展示流式计算
|
## View Streams
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW STREAMS;
|
SHOW STREAMS;
|
||||||
```
|
```
|
||||||
|
|
||||||
## 流式计算的触发模式
|
## Trigger Stream Processing
|
||||||
|
|
||||||
在创建流时,可以通过 TRIGGER 指令指定流式计算的触发模式。
|
When you create a stream, you can use the TRIGGER parameter to specify triggering conditions for it.
|
||||||
|
|
||||||
对于非窗口计算,流式计算的触发是实时的;对于窗口计算,目前提供 3 种触发模式:
|
For non-windowed processing, triggering occurs in real time. For windowed processing, there are three methods of triggering:
|
||||||
|
|
||||||
1. AT_ONCE:写入立即触发
|
1. AT_ONCE: triggers on write
|
||||||
|
|
||||||
2. WINDOW_CLOSE:窗口关闭时触发(窗口关闭由事件时间决定,可配合 watermark 使用,详见《流式计算的乱序数据容忍策略》)
|
2. WINDOW_CLOSE: triggers when the window closes. This is determined by the event time. You can use WINDOW_CLOSE together with `watermark`. For more information, see Stream Processing Strategy for Out-of-Order Data.
|
||||||
|
|
||||||
3. MAX_DELAY time:若窗口关闭,则触发计算。若窗口未关闭,且未关闭时长超过 max delay 指定的时间,则触发计算。
|
3. MAX_DELAY: triggers when the window closes. If the window has not closed but the time elapsed exceeds MAX_DELAY, stream processing is also triggered.
|
||||||
|
|
||||||
由于窗口关闭是由事件时间决定的,如事件流中断、或持续延迟,则事件时间无法更新,可能导致无法得到最新的计算结果。
|
Because the window closing is determined by the event time, a delay or termination of an event stream will prevent the event time from being updated. This may result in an inability to obtain the latest results.
|
||||||
|
|
||||||
因此,流式计算提供了以事件时间结合处理时间计算的 MAX_DELAY 触发模式。
|
For this reason, MAX_DELAY is provided as a way to ensure that processing occurs even if the window does not close.
|
||||||
|
|
||||||
MAX_DELAY 模式在窗口关闭时会立即触发计算。此外,当数据写入后,计算触发的时间超过 max delay 指定的时间,则立即触发计算
|
MAX_DELAY also triggers when the window closes. Additionally, if a write occurs but the processing is not triggered before MAX_DELAY expires, processing is also triggered.
|
||||||
|
|
||||||
## 流式计算的乱序数据容忍策略
|
## Stream Processing Strategy for Out-of-Order Data
|
||||||
|
|
||||||
在创建流时,可以在 stream_option 中指定 watermark。
|
When you create a stream, you can specify a watermark in the `stream_option` parameter.
|
||||||
|
|
||||||
流式计算通过 watermark 来度量对乱序数据的容忍程度,watermark 默认为 0。
|
The watermark is used to specify the tolerance for out-of-order data. The default value is 0.
|
||||||
|
|
||||||
T = 最新事件时间 - watermark
|
T = latest event time - watermark
|
||||||
|
|
||||||
每批到来的数据都会以上述公式更新窗口关闭时间,并将窗口结束时间 < T 的所有打开的窗口关闭,若触发模式为 WINDOW_CLOSE 或 MAX_DELAY,则推送窗口聚合结果。
|
The window closing time for each batch of data that arrives at the system is updated using the preceding formula, and all windows are closed whose closing time is less than T. If the triggering method is WINDOW_CLOSE or MAX_DELAY, the aggregate result for the window is pushed.
|
||||||
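A hedged sketch that combines a trigger mode with a watermark in `stream_options`; the option spelling follows the trigger modes and watermark described above, and the stream and table names are only illustrative.

```sql
CREATE STREAM IF NOT EXISTS avg_vol_delayed
TRIGGER MAX_DELAY 5s WATERMARK 10s
INTO avg_vol_delayed_st AS
SELECT _wstartts, AVG(voltage) FROM meters PARTITION BY tbname INTERVAL(1m);
```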
|
|
||||||
流式计算的过期数据处理策略
|
Stream processing strategy for expired data
|
||||||
对于已关闭的窗口,再次落入该窗口中的数据被标记为过期数据,对于过期数据,流式计算提供两种处理方式:
|
The data in expired windows is tagged as expired. TDengine stream processing provides two methods for handling such data:
|
||||||
|
|
||||||
1. 直接丢弃:这是常见流式计算引擎提供的默认(甚至是唯一)计算模式
|
1. Drop the data. This is the default and often only handling method for most stream processing engines.
|
||||||
|
|
||||||
2. 重新计算:从 TSDB 中重新查找对应窗口的所有数据并重新计算得到最新结果
|
2. Recalculate the data. In this method, all data in the window is reobtained from the database and recalculated. The latest results are then returned.
|
||||||
|
|
||||||
无论在哪种模式下,watermark 都应该被妥善设置,来得到正确结果(直接丢弃模式)或避免频繁触发重算带来的性能开销(重新计算模式)。
|
In both of these methods, configuring the watermark is essential for obtaining accurate results (if expired data is dropped) and avoiding repeated triggers that affect system performance (if expired data is recalculated).
|
||||||
|
|
||||||
## Stream Processing Data Fill Strategy
|
|
||||||
|
|
||||||
TODO
|
|
||||||
|
|
||||||
## Stream Processing and Session Windows
|
|
||||||
|
|
||||||
```sql
|
|
||||||
window_clause: {
|
|
||||||
SESSION(ts_col, tol_val)
|
|
||||||
| STATE_WINDOW(col)
|
|
||||||
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [FILL(fill_mod_and_val)]
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
其中,SESSION 是会话窗口,tol_val 是时间间隔的最大范围。在 tol_val 时间间隔范围内的数据都属于同一个窗口,如果连续的两条数据的时间超过 tol_val,则自动开启下一个窗口。
|
|
||||||
|
|
||||||
## Stream Processing Monitoring and Task Distribution Queries
|
|
||||||
|
|
||||||
TODO
|
|
||||||
|
|
||||||
## Stream Processing Memory Control and Separation of Storage and Compute
|
|
||||||
|
|
||||||
TODO
|
|
||||||
|
|
||||||
## Pausing and Resuming Streams
|
|
||||||
|
|
||||||
```sql
|
|
||||||
STOP STREAM stream_name;
|
|
||||||
|
|
||||||
RESUME STREAM stream_name;
|
|
||||||
```
|
|
||||||
|
|
|
@ -5,62 +5,62 @@ title: Operators
|
||||||
|
|
||||||
## Arithmetic Operators
|
## Arithmetic Operators
|
||||||
|
|
||||||
| # | **Operator** | **Data Types** | **Description** |
|
| # | **Operator** | **Supported Data Types** | **Description** |
|
||||||
| --- | :----------: | -------------- | --------------------------------------------------------- |
|
| --- | :--------: | -------------- | -------------------------- |
|
||||||
| 1 | +, - | Numeric Types | Representing positive or negative numbers, unary operator |
|
| 1 | +, - | Numeric | Expresses sign. Unary operators. |
|
||||||
| 2 | +, - | Numeric Types | Addition and substraction, binary operator |
|
| 2 | +, - | Numeric | Expresses addition and subtraction. Binary operators. |
|
||||||
| 3 | \*, / | Numeric Types | Multiplication and division, binary oeprator |
|
| 3 | \*, / | Numeric | Expresses multiplication and division. Binary operators. |
|
||||||
| 4 | % | Numeric Types | Taking the remainder, binary operator |
|
| 4 | % | Numeric | Expresses modulo. Binary operator. |
|
||||||
|
|
||||||
## Bitwise Operators
|
## Bitwise Operators
|
||||||
|
|
||||||
| # | **Operator** | **Data Types** | **Description** |
|
| # | **Operator** | **Supported Data Types** | **Description** |
|
||||||
| --- | :----------: | -------------- | ----------------------------- |
|
| --- | :--------: | -------------- | ------------------ |
|
||||||
| 1 | & | Numeric Types | Bitewise AND, binary operator |
|
| 1 | & | Numeric | Bitwise AND. Binary operator. |
|
||||||
| 2 | \| | Numeric Types | Bitewise OR, binary operator |
|
| 2 | \| | Numeric | Bitwise OR. Binary operator. |
|
||||||
|
|
||||||
## JSON Operator
|
## JSON Operators
|
||||||
|
|
||||||
`->` operator can be used to get the value of a key in a column of JSON type, the left oeprand is the column name, the right operand is a string constant. For example, `col->'name'` returns the value of key `'name'`.
|
The `->` operator returns the value for a key in a JSON column. Specify the column indicator on the left of the operator and the key name (as a string constant) on the right of the operator. For example, `col->'name'` returns the value of the `name` key.
|
||||||
|
|
||||||
## Set Operator
|
## Set Operators
|
||||||
|
|
||||||
Set operators are used to combine the results of two queries into single result. A query including set operators is called a combined query. The number of rows in each result in a combined query must be same, and the type is determined by the first query's result, the type of the following queriess result must be able to be converted to the type of the first query's result, the conversion rule is same as `CAST` function.
|
Set operators combine the results of two queries. Queries that include set operators are known as compound queries. The expressions corresponding to each query in the select list in a compound query must match in number. The results returned take the data type of the first query, and the data type returned by subsequent queries must be convertible into the data type of the first query. The conditions of the `CAST` function apply to this conversion.
|
||||||
|
|
||||||
TDengine provides 2 set operators: `UNION ALL` and `UNION`. `UNION ALL` combines the results without removing duplicate data. `UNION` combines the results and remove duplicate data rows. In single SQL statement, at most 100 set operators can be used.
|
TDengine supports the `UNION` and `UNION ALL` operations. UNION ALL collects all query results and returns them as a composite result without deduplication. UNION collects all query results and returns them as a deduplicated composite result. In a single SQL statement, at most 100 set operators can be supported.
|
||||||
|
|
||||||
## Comparsion Operator
|
## Comparison Operators
|
||||||
|
|
||||||
| # | **Operator** | **Data Types** | **Description** |
|
| # | **Operator** | **Supported Data Types** | **Description** |
|
||||||
| --- | :---------------: | ------------------------------------------------------------------- | ----------------------------------------------- |
|
| --- | :---------------: | -------------------------------------------------------------------- | -------------------- |
|
||||||
| 1 | = | Except for BLOB, MEDIUMBLOB and JSON | Equal |
|
| 1 | = | All types except BLOB, MEDIUMBLOB, and JSON | Equal to |
|
||||||
| 2 | <\>, != | Except for BLOB, MEDIUMBLOB, JSON and primary key of timestamp type | Not equal |
|
| 2 | <\>, != | All types except BLOB, MEDIUMBLOB, and JSON; the primary key (timestamp) is also not supported | Not equal to |
|
||||||
| 3 | \>, < | Except for BLOB, MEDIUMBLOB and JSON | Greater than, less than |
|
| 3 | \>, < | All types except BLOB, MEDIUMBLOB, and JSON | Greater than and less than |
|
||||||
| 4 | \>=, <= | Except for BLOB, MEDIUMBLOB and JSON | Greater than or equal to, less than or equal to |
|
| 4 | \>=, <= | All types except BLOB, MEDIUMBLOB, and JSON | Greater than or equal to and less than or equal to |
|
||||||
| 5 | IS [NOT] NULL | Any types | Is NULL or NOT |
|
| 5 | IS [NOT] NULL | All types | Indicates whether the value is null |
|
||||||
| 6 | [NOT] BETWEEN AND | Except for BLOB, MEDIUMBLOB and JSON | In a value range or not |
|
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, and JSON | Closed interval comparison |
|
||||||
| 7 | IN | Except for BLOB, MEDIUMBLOB, JSON and primary key of timestamp type | In a list of values or not |
|
| 7 | IN | All types except BLOB, MEDIUMBLOB, and JSON; the primary key (timestamp) is also not supported | Equal to any value in the list |
|
||||||
| 8 | LIKE | BINARY, NCHAR and VARCHAR | Wildcard matching |
|
| 8 | LIKE | BINARY, NCHAR, and VARCHAR | Wildcard match |
|
||||||
| 9 | MATCH, NMATCH | BINARY, NCHAR and VARCHAR | Regular expression matching |
|
| 9 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
||||||
| 10 | CONTAINS | JSON | If A key exists in JSON |
|
| 10 | CONTAINS | JSON | Indicates whether the key exists |
|
||||||
|
|
||||||
`LIKE` operator uses wildcard to match a string, the rules are:
|
LIKE is used together with wildcards to match strings. Its usage is described as follows:
|
||||||
|
|
||||||
- '%' matches 0 to any number of characters; '\_' matches any single ASCII character.
|
- '%' matches 0 or any number of characters, '\_' matches any single ASCII character.
|
||||||
- \_ can be used to match a `_` in the string, i.e. using escape character backslash `\`
|
- `\_` is used to match the \_ in the string.
|
||||||
- Wildcard string is 100 bytes at most. Longer a wildcard string is, worse the performance of LIKE operator is.
|
- The maximum length of a wildcard string is 100 bytes. A very long wildcard string may slow down the execution of the `LIKE` operator.
|
||||||
|
|
||||||
`MATCH` and `NMATCH` operators use regular expressions to match a string, the rules are:
|
MATCH and NMATCH are used together with regular expressions to match strings. Their usage is described as follows:
|
||||||
|
|
||||||
- Regular expressions of POSIX standard are supported.
|
- Use POSIX regular expression syntax. For more information, see Regular Expressions.
|
||||||
- Only `tbname`, i.e. table name of sub tables, and tag columns of string types can be matched with regular expression, data columns are not supported.
|
- Regular expression can be used against only table names, i.e. `tbname`, and tags of binary/nchar types, but can't be used against data columns.
|
||||||
- Regular expression string is 128 bytes at most, and can be adjusted by setting parameter `maxRegexStringLen`, which is a client side configuration and needs to restart the client to take effect.
|
- The maximum length of regular expression string is 128 bytes. Configuration parameter `maxRegexStringLen` can be used to set the maximum allowed regular expression. It's a configuration parameter on the client side, and will take effect after restarting the client.
|
||||||
|
|
||||||
## Logical Operators
|
## Logical Operators
|
||||||
|
|
||||||
| # | **Operator** | **Data Types** | **Description** |
|
| # | **Operator** | **Supported Data Types** | **Description** |
|
||||||
| --- | :----------: | -------------- | ---------------------------------------------------------------------------------------- |
|
| --- | :--------: | -------------- | --------------------------------------------------------------------------- |
|
||||||
| 1 | AND | BOOL | Logical AND, return TRUE if both conditions are TRUE; return FALSE if any one is FALSE. |
|
| 1 | AND | BOOL | Logical AND; if both conditions are true, TRUE is returned; if either condition is false, FALSE is returned. |
|
||||||
| 2 | OR | BOOL | Logical OR, return TRUE if any condition is TRUE; return FALSE if both are FALSE |
|
| 2 | OR | BOOL | Logical OR; if either condition is true, TRUE is returned; if both conditions are false, FALSE is returned. |
|
||||||
|
|
||||||
TDengine uses shortcircut optimization when performing logical operations. For AND operator, if the first condition is evaluated to FALSE, then the second one is not evaluated. For OR operator, if the first condition is evaluated to TRUE, then the second one is not evaluated.
|
TDengine performs short-circuit optimization when evaluating logical conditions. If the first condition of an AND is false, FALSE is returned without evaluating the second condition. If the first condition of an OR is true, TRUE is returned without evaluating the second condition.
|
||||||
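A small illustration of combining BOOL conditions; `meters`, `voltage`, and `current` are hypothetical names:

```sql
-- AND: both comparisons must be true; if the first is false the second is not evaluated
SELECT * FROM meters WHERE voltage > 220 AND current > 10;

-- OR: either comparison may be true; if the first is true the second is not evaluated
SELECT * FROM meters WHERE voltage > 240 OR current > 12;
```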
|
|
|
@ -1,60 +1,64 @@
|
||||||
---
|
---
|
||||||
|
sidebar_label: JSON Type
|
||||||
title: JSON Type
|
title: JSON Type
|
||||||
---
|
---
|
||||||
|
|
||||||
|
|
||||||
## Syntax
|
## Syntax
|
||||||
|
|
||||||
1. Tag of type JSON
|
1. Tag of type JSON
|
||||||
|
|
||||||
```sql
|
```
|
||||||
create STable s1 (ts timestamp, v1 int) tags (info json);
|
create stable s1 (ts timestamp, v1 int) tags (info json)
|
||||||
|
|
||||||
create table s1_1 using s1 tags ('{"k1": "v1"}');
|
create table s1_1 using s1 tags ('{"k1": "v1"}')
|
||||||
```
|
```
|
||||||
|
|
||||||
2. "->" Operator of JSON
|
2. "->" Operator of JSON
|
||||||
|
|
||||||
```sql
|
```
|
||||||
select * from s1 where info->'k1' = 'v1';
|
select * from s1 where info->'k1' = 'v1'
|
||||||
|
|
||||||
select info->'k1' from s1;
|
select info->'k1' from s1
|
||||||
```
|
```
|
||||||
|
|
||||||
3. "contains" Operator of JSON
|
3. "contains" Operator of JSON
|
||||||
|
|
||||||
```sql
|
```
|
||||||
select * from s1 where info contains 'k2';
|
select * from s1 where info contains 'k2'
|
||||||
|
|
||||||
select * from s1 where info contains 'k1';
|
select * from s1 where info contains 'k1'
|
||||||
```
|
```
|
||||||
|
|
||||||
## Applicable Operations
|
## Applicable Operations
|
||||||
|
|
||||||
1. When a JSON data type is used in `where`, `match/nmatch/between and/like/and/or/is null/is no null` can be used but `in` can't be used.
|
1. When a JSON data type is used in `where`, `match/nmatch/between and/like/and/or/is null/is no null` can be used but `in` can't be used.
|
||||||
|
|
||||||
```sql
|
```
|
||||||
select * from s1 where info->'k1' match 'v*';
|
select * from s1 where info->'k1' match 'v*';
|
||||||
|
|
||||||
select * from s1 where info->'k1' like 'v%' and info contains 'k2';
|
select * from s1 where info->'k1' like 'v%' and info contains 'k2';
|
||||||
|
|
||||||
select * from s1 where info is null;
|
select * from s1 where info is null;
|
||||||
|
|
||||||
select * from s1 where info->'k1' is not null;
|
select * from s1 where info->'k1' is not null
|
||||||
```
|
```
|
||||||
|
|
||||||
2. A tag of JSON type can be used in `group by`, `order by`, `join`, `union all` and sub query; for example `group by json->'key'`
|
2. A tag of JSON type can be used in `group by`, `order by`, `join`, `union all` and sub query; for example `group by json->'key'`
|
||||||
|
|
||||||
3. `Distinct` can be used with a tag of type JSON
|
3. `Distinct` can be used with a tag of type JSON
|
||||||
|
|
||||||
```sql
|
```
|
||||||
select distinct info->'k1' from s1;
|
select distinct info->'k1' from s1
|
||||||
```
|
```
|
||||||
|
|
||||||
4. Tag Operations
|
4. Tag Operations
|
||||||
|
|
||||||
The value of a JSON tag can be altered. Please note that the full JSON will be overriden when doing this.
|
The value of a JSON tag can be altered. Please note that the full JSON value is overwritten when doing this.
|
||||||
|
|
||||||
The name of a JSON tag can be altered. A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed.
|
The name of a JSON tag can be altered.
|
||||||
|
|
||||||
|
A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed.
|
||||||
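A sketch of these tag operations, reusing the `s1`/`s1_1` example above; the `SET TAG` and `RENAME TAG` forms are assumed here, and the whole JSON value is replaced when it is set:

```sql
-- overwrite the full JSON value of the tag on subtable s1_1
ALTER TABLE s1_1 SET TAG info = '{"k1": "v2", "k2": "w"}';

-- rename the JSON tag on the supertable (the tag itself cannot be added or removed)
ALTER STABLE s1 RENAME TAG info info2;
```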
|
|
||||||
## Other Restrictions
|
## Other Restrictions
|
||||||
|
|
||||||
|
@ -76,7 +80,12 @@ title: JSON Type
|
||||||
|
|
||||||
For example, the SQL statements below are not supported.
|
For example, the SQL statements below are not supported.
|
||||||
|
|
||||||
```sql;
|
```
|
||||||
select jtag->'key' from (select jtag from STable);
|
select jtag->'key' from (select jtag from stable)
|
||||||
select jtag->'key' from (select jtag from STable) where jtag->'key'>0;
|
```
|
||||||
|
|
||||||
|
and
|
||||||
|
|
||||||
|
```
|
||||||
|
select jtag->'key' from (select jtag from stable) where jtag->'key'>0
|
||||||
```
|
```
|
||||||
|
|
|
@ -2,7 +2,7 @@
|
||||||
title: Escape Characters
|
title: Escape Characters
|
||||||
---
|
---
|
||||||
|
|
||||||
Below table is the list of escape characters used in TDengine.
|
## Escape Characters
|
||||||
|
|
||||||
| Escape Character | **Actual Meaning** |
|
| Escape Character | **Actual Meaning** |
|
||||||
| :--------------: | ------------------------ |
|
| :--------------: | ------------------------ |
|
||||||
|
@ -15,11 +15,6 @@ Below table is the list of escape characters used in TDengine.
|
||||||
| `\%` | % see below for details |
|
| `\%` | % see below for details |
|
||||||
| `\_` | \_ see below for details |
|
| `\_` | \_ see below for details |
|
||||||
|
|
||||||
:::note
|
|
||||||
Escape characters are available from version 2.4.0.4 .
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
## Restrictions
|
## Restrictions
|
||||||
|
|
||||||
1. If there are escape characters in identifiers (database name, table name, column name)
|
1. If there are escape characters in identifiers (database name, table name, column name)
|
||||||
|
|
|
@ -1,59 +1,59 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 命名与边界限制
|
sidebar_label: Name and Size Limits
|
||||||
title: 命名与边界限制
|
title: Name and Size Limits
|
||||||
---
|
---
|
||||||
|
|
||||||
## 名称命名规则
|
## Naming Rules
|
||||||
|
|
||||||
1. 合法字符:英文字符、数字和下划线
|
1. Names can include letters, digits, and underscores (_).
|
||||||
2. 允许英文字符或下划线开头,不允许以数字开头
|
2. Names can begin with letters or underscores (_) but not with digits.
|
||||||
3. 不区分大小写
|
3. Names are not case-sensitive.
|
||||||
4. 转义后表(列)名规则:
|
4. Rules for names with escape characters are as follows:
|
||||||
为了兼容支持更多形式的表(列)名,TDengine 引入新的转义符 "`"。可用让表名与关键词不冲突,同时不受限于上述表名称合法性约束检查
|
You can escape a name by enclosing it in backticks (`). In this way, you can reuse keyword names for table names. However, the first three naming rules no longer apply.
|
||||||
转义后的表(列)名同样受到长度限制要求,且长度计算的时候不计算转义符。使用转义字符以后,不再对转义字符中的内容进行大小写统一
|
Table and column names that are enclosed in escape characters are still subject to length limits. When the length of such a name is calculated, the escape characters are not included. Names specified using escape character are case-sensitive.
|
||||||
|
|
||||||
例如:\`aBc\` 和 \`abc\` 是不同的表(列)名,但是 abc 和 aBc 是相同的表(列)名。
|
For example, \`aBc\` and \`abc\` are different table or column names, but "abc" and "aBc" are the same name because both are internally converted to "abc".
|
||||||
需要注意的是转义字符中的内容必须是可打印字符。
|
Only printable characters can be used inside escape characters.
|
||||||
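As a brief illustration of rule 4, a hypothetical table whose name collides with a keyword and uses mixed case could be created and queried with backticks:

```sql
-- `table` would normally clash with a keyword; backticks make it a legal name
CREATE TABLE `table` (ts TIMESTAMP, `Value` INT);

-- the escaped name keeps its case, so it must be written the same way when queried
SELECT `Value` FROM `table`;
```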
|
|
||||||
## 密码合法字符集
|
## Password Rules
|
||||||
|
|
||||||
`[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`
|
`[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`
|
||||||
|
|
||||||
去掉了 `` ‘“`\ `` (单双引号、撇号、反斜杠、空格)
|
The following characters cannot occur in a password: single quotation marks ('), double quotation marks ("), backticks (`), backslashes (\\), and spaces.
|
||||||
|
|
||||||
## 一般限制
|
## General Limits
|
||||||
|
|
||||||
- 数据库名最大长度为 32
|
- The maximum length of a database name is 32 bytes.
|
||||||
- 表名最大长度为 192,不包括数据库名前缀和分隔符
|
- Maximum length of table name is 192 bytes, excluding the database name prefix and the separator.
|
||||||
- 每行数据最大长度 48KB (注意:数据行内每个 BINARY/NCHAR 类型的列还会额外占用 2 个字节的存储位置)
|
- The maximum length of each data row is 48 KB. Note that this limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
|
||||||
- 列名最大长度为 64
|
- The maximum length of a column name is 64 bytes.
|
||||||
- 最多允许 4096 列,最少需要 2 列,第一列必须是时间戳。
|
- Maximum number of columns is 4096. There must be at least 2 columns, and the first column must be timestamp.
|
||||||
- 标签名最大长度为 64
|
- The maximum length of a tag name is 64 bytes
|
||||||
- 最多允许 128 个,至少要有 1 个标签,一个表中标签值的总长度不超过 16KB
|
- Maximum number of tags is 128. There must be at least 1 tag. The total length of tag values cannot exceed 16 KB.
|
||||||
- SQL 语句最大长度 1048576 个字符,也可通过客户端配置参数 maxSQLLength 修改,取值范围 65480 ~ 1048576
|
- Maximum length of single SQL statement is 1 MB (1048576 bytes).
|
||||||
- SELECT 语句的查询结果,最多允许返回 4096 列(语句中的函数调用可能也会占用一些列空间),超限时需要显式指定较少的返回数据列,以避免语句执行报错
|
- At most 4096 columns can be returned by `SELECT`; functions in the query statement also occupy output columns. If the limit is exceeded, an error is returned; explicitly select fewer output columns to avoid it.
|
||||||
- 库的数目,超级表的数目、表的数目,系统不做限制,仅受系统资源限制
|
- The maximum numbers of databases, supertables, and tables are limited only by the available system resources.
|
||||||
- 数据库的副本数只能设置为 1 或 3
|
- The number of replicas can only be 1 or 3.
|
||||||
- 用户名的最大长度是 23 个字节
|
- The maximum length of a username is 23 bytes.
|
||||||
- 用户密码的最大长度是 15 个字节
|
- The maximum length of a password is 15 bytes.
|
||||||
- 总数据行数取决于可用资源
|
- The maximum number of rows depends on system resources.
|
||||||
- 单个数据库的虚拟结点数上限为 1024
|
- The maximum number of vnodes in a database is 1024.
|
||||||
|
|
||||||
## 表(列)名合法性说明
|
## Restrictions of Table/Column Names
|
||||||
|
|
||||||
### TDengine 中的表(列)名命名规则如下:
|
### Name Restrictions of Table/Column
|
||||||
|
|
||||||
只能由字母、数字、下划线构成,数字不能在首位,长度不能超过 192 字节,不区分大小写。这里表名称不包括数据库名的前缀和分隔符。
|
The name of a table or column can only be composed of letters, digits, and underscores, and it cannot start with a digit. The maximum length is 192 bytes. Names are case-insensitive. The name described in this rule does not include the database name prefix or the separator.
|
||||||
|
|
||||||
### 转义后表(列)名规则:
|
### Name Restrictions After Escaping
|
||||||
|
|
||||||
为了兼容支持更多形式的表(列)名,TDengine 引入新的转义符 "`",可以避免表名与关键词的冲突,同时不受限于上述表名合法性约束检查,转义符不计入表名的长度。
|
To support more flexible table or column names, the escape character "\`" is introduced in TDengine. It avoids conflicts between table names and keywords and lifts the above restrictions on table names. The escape character is not counted in the length of a table name.
|
||||||
转义后的表(列)名同样受到长度限制要求,且长度计算的时候不计算转义符。使用转义字符以后,不再对转义字符中的内容进行大小写统一。
|
With escaping, the string inside the escape characters is case-sensitive, i.e. it is not converted to lower case internally. Table names specified using escape characters are therefore case-sensitive.
|
||||||
|
|
||||||
例如:
|
For example:
|
||||||
\`aBc\` 和 \`abc\` 是不同的表(列)名,但是 abc 和 aBc 是相同的表(列)名。
|
\`aBc\` and \`abc\` are different table or column names, but "abc" and "aBc" are the same name because both are internally converted to "abc".
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
转义字符中的内容必须是可打印字符。
|
The characters inside escape characters must be printable characters.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -1,10 +1,11 @@
|
||||||
---
|
---
|
||||||
title: Keywords
|
sidebar_label: Reserved Keywords
|
||||||
|
title: Reserved Keywords
|
||||||
---
|
---
|
||||||
|
|
||||||
There are about 200 keywords reserved by TDengine, they can't be used as the name of database, STable or table with either upper case, lower case or mixed case.
|
## Keyword List
|
||||||
|
|
||||||
## Keywords List
|
There are about 200 keywords reserved by TDengine. They cannot be used as the name of a database, supertable, or table, whether in upper case, lower case, or mixed case. The following list shows all reserved keywords:
|
||||||
|
|
||||||
### A
|
### A
|
||||||
|
|
||||||
|
@ -264,52 +265,12 @@ There are about 200 keywords reserved by TDengine, they can't be used as the nam
|
||||||
- WAL
|
- WAL
|
||||||
- WHERE
|
- WHERE
|
||||||
|
|
||||||
### _
|
### \_
|
||||||
|
|
||||||
- _C0
|
- \_C0
|
||||||
- _QSTART
|
- \_QSTART
|
||||||
- _QSTOP
|
- \_QSTOP
|
||||||
- _QDURATION
|
- \_QDURATION
|
||||||
- _WSTART
|
- \_WSTART
|
||||||
- _WSTOP
|
- \_WSTOP
|
||||||
- _WDURATION
|
- \_WDURATION
|
||||||
|
|
||||||
## Explanations
|
|
||||||
### TBNAME
|
|
||||||
`TBNAME` can be considered as a special tag, which represents the name of the subtable, in a STable.
|
|
||||||
|
|
||||||
Get the table name and tag values of all subtables in a STable.
|
|
||||||
```mysql
|
|
||||||
SELECT TBNAME, location FROM meters;
|
|
||||||
```
|
|
||||||
|
|
||||||
Count the number of subtables in a STable.
|
|
||||||
```mysql
|
|
||||||
SELECT COUNT(TBNAME) FROM meters;
|
|
||||||
```
|
|
||||||
|
|
||||||
Only filter on TAGS can be used in WHERE clause in the above two query statements.
|
|
||||||
```mysql
|
|
||||||
taos> SELECT TBNAME, location FROM meters;
|
|
||||||
tbname | location |
|
|
||||||
==================================================================
|
|
||||||
d1004 | California.SanFrancisco |
|
|
||||||
d1003 | California.SanFrancisco |
|
|
||||||
d1002 | California.LosAngeles |
|
|
||||||
d1001 | California.LosAngeles |
|
|
||||||
Query OK, 4 row(s) in set (0.000881s)
|
|
||||||
|
|
||||||
taos> SELECT COUNT(tbname) FROM meters WHERE groupId > 2;
|
|
||||||
count(tbname) |
|
|
||||||
========================
|
|
||||||
2 |
|
|
||||||
Query OK, 1 row(s) in set (0.001091s)
|
|
||||||
```
|
|
||||||
### _QSTART/_QSTOP/_QDURATION
|
|
||||||
The start, stop and duration of a query time window.
|
|
||||||
|
|
||||||
### _WSTART/_WSTOP/_WDURATION
|
|
||||||
The start, stop and duration of aggegate query by time window, like interval, session window, state window.
|
|
||||||
|
|
||||||
### _c0/_ROWTS
|
|
||||||
_c0 is equal to _ROWTS, it means the first column of a table or STable.
|
|
||||||
|
|
|
@ -1,37 +1,37 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 集群管理
|
sidebar_label: Cluster
|
||||||
title: 集群管理
|
title: Cluster
|
||||||
---
|
---
|
||||||
|
|
||||||
组成 TDengine 集群的物理实体是 dnode (data node 的缩写),它是一个运行在操作系统之上的进程。在 dnode 中可以建立负责时序数据存储的 vnode (virtual node),在多节点集群环境下当某个数据库的 replica 为 3 时,该数据库中的每个 vgroup 由 3 个 vnode 组成;当数据库的 replica 为 1 时,该数据库中的每个 vgroup 由 1 个 vnode 组成。如果要想配置某个数据库为多副本,则集群中的 dnode 数量至少为 3。在 dnode 还可以创建 mnode (management node),单个集群中最多可以创建三个 mnode。在 TDengine 3.0.0.0 中为了支持存算分离,引入了一种新的逻辑节点 qnode (query node),qnode 和 vnode 既可以共存在一个 dnode 中,也可以完全分离在不同的 dnode 上。
|
The physical entities that form TDengine clusters are known as data nodes (dnodes). Each dnode is a process running on the operating system of the physical machine. Dnodes can contain virtual nodes (vnodes), which store time-series data. Virtual nodes are formed into vgroups, which have 1 or 3 vnodes depending on the replica setting. If you want to enable replication on your cluster, it must contain at least three nodes. Dnodes can also contain management nodes (mnodes). Each cluster has up to three mnodes. Finally, dnodes can contain query nodes (qnodes), which compute time-series data, thus separating compute from storage. A single dnode can contain a vnode, qnode, and mnode.
|
||||||
|
|
||||||
## 创建数据节点
|
## Create a Dnode
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE DNODE {dnode_endpoint | dnode_host_name PORT port_val}
|
CREATE DNODE {dnode_endpoint | dnode_host_name PORT port_val}
|
||||||
```
|
```
|
||||||
|
|
||||||
其中 `dnode_endpoint` 是形成 `hostname:port`的格式。也可以分开指定 hostname 和 port。
|
Enter the dnode_endpoint in hostname:port format. You can also specify the hostname and port as separate parameters.
|
||||||
|
|
||||||
实际操作中推荐先创建 dnode,再启动相应的 dnode 进程,这样该 dnode 就可以立即根据其配置文件中的 firstEP 加入集群。每个 dnode 在加入成功后都会被分配一个 ID。
|
Create the dnode before starting the corresponding dnode process. The dnode can then join the cluster based on the value of the firstEp parameter. Each dnode is assigned an ID after it joins a cluster.
|
||||||
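For illustration, assuming a new node whose FQDN is `node2.example.com` and which listens on the default port 6030, either form below could be used:

```sql
-- give the endpoint as a single "fqdn:port" string
CREATE DNODE "node2.example.com:6030";

-- or give the host name and port separately
CREATE DNODE "node2.example.com" PORT 6030;
```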
|
|
||||||
## 查看数据节点
|
## View Dnodes
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW DNODES;
|
SHOW DNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
可以列出集群中所有的数据节点,所列出的字段有 dnode 的 ID, endpoint, status。
|
The preceding SQL command shows all dnodes in the cluster with the ID, endpoint, and status.
|
||||||
|
|
||||||
## 删除数据节点
|
## Delete a DNODE
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP DNODE {dnode_id | dnode_endpoint}
|
DROP DNODE {dnode_id | dnode_endpoint}
|
||||||
```
|
```
|
||||||
|
|
||||||
可以用 dnoe_id 或 endpoint 两种方式从集群中删除一个 dnode。注意删除 dnode 不等于停止相应的进程。实际中推荐先将一个 dnode 删除之后再停止其所对应的进程。
|
You can delete a dnode by its ID or by its endpoint. Note that deleting a dnode does not stop its process. You must stop the process after the dnode is deleted.
|
||||||
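For example (the ID and endpoint below are hypothetical):

```sql
-- drop a dnode by its ID
DROP DNODE 2;

-- or drop it by its endpoint
DROP DNODE "node2.example.com:6030";
```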
|
|
||||||
## 修改数据节点配置
|
## Modify Dnode Configuration
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
ALTER DNODE dnode_id dnode_option
|
ALTER DNODE dnode_id dnode_option
|
||||||
|
@ -62,59 +62,59 @@ dnode_option: {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
上面语法中的这些可修改配置项其配置方式与 dnode 配置文件中的配置方式相同,区别是修改是动态的立即生效,且不需要重启 dnode。
|
The parameters that you can modify through this statement are the same as those located in the dnode configuration file. Modifications that you make through this statement take effect immediately, while modifications to the configuration file take effect when the dnode restarts.
|
||||||
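A minimal sketch, assuming `debugFlag` is one of the modifiable dnode options:

```sql
-- raise the log level of dnode 1; the change takes effect immediately, no restart needed
ALTER DNODE 1 'debugFlag' '143';
```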
|
|
||||||
## 添加管理节点
|
## Add an Mnode
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE MNODE ON DNODE dnode_id
|
CREATE MNODE ON DNODE dnode_id
|
||||||
```
|
```
|
||||||
|
|
||||||
系统启动默认在 firstEP 节点上创建一个 MNODE,用户可以使用此语句创建更多的 MNODE 来提高系统可用性。一个集群最多存在三个 MNODE,一个 DNODE 上只能创建一个 MNODE。
|
TDengine automatically creates an mnode on the firstEp node. You can use this statement to create more mnodes for higher system availability. A cluster can have a maximum of three mnodes. Each dnode can contain only one mnode.
|
||||||
|
|
||||||
## 查看管理节点
|
## View Mnodes
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW MNODES;
|
SHOW MNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
列出集群中所有的管理节点,包括其 ID,所在 DNODE 以及状态。
|
This statement shows all mnodes in the cluster with the ID, dnode, and status.
|
||||||
|
|
||||||
## 删除管理节点
|
## Delete an Mnode
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP MNODE ON DNODE dnode_id;
|
DROP MNODE ON DNODE dnode_id;
|
||||||
```
|
```
|
||||||
|
|
||||||
删除 dnode_id 所指定的 DNODE 上的 MNODE。
|
This statement deletes the mnode located on the specified dnode.
|
||||||
|
|
||||||
## 创建查询节点
|
## Create a Qnode
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE QNODE ON DNODE dnode_id;
|
CREATE QNODE ON DNODE dnode_id;
|
||||||
```
|
```
|
||||||
|
|
||||||
系统启动默认没有 QNODE,用户可以创建 QNODE 来实现计算和存储的分离。一个 DNODE 上只能创建一个 QNODE。一个 DNODE 的 `supportVnodes` 参数如果不为 0,同时又在其上创建上 QNODE,则在该 dnode 中既有负责存储管理的 vnode 又有负责查询计算的 qnode,如果还在该 dnode 上创建了 mnode,则一个 dnode 上最多三种逻辑节点都可以存在。但通过配置也可以使其彻底分离。将一个 dnode 的`supportVnodes`配置为 0,可以选择在其上创建 mnode 或者 qnode 中的一种,这样可以实现三种逻辑节点在物理上的彻底分离。
|
TDengine does not automatically create qnodes on startup. You can create qnodes as necessary to separate compute from storage. Each dnode can contain only one qnode. If a qnode is created on a dnode whose supportVnodes parameter is not 0, vnodes and the qnode coexist on that dnode; if an mnode is also created there, all three types of logical nodes can exist on one dnode. However, you can also configure your cluster so that vnodes, qnodes, and mnodes are located on separate dnodes. If you set supportVnodes to 0 for a dnode, you can then choose to deploy either an mnode or a qnode on it, physically separating the three types of logical nodes.
|
||||||
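A sketch of separating compute from storage, assuming dnode 3 has `supportVnodes` set to 0 in its configuration file so that it hosts no vnodes:

```sql
-- dedicate dnode 3 to query processing only
CREATE QNODE ON DNODE 3;
```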
|
|
||||||
## 查看查询节点
|
## View Qnodes
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW QNODES;
|
SHOW QNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
列出集群中所有查询节点,包括 ID,及所在 DNODE。
|
This statement shows all qnodes in the cluster with the ID and dnode.
|
||||||
|
|
||||||
## 删除查询节点
|
## Delete a Qnode
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP QNODE ON DNODE dnode_id;
|
DROP QNODE ON DNODE dnode_id;
|
||||||
```
|
```
|
||||||
|
|
||||||
删除 ID 为 dnode_id 的 DNODE 上的 QNODE,但并不会影响该 dnode 的状态。
|
This statement deletes the qnode located on the specified dnode. This does not affect the status of the dnode.
|
||||||
|
|
||||||
## 修改客户端配置
|
## Modify Client Configuration
|
||||||
|
|
||||||
如果将客户端也看作广义的集群的一部分,可以通过如下命令动态修改客户端配置参数。
|
The client configuration can also be modified in a similar way to other cluster components.
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
ALTER LOCAL local_option
|
ALTER LOCAL local_option
|
||||||
|
@ -129,26 +129,26 @@ local_option: {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
上面语法中的参数与在配置文件中配置客户端的用法相同,但不需要重启客户端,修改后立即生效。
|
The parameters that you can modify through this statement are the same as those located in the client configuration file. Modifications that you make through this statement take effect immediately, while modifications to the configuration file take effect when the client restarts.
|
||||||
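For example, assuming `debugFlag` is among the modifiable client options:

```sql
-- change the client log level for the current client without restarting it
ALTER LOCAL 'debugFlag' '143';
```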
|
|
||||||
## 查看客户端配置
|
## View Client Configuration
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW LOCAL VARIABLES;
|
SHOW LOCAL VARIABLES;
|
||||||
```
|
```
|
||||||
|
|
||||||
## 合并 vgroup
|
## Combine Vgroups
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
MERGE VGROUP vgroup_no1 vgroup_no2;
|
MERGE VGROUP vgroup_no1 vgroup_no2;
|
||||||
```
|
```
|
||||||
|
|
||||||
如果在系统实际运行一段时间后,因为不同时间线的数据特征不同导致在 vgroups 之间的数据和负载分布不均衡,可以通过合并或拆分 vgroups 的方式逐步实现负载均衡。
|
If load and data are not properly balanced among vgroups because data in different time lines has different characteristics, you can combine or separate vgroups to gradually rebalance the load.
|
||||||
|
|
||||||
## 拆分 vgroup
|
## Separate Vgroups
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SPLIT VGROUP vgroup_no;
|
SPLIT VGROUP vgroup_no;
|
||||||
```
|
```
|
||||||
|
|
||||||
会创建一个新的 vgroup,并将指定 vgroup 中的数据按照一致性 HASH 迁移一部分到新的 vgroup 中。此过程中,原 vgroup 可以正常提供读写服务。
|
This statement creates a new vgroup and migrates part of the data from the original vgroup to the new vgroup with consistent hashing. During this process, the original vgroup can continue to provide services normally.
|
||||||
|
|
|
@ -1,247 +1,279 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 元数据库
|
sidebar_label: Metadata
|
||||||
title: 元数据库
|
title: Information_Schema Database
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数据库元数据、数据库系统信息和状态的访问,例如数据库或表的名称,当前执行的 SQL 语句等。该数据库存储有关 TDengine 维护的所有其他数据库的信息。它包含多个只读表。实际上,这些表都是视图,而不是基表,因此没有与它们关联的文件。所以对这些表只能查询,不能进行 INSERT 等写入操作。`INFORMATION_SCHEMA` 数据库旨在以一种更一致的方式来提供对 TDengine 支持的各种 SHOW 语句(如 SHOW TABLES、SHOW DATABASES)所提供的信息的访问。与 SHOW 语句相比,使用 SELECT ... FROM INFORMATION_SCHEMA.tablename 具有以下优点:
|
TDengine includes a built-in database named `INFORMATION_SCHEMA` to provide access to database metadata, system information, and status information. This information includes database names, table names, and currently running SQL statements. All information related to TDengine maintenance is stored in this database. It contains several read-only tables. These tables are more accurately described as views, and they do not correspond to specific files. You can query these tables but cannot write data to them. The INFORMATION_SCHEMA database is intended to provide a unified method for SHOW commands to access data. However, using SELECT ... FROM INFORMATION_SCHEMA.tablename offers several advantages over SHOW commands:
|
||||||
|
|
||||||
1. 可以使用 USE 语句将 INFORMATION_SCHEMA 设为默认数据库
|
1. You can use a USE statement to specify the INFORMATION_SCHEMA database as the current database.
|
||||||
2. 可以使用 SELECT 语句熟悉的语法,只需要学习一些表名和列名
|
2. You can use the familiar SELECT syntax to access information, provided that you know the table and column names.
|
||||||
3. 可以对查询结果进行筛选、排序等操作。事实上,可以使用任意 TDengine 支持的 SELECT 语句对 INFORMATION_SCHEMA 中的表进行查询
|
3. You can filter and order the query results. More generally, you can use any SELECT syntax that TDengine supports to query the INFORMATION_SCHEMA database.
|
||||||
4. TDengine 在后续演进中可以灵活的添加已有 INFORMATION_SCHEMA 中表的列,而不用担心对既有业务系统造成影响
|
4. Future versions of TDengine can add new columns to INFORMATION_SCHEMA tables without affecting existing business systems.
|
||||||
5. 与其他数据库系统更具互操作性。例如,Oracle 数据库用户熟悉查询 Oracle 数据字典中的表
|
5. It is easier for users coming from other database management systems. For example, Oracle users can query data dictionary tables.
|
||||||
|
|
||||||
Note: 由于 SHOW 语句已经被开发者熟悉和广泛使用,所以它们仍然被保留。
|
Note: SHOW statements are still supported for the convenience of existing users.
|
||||||
|
|
||||||
本章将详细介绍 `INFORMATION_SCHEMA` 这个内置元数据库中的表和表结构。
|
This document introduces the tables of INFORMATION_SCHEMA and their structure.
|
||||||
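As a quick illustration of points 2 and 3, the sketch below filters and orders the `INS_DATABASES` table that is described later in this document:

```sql
-- list user databases, largest number of tables first
SELECT name, ntables, vgroups
FROM information_schema.ins_databases
WHERE name != 'information_schema' AND name != 'performance_schema'
ORDER BY ntables DESC;
```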
|
|
||||||
## INS_DNODES
|
## INS_DNODES
|
||||||
|
|
||||||
提供 dnode 的相关信息。也可以使用 SHOW DNODES 来查询这些信息。
|
Provides information about dnodes. Similar to SHOW DNODES.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :------------: | ------------ | ------------------------- |
|
| --- | :------------: | ------------ | ------------------------- |
|
||||||
| 1 | vnodes | SMALLINT | dnode 中的实际 vnode 个数 |
|
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode |
|
||||||
| 2 | support_vnodes | SMALLINT | 最多支持的 vnode 个数 |
|
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes supported by the dnode |
|
||||||
| 3 | status | BINARY(10) | 当前状态 |
|
| 3 | status | BINARY(10) | Current status |
|
||||||
| 4 | note | BINARY(256) | 离线原因等信息 |
|
| 4 | note | BINARY(256) | Reason for going offline or other information |
|
||||||
| 5 | id | SMALLINT | dnode id |
|
| 5 | id | SMALLINT | Dnode ID |
|
||||||
| 6 | endpoint | BINARY(134) | dnode 的地址 |
|
| 6 | endpoint | BINARY(134) | Dnode endpoint |
|
||||||
| 7 | create | TIMESTAMP | 创建时间 |
|
| 7 | create | TIMESTAMP | Creation time |
|
||||||
|
|
||||||
## INS_MNODES
|
## INS_MNODES
|
||||||
|
|
||||||
提供 mnode 的相关信息。也可以使用 SHOW MNODES 来查询这些信息。
|
Provides information about mnodes. Similar to SHOW MNODES.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------ | ------------------ |
|
| --- | :---------: | ------------ | ------------------ |
|
||||||
| 1 | id | SMALLINT | mnode id |
|
| 1 | id | SMALLINT | Mnode ID |
|
||||||
| 2 | endpoint | BINARY(134) | mnode 的地址 |
|
| 2 | endpoint | BINARY(134) | Mnode endpoint |
|
||||||
| 3 | role | BINARY(10) | 当前角色 |
|
| 3 | role | BINARY(10) | Current role |
|
||||||
| 4 | role_time | TIMESTAMP | 成为当前角色的时间 |
|
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
|
||||||
| 5 | create_time | TIMESTAMP | 创建时间 |
|
| 5 | create_time | TIMESTAMP | Creation time |
|
||||||
|
|
||||||
## INS_MODULES
|
## INS_MODULES
|
||||||
|
|
||||||
提供组件的相关信息。也可以使用 SHOW MODULES 来查询这些信息
|
Provides information about modules. Similar to SHOW MODULES.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :------: | ------------ | ---------- |
|
| --- | :------: | ------------ | ---------- |
|
||||||
| 1 | id | SMALLINT | module id |
|
| 1 | id | SMALLINT | Module ID |
|
||||||
| 2 | endpoint | BINARY(134) | 组件的地址 |
|
| 2 | endpoint | BINARY(134) | Module endpoint |
|
||||||
| 3 | module | BINARY(10) | 组件状态 |
|
| 3 | module | BINARY(10) | Module status |
|
||||||
|
|
||||||
## INS_QNODES
|
## INS_QNODES
|
||||||
|
|
||||||
当前系统中 QNODE 的信息。也可以使用 SHOW QNODES 来查询这些信息。
|
Provides information about qnodes. Similar to SHOW QNODES.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------ | ------------ |
|
| --- | :---------: | ------------ | ------------ |
|
||||||
| 1 | id | SMALLINT | qnode id |
|
| 1 | id | SMALLINT | Qnode ID |
|
||||||
| 2 | endpoint | BINARY(134) | qnode 的地址 |
|
| 2 | endpoint | BINARY(134) | Qnode endpoint |
|
||||||
| 3 | create_time | TIMESTAMP | 创建时间 |
|
| 3 | create_time | TIMESTAMP | Creation time |
|
||||||
|
|
||||||
## INS_CLUSTER
|
## INS_CLUSTER
|
||||||
|
|
||||||
存储集群相关信息。
|
Provides information about the cluster.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------ | ---------- |
|
| --- | :---------: | ------------ | ---------- |
|
||||||
| 1 | id | BIGINT | cluster id |
|
| 1 | id | BIGINT | Cluster ID |
|
||||||
| 2 | name | BINARY(134) | 集群名称 |
|
| 2 | name | BINARY(134) | Cluster name |
|
||||||
| 3 | create_time | TIMESTAMP | 创建时间 |
|
| 3 | create_time | TIMESTAMP | Creation time |
|
||||||
|
|
||||||
## INS_DATABASES
|
## INS_DATABASES
|
||||||
|
|
||||||
提供用户创建的数据库对象的相关信息。也可以使用 SHOW DATABASES 来查询这些信息。
|
Provides information about user-created databases. Similar to SHOW DATABASES.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :------------------: | ---------------- | ------------------------------------------------ |
|
| --- | :------------------: | ---------------- | ------------------------------------------------ |
|
||||||
| 1 | name | BINARY(32) | 数据库名 |
|
| 1 | name | BINARY(32) | Database name |
|
||||||
| 2 | create_time | TIMESTAMP | 创建时间 |
|
| 2 | create_time | TIMESTAMP | Creation time |
|
||||||
| 3 | ntables | INT | 数据库中表的数量,包含子表和普通表但不包含超级表 |
|
| 3 | ntables | INT | Number of standard tables and subtables (not including supertables) |
|
||||||
| 4 | vgroups | INT | 数据库中有多少个 vgroup |
|
| 4 | vgroups | INT | Number of vgroups |
|
||||||
| 6 | replica | INT | 副本数 |
|
| 6 | replica | INT | Number of replicas |
|
||||||
| 7 | quorum | BINARY(3) | 强一致性 |
|
| 7 | quorum | BINARY(3) | Strong consistency |
|
||||||
| 8 | duration | INT | 单文件存储数据的时间跨度 |
|
| 8 | duration | INT | Duration for storage of single files |
|
||||||
| 9 | keep | INT | 数据保留时长 |
|
| 9 | keep | INT | Data retention period |
|
||||||
| 10 | buffer | INT | 每个 vnode 写缓存的内存块大小,单位 MB |
|
| 10 | buffer | INT | Write cache size per vnode, in MB |
|
||||||
| 11 | pagesize | INT | 每个 VNODE 中元数据存储引擎的页大小,单位为 KB |
|
| 11 | pagesize | INT | Page size for vnode metadata storage engine, in KB |
|
||||||
| 12 | pages | INT | 每个 vnode 元数据存储引擎的缓存页个数 |
|
| 12 | pages | INT | Number of pages per vnode metadata storage engine |
|
||||||
| 13 | minrows | INT | 文件块中记录的最大条数 |
|
| 13 | minrows | INT | Minimum number of records per file block |
|
||||||
| 14 | maxrows | INT | 文件块中记录的最小条数 |
|
| 14 | maxrows | INT | Maximum number of records per file block |
|
||||||
| 15 | comp | INT | 数据压缩方式 |
|
| 15 | comp | INT | Compression method |
|
||||||
| 16 | precision | BINARY(2) | 时间分辨率 |
|
| 16 | precision | BINARY(2) | Time precision |
|
||||||
| 17 | status | BINARY(10) | 数据库状态 |
|
| 17 | status | BINARY(10) | Current database status |
|
||||||
| 18 | retention | BINARY (60) | 数据的聚合周期和保存时长 |
|
| 18 | retention | BINARY (60) | Aggregation interval and retention period |
|
||||||
| 19 | single_stable | BOOL | 表示此数据库中是否只可以创建一个超级表 |
|
| 19 | single_stable | BOOL | Whether the database is restricted to a single supertable |
|
||||||
| 20 | cachemodel | BINARY(60) | 表示是否在内存中缓存子表的最近数据 |
|
| 20 | cachemodel | BINARY(60) | Caching method for the newest data |
|
||||||
| 21 | cachesize | INT | 表示每个 vnode 中用于缓存子表最近数据的内存大小 |
|
| 21 | cachesize | INT | Memory per vnode used for caching the newest data |
|
||||||
| 22 | wal_level | INT | WAL 级别 |
|
| 22 | wal_level | INT | WAL level |
|
||||||
| 23 | wal_fsync_period | INT | 数据落盘周期 |
|
| 23 | wal_fsync_period | INT | Interval at which WAL is written to disk |
|
||||||
| 24 | wal_retention_period | INT | WAL 的保存时长 |
|
| 24 | wal_retention_period | INT | WAL retention period |
|
||||||
| 25 | wal_retention_size | INT | WAL 的保存上限 |
|
| 25 | wal_retention_size | INT | Maximum WAL size |
|
||||||
| 26 | wal_roll_period | INT | wal 文件切换时长 |
|
| 26 | wal_roll_period | INT | WAL rotation period |
|
||||||
| 27 | wal_segment_size | wal 单个文件大小 |
|
| 27 | wal_segment_size | WAL file size |
|
||||||
|
|
||||||
## INS_FUNCTIONS
|
## INS_FUNCTIONS
|
||||||
|
|
||||||
用户创建的自定义函数的信息。
|
Provides information about user-defined functions.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------ | -------------- |
|
| --- | :---------: | ------------ | -------------- |
|
||||||
| 1 | name | BINARY(64) | 函数名 |
|
| 1 | name | BINARY(64) | Function name |
|
||||||
| 2 | comment | BINARY(255) | 补充说明 |
|
| 2 | comment | BINARY(255) | Function description |
|
||||||
| 3 | aggregate | INT | 是否为聚合函数 |
|
| 3 | aggregate | INT | Whether the UDF is an aggregate function |
|
||||||
| 4 | output_type | BINARY(31) | 输出类型 |
|
| 4 | output_type | BINARY(31) | Output data type |
|
||||||
| 5 | create_time | TIMESTAMP | 创建时间 |
|
| 5 | create_time | TIMESTAMP | Creation time |
|
||||||
| 6 | code_len | INT | 代码长度 |
|
| 6 | code_len | INT | Length of the source code |
|
||||||
| 7 | bufsize | INT | buffer 大小 |
|
| 7 | bufsize | INT | Buffer size |
|
||||||
|
|
||||||
## INS_INDEXES
|
## INS_INDEXES
|
||||||
|
|
||||||
提供用户创建的索引的相关信息。也可以使用 SHOW INDEX 来查询这些信息。
|
Provides information about user-created indices. Similar to SHOW INDEX.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- |
|
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- |
|
||||||
| 1 | db_name | BINARY(32) | 包含此索引的表所在的数据库名 |
|
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
|
||||||
| 2 | table_name | BINARY(192) | 包含此索引的表的名称 |
|
| 2 | table_name | BINARY(192) | Table containing the specified index |
|
||||||
| 3 | index_name | BINARY(192) | 索引名 |
|
| 3 | index_name | BINARY(192) | Index name |
|
||||||
| 4 | column_name | BINARY(64) | 建索引的列的列名 |
|
| 4 | column_name | BINARY(64) | Indexed column |
|
||||||
| 5 | index_type | BINARY(10) | 目前有 SMA 和 FULLTEXT |
|
| 5 | index_type | BINARY(10) | SMA or FULLTEXT index |
|
||||||
| 6 | index_extensions | BINARY(256) | 索引的额外信息。对 SMA 类型的索引,是函数名的列表。对 FULLTEXT 类型的索引为 NULL。 |
|
| 6 | index_extensions | BINARY(256) | Other information. For SMA indices, this is a list of function names. For FULLTEXT indices, this is null. |
|
||||||
|
|
||||||
## INS_STABLES
|
## INS_STABLES
|
||||||
|
|
||||||
提供用户创建的超级表的相关信息。
|
Provides information about supertables.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :-----------: | ------------ | ------------------------ |
|
| --- | :-----------: | ------------ | ------------------------ |
|
||||||
| 1 | stable_name | BINARY(192) | 超级表表名 |
|
| 1 | stable_name | BINARY(192) | Supertable name |
|
||||||
| 2 | db_name | BINARY(64) | 超级表所在的数据库的名称 |
|
| 2 | db_name | BINARY(64) | Database containing the supertable |
|
||||||
| 3 | create_time | TIMESTAMP | 创建时间 |
|
| 3 | create_time | TIMESTAMP | Creation time |
|
||||||
| 4 | columns | INT | 列数目 |
|
| 4 | columns | INT | Number of columns |
|
||||||
| 5 | tags | INT | 标签数目 |
|
| 5 | tags | INT | Number of tags |
|
||||||
| 6 | last_update | TIMESTAMP | 最后更新时间 |
|
| 6 | last_update | TIMESTAMP | Last updated time |
|
||||||
| 7 | table_comment | BINARY(1024) | 表注释 |
|
| 7 | table_comment | BINARY(1024) | Table description |
|
||||||
| 8 | watermark | BINARY(64) | 窗口的关闭时间 |
|
| 8 | watermark | BINARY(64) | Window closing time |
|
||||||
| 9 | max_delay | BINARY(64) | 推送计算结果的最大延迟 |
|
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results |
|
||||||
| 10 | rollup | BINARY(128) | rollup 聚合函数 |
|
| 10 | rollup | BINARY(128) | Rollup aggregate function |
|
||||||
|
|
||||||
## INS_TABLES
|
## INS_TABLES
|
||||||
|
|
||||||
提供用户创建的普通表和子表的相关信息
|
Provides information about standard tables and subtables.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :-----------: | ------------ | ---------------- |
|
| --- | :-----------: | ------------ | ---------------- |
|
||||||
| 1 | table_name | BINARY(192) | 表名 |
|
| 1 | table_name | BINARY(192) | Table name |
|
||||||
| 2 | db_name | BINARY(64) | 数据库名 |
|
| 2 | db_name | BINARY(64) | Database name |
|
||||||
| 3 | create_time | TIMESTAMP | 创建时间 |
|
| 3 | create_time | TIMESTAMP | Creation time |
|
||||||
| 4 | columns | INT | 列数目 |
|
| 4 | columns | INT | Number of columns |
|
||||||
| 5 | stable_name | BINARY(192) | 所属的超级表表名 |
|
| 5 | stable_name | BINARY(192) | Supertable name |
|
||||||
| 6 | uid | BIGINT | 表 id |
|
| 6 | uid | BIGINT | Table ID |
|
||||||
| 7 | vgroup_id | INT | vgroup id |
|
| 7 | vgroup_id | INT | Vgroup ID |
|
||||||
| 8 | ttl | INT | 表的生命周期 |
|
| 8 | ttl | INT | Table time-to-live |
|
||||||
| 9 | table_comment | BINARY(1024) | 表注释 |
|
| 9 | table_comment | BINARY(1024) | Table description |
|
||||||
| 10 | type | BINARY(20) | 表类型 |
|
| 10 | type | BINARY(20) | Table type |
|
||||||
|
|
||||||
## INS_TAGS
|
## INS_TAGS
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------- | ---------------------- |
|
| --- | :---------: | ------------- | ---------------------- |
|
||||||
| 1 | table_name | BINARY(192) | 表名 |
|
| 1 | table_name | BINARY(192) | Table name |
|
||||||
| 2 | db_name | BINARY(64) | 该表所在的数据库的名称 |
|
| 2 | db_name | BINARY(64) | Database name |
|
||||||
| 3 | stable_name | BINARY(192) | 所属的超级表表名 |
|
| 3 | stable_name | BINARY(192) | Supertable name |
|
||||||
| 4 | tag_name | BINARY(64) | tag 的名称 |
|
| 4 | tag_name | BINARY(64) | Tag name |
|
||||||
| 5 | tag_type | BINARY(64) | tag 的类型 |
|
| 5 | tag_type | BINARY(64) | Tag type |
|
||||||
| 6 | tag_value | BINARY(16384) | tag 的值 |
|
| 6 | tag_value | BINARY(16384) | Tag value |
|
||||||
|
|
||||||
## INS_USERS
|
## INS_USERS
|
||||||
|
|
||||||
提供系统中创建的用户的相关信息。
|
Provides information about TDengine users.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------ | -------- |
|
| --- | :---------: | ------------ | -------- |
|
||||||
| 1 | user_name | BINARY(23) | 用户名 |
|
| 1 | user_name | BINARY(23) | User name |
|
||||||
| 2 | privilege | BINARY(256) | 权限 |
|
| 2 | privilege | BINARY(256) | User permissions |
|
||||||
| 3 | create_time | TIMESTAMP | 创建时间 |
|
| 3 | create_time | TIMESTAMP | Creation time |
|
||||||
|
|
||||||
## INS_GRANTS
|
## INS_GRANTS
|
||||||
|
|
||||||
提供企业版授权的相关信息。
|
Provides information about TDengine Enterprise Edition permissions.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :---------: | ------------ | -------------------------------------------------- |
|
| --- | :---------: | ------------ | -------------------------------------------------- |
|
||||||
| 1 | version | BINARY(9) | 企业版授权说明:official(官方授权的)/trial(试用的) |
|
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
|
||||||
| 2 | cpu_cores | BINARY(9) | 授权使用的 CPU 核心数量 |
|
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
|
||||||
| 3 | dnodes | BINARY(10) | 授权使用的 dnode 节点数量 |
|
| 3 | dnodes | BINARY(10) | Dnodes included in license |
|
||||||
| 4 | streams | BINARY(10) | 授权创建的流数量 |
|
| 4 | streams | BINARY(10) | Streams included in license |
|
||||||
| 5 | users | BINARY(10) | 授权创建的用户数量 |
|
| 5 | users | BINARY(10) | Users included in license |
|
||||||
| 6 | accounts | BINARY(10) | 授权创建的帐户数量 |
|
| 6 | accounts | BINARY(10) | Accounts included in license |
|
||||||
| 7 | storage | BINARY(21) | 授权使用的存储空间大小 |
|
| 7 | storage | BINARY(21) | Storage space included in license |
|
||||||
| 8 | connections | BINARY(21) | 授权使用的客户端连接数量 |
|
| 8 | connections | BINARY(21) | Client connections included in license |
|
||||||
| 9 | databases | BINARY(11) | 授权使用的数据库数量 |
|
| 9 | databases | BINARY(11) | Databases included in license |
|
||||||
| 10 | speed | BINARY(9) | 授权使用的数据点每秒写入数量 |
|
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
|
||||||
| 11 | querytime | BINARY(9) | 授权使用的查询总时长 |
|
| 11 | querytime | BINARY(9) | Total query time specified in license |
|
||||||
| 12 | timeseries | BINARY(21) | 授权使用的测点数量 |
|
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
|
||||||
| 13 | expired | BINARY(5) | 是否到期,true:到期,false:未到期 |
|
| 13 | expired | BINARY(5) | Whether the license has expired |
|
||||||
| 14 | expire_time | BINARY(19) | 试用期到期时间 |
|
| 14 | expire_time | BINARY(19) | When the trial period expires |
|
||||||
|
|
||||||
## INS_VGROUPS
|
## INS_VGROUPS
|
||||||
|
|
||||||
系统中所有 vgroups 的信息。
|
Provides information about vgroups.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :-------: | ------------ | ------------------------------------------------------ |
|
| --- | :-------: | ------------ | ------------------------------------------------------ |
|
||||||
| 1 | vgroup_id | INT | vgroup id |
|
| 1 | vgroup_id | INT | Vgroup ID |
|
||||||
| 2 | db_name | BINARY(32) | 数据库名 |
|
| 2 | db_name | BINARY(32) | Database name |
|
||||||
| 3 | tables | INT | 此 vgroup 内有多少表 |
|
| 3 | tables | INT | Tables in vgroup |
|
||||||
| 4 | status | BINARY(10) | 此 vgroup 的状态 |
|
| 4 | status | BINARY(10) | Vgroup status |
|
||||||
| 5 | v1_dnode | INT | 第一个成员所在的 dnode 的 id |
|
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
|
||||||
| 6 | v1_status | BINARY(10) | 第一个成员的状态 |
|
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
|
||||||
| 7 | v2_dnode | INT | 第二个成员所在的 dnode 的 id |
|
| 7 | v2_dnode | INT | Dnode ID of second vgroup member |
|
||||||
| 8 | v2_status | BINARY(10) | 第二个成员的状态 |
|
| 8 | v2_status | BINARY(10) | Status of second vgroup member |
|
||||||
| 9 | v3_dnode | INT | 第三个成员所在的 dnode 的 id |
|
| 9 | v3_dnode | INT | Dnode ID of third vgroup member |
|
||||||
| 10 | v3_status | BINARY(10) | 第三个成员的状态 |
|
| 10 | v3_status | BINARY(10) | Status of third vgroup member |
|
||||||
| 11 | nfiles | INT | 此 vgroup 中数据/元数据文件的数量 |
|
| 11 | nfiles | INT | Number of data and metadata files in the vgroup |
|
||||||
| 12 | file_size | INT | 此 vgroup 中数据/元数据文件的大小 |
|
| 12 | file_size | INT | Size of the data and metadata files in the vgroup |
|
||||||
| 13 | tsma | TINYINT | 此 vgroup 是否专用于 Time-range-wise SMA,1: 是, 0: 否 |
|
| 13 | tsma | TINYINT | Whether time-range-wise SMA is enabled. 1 means enabled; 0 means disabled. |
|
||||||
|
|
||||||
## INS_CONFIGS
|
## INS_CONFIGS
|
||||||
|
|
||||||
系统配置参数。
|
Provides system configuration information.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :------: | ------------ | ------------ |
|
| --- | :------: | ------------ | ------------ |
|
||||||
| 1 | name | BINARY(32) | 配置项名称 |
|
| 1 | name | BINARY(32) | Parameter |
|
||||||
| 2 | value | BINARY(64) | 该配置项的值 |
|
| 2 | value | BINARY(64) | Value |
|
||||||
|
|
||||||
## INS_DNODE_VARIABLES
|
## INS_DNODE_VARIABLES
|
||||||
|
|
||||||
系统中每个 dnode 的配置参数。
|
Provides dnode configuration information.
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
| --- | :------: | ------------ | ------------ |
|
| --- | :------: | ------------ | ------------ |
|
||||||
| 1 | dnode_id | INT | dnode 的 ID |
|
| 1 | dnode_id | INT | Dnode ID |
|
||||||
| 2 | name | BINARY(32) | 配置项名称 |
|
| 2 | name | BINARY(32) | Parameter |
|
||||||
| 3 | value | BINARY(64) | 该配置项的值 |
|
| 3 | value | BINARY(64) | Value |
|
||||||
|
|
||||||
|
## INS_TOPICS
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :---------: | ------------ | ------------------------------ |
|
||||||
|
| 1 | topic_name | BINARY(192) | Topic name |
|
||||||
|
| 2 | db_name | BINARY(64) | Database for the topic |
|
||||||
|
| 3 | create_time | TIMESTAMP | Creation time |
|
||||||
|
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
|
||||||
|
|
||||||
|
## INS_SUBSCRIPTIONS
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :------------: | ------------ | ------------------------ |
|
||||||
|
| 1 | topic_name | BINARY(204) | Subscribed topic |
|
||||||
|
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
|
||||||
|
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
|
||||||
|
| 4 | consumer_id | BIGINT | Consumer ID |
|
||||||
|
|
||||||
|
## INS_STREAMS
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :----------: | ------------ | --------------------------------------- |
|
||||||
|
| 1 | stream_name | BINARY(64) | Stream name |
|
||||||
|
| 2 | create_time | TIMESTAMP | Creation time |
|
||||||
|
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
|
||||||
|
| 4 | status | BINARY(20) | Current status |
|
||||||
|
| 5 | source_db | BINARY(64) | Source database |
|
||||||
|
| 6 | target_db | BINARY(64) | Target database |
|
||||||
|
| 7 | target_table | BINARY(192) | Target table |
|
||||||
|
| 8 | watermark | BIGINT | Watermark (see stream processing documentation) |
|
||||||
|
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation) |
|
||||||
|
|
|
@ -0,0 +1,97 @@
|
||||||
|
---
|
||||||
|
sidebar_label: Statistics
|
||||||
|
title: Performance_Schema Database
|
||||||
|
---
|
||||||
|
|
||||||
|
TDengine includes a built-in database named `PERFORMANCE_SCHEMA` to provide access to database performance statistics. This document introduces the tables of PERFORMANCE_SCHEMA and their structure.
|
||||||
|
|
||||||
|
## PERF_APP
|
||||||
|
|
||||||
|
Provides information about clients (such as applications) that connect to the cluster. Similar to SHOW APPS.
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :----------: | ------------ | ------------------------------- |
|
||||||
|
| 1 | app_id | UBIGINT | Client ID |
|
||||||
|
| 2 | ip | BINARY(16) | Client IP address |
|
||||||
|
| 3 | pid | INT | Client process |
|
||||||
|
| 4 | name | BINARY(24) | Client name |
|
||||||
|
| 5 | start_time | TIMESTAMP | Time when client was started |
|
||||||
|
| 6 | insert_req | UBIGINT | Insert requests |
|
||||||
|
| 7 | insert_row | UBIGINT | Rows inserted |
|
||||||
|
| 8 | insert_time | UBIGINT | Time spent processing insert requests in microseconds |
|
||||||
|
| 9 | insert_bytes | UBIGINT | Size of data inserted in bytes |
|
||||||
|
| 10 | fetch_bytes | UBIGINT | Size of query results in bytes |
|
||||||
|
| 11 | query_time | UBIGINT | Time spent processing query requests |
|
||||||
|
| 12 | slow_query | UBIGINT | Number of slow queries (greater than or equal to 3 seconds) |
|
||||||
|
| 13 | total_req | UBIGINT | Total requests |
|
||||||
|
| 14 | current_req | UBIGINT | Requests currently being processed |
|
||||||
|
| 15 | last_access | TIMESTAMP | Last update time |
|
||||||
|
|
||||||
|
## PERF_CONNECTIONS
|
||||||
|
|
||||||
|
Provides information about connections to the database. Similar to SHOW CONNECTIONS.
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :---------: | ------------ | -------------------------------------------------- |
|
||||||
|
| 1 | conn_id | INT | Connection ID |
|
||||||
|
| 2 | user | BINARY(24) | User name |
|
||||||
|
| 3 | app | BINARY(24) | Client name |
|
||||||
|
| 4 | pid | UINT | Client process ID on client device that initiated the connection |
|
||||||
|
| 5 | end_point | BINARY(128) | Client endpoint |
|
||||||
|
| 6 | login_time | TIMESTAMP | Login time |
|
||||||
|
| 7 | last_access | TIMESTAMP | Last update time |
|
||||||
|
|
||||||
|
## PERF_QUERIES
|
||||||
|
|
||||||
|
Provides information about SQL queries currently running. Similar to SHOW QUERIES.
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :----------: | ------------ | ---------------------------- |
|
||||||
|
| 1 | kill_id | UBIGINT | ID used to stop the query |
|
||||||
|
| 2 | query_id | INT | Query ID |
|
||||||
|
| 3 | conn_id | UINT | Connection ID |
|
||||||
|
| 4 | app | BINARY(24) | Client name |
|
||||||
|
| 5 | pid | INT | Client process ID on client device |
|
||||||
|
| 6 | user | BINARY(24) | User name |
|
||||||
|
| 7 | end_point | BINARY(16) | Client endpoint |
|
||||||
|
| 8 | create_time | TIMESTAMP | Creation time |
|
||||||
|
| 9 | exec_usec | BIGINT | Elapsed time |
|
||||||
|
| 10 | stable_query | BOOL | Whether the query is on a supertable |
|
||||||
|
| 11 | sub_num | INT | Number of subqueries |
|
||||||
|
| 12 | sub_status | BINARY(1000) | Subquery status |
|
||||||
|
| 13 | sql | BINARY(1024) | SQL statement |
|
||||||
|
|
||||||
|
## PERF_CONSUMERS
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :------------: | ------------ | ----------------------------------------------------------- |
|
||||||
|
| 1 | consumer_id | BIGINT | Consumer ID |
|
||||||
|
| 2 | consumer_group | BINARY(192) | Consumer group |
|
||||||
|
| 3 | client_id | BINARY(192) | Client ID (user-defined) |
|
||||||
|
| 4 | status | BINARY(20) | Consumer status |
|
||||||
|
| 5 | topics | BINARY(204) | Subscribed topic. Returns one row for each topic. |
|
||||||
|
| 6 | up_time | TIMESTAMP | Time of first connection to TDengine Server |
|
||||||
|
| 7 | subscribe_time | TIMESTAMP | Time of first subscription |
|
||||||
|
| 8 | rebalance_time | TIMESTAMP | Time of first rebalance triggering |
|
||||||
|
|
||||||
|
## PERF_TRANS
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :--------------: | ------------ | -------------------------------------------------------------- |
|
||||||
|
| 1 | id | INT | ID of the transaction currently running |
|
||||||
|
| 2 | create_time | TIMESTAMP | Creation time |
|
||||||
|
| 3 | stage | BINARY(12) | Transaction stage (redoAction, undoAction, or commit) |
|
||||||
|
| 4 | db1 | BINARY(64) | First database having a conflict with the transaction |
|
||||||
|
| 5 | db2 | BINARY(64) | Second database having a conflict with the transaction |
|
||||||
|
| 6 | failed_times | INT | Times the transaction has failed |
|
||||||
|
| 7 | last_exec_time | TIMESTAMP | Previous time the transaction was run |
|
||||||
|
| 8 | last_action_info | BINARY(511) | Reason for failure on previous run |
|
||||||
|
|
||||||
|
## PERF_SMAS
|
||||||
|
|
||||||
|
| # | **Column** | **Data Type** | **Description** |
|
||||||
|
| --- | :---------: | ------------ | ------------------------------------------- |
|
||||||
|
| 1 | sma_name | BINARY(192) | Time-range-wise SMA name |
|
||||||
|
| 2 | create_time | TIMESTAMP | Creation time |
|
||||||
|
| 3 | stable_name | BINARY(192) | Supertable name |
|
||||||
|
| 4 | vgroup_id | INT | Dedicated vgroup ID |
|
|
@ -1,9 +1,9 @@
|
||||||
---
|
---
|
||||||
sidebar_label: SHOW 命令
|
sidebar_label: SHOW Statement
|
||||||
title: 使用 SHOW 命令查看系统元数据
|
title: SHOW Statement for Metadata
|
||||||
---
|
---
|
||||||
|
|
||||||
除了使用 `select` 语句查询 `INFORMATION_SCHEMA` 数据库中的表获得系统中的各种元数据、系统信息和状态之外,也可以用 `SHOW` 命令来实现同样的目的。
|
The `SHOW` command can be used to get brief system information. For details about metadata, system information, and status, use `SELECT` to query the tables in the `INFORMATION_SCHEMA` database.
|
||||||
|
|
||||||
## SHOW ACCOUNTS
|
## SHOW ACCOUNTS
|
||||||
|
|
||||||
|
@ -11,9 +11,9 @@ title: 使用 SHOW 命令查看系统元数据
|
||||||
SHOW ACCOUNTS;
|
SHOW ACCOUNTS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中所有租户的信息。
|
Shows information about tenants on the system.
|
||||||
|
|
||||||
注:企业版独有
|
Note: TDengine Enterprise Edition only.
|
||||||
|
|
||||||
## SHOW APPS
|
## SHOW APPS
|
||||||
|
|
||||||
|
@ -21,7 +21,7 @@ SHOW ACCOUNTS;
|
||||||
SHOW APPS;
|
SHOW APPS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示接入集群的应用(客户端)信息。
|
Shows all clients (such as applications) that connect to the cluster.
|
||||||
|
|
||||||
## SHOW BNODES
|
## SHOW BNODES
|
||||||
|
|
||||||
|
@ -29,7 +29,7 @@ SHOW APPS;
|
||||||
SHOW BNODES;
|
SHOW BNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中存在的 BNODE (backup node, 即备份节点)的信息。
|
Shows information about backup nodes (bnodes) in the system.
|
||||||
|
|
||||||
## SHOW CLUSTER
|
## SHOW CLUSTER
|
||||||
|
|
||||||
|
@ -37,7 +37,7 @@ SHOW BNODES;
|
||||||
SHOW CLUSTER;
|
SHOW CLUSTER;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前集群的信息
|
Shows information about the current cluster.
|
||||||
|
|
||||||
## SHOW CONNECTIONS
|
## SHOW CONNECTIONS
|
||||||
|
|
||||||
|
@ -45,7 +45,7 @@ SHOW CLUSTER;
|
||||||
SHOW CONNECTIONS;
|
SHOW CONNECTIONS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中存在的连接的信息。
|
Shows information about connections to the system.
|
||||||
|
|
||||||
## SHOW CONSUMERS
|
## SHOW CONSUMERS
|
||||||
|
|
||||||
|
@ -53,7 +53,7 @@ SHOW CONNECTIONS;
|
||||||
SHOW CONSUMERS;
|
SHOW CONSUMERS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前数据库下所有活跃的消费者的信息。
|
Shows information about all active consumers in the system.
|
||||||
|
|
||||||
## SHOW CREATE DATABASE
|
## SHOW CREATE DATABASE
|
||||||
|
|
||||||
|
@ -61,7 +61,7 @@ SHOW CONSUMERS;
|
||||||
SHOW CREATE DATABASE db_name;
|
SHOW CREATE DATABASE db_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示 db_name 指定的数据库的创建语句。
|
Shows the SQL statement used to create the specified database.
|
||||||
|
|
||||||
## SHOW CREATE STABLE
|
## SHOW CREATE STABLE
|
||||||
|
|
||||||
|
@ -69,7 +69,7 @@ SHOW CREATE DATABASE db_name;
|
||||||
SHOW CREATE STABLE [db_name.]stb_name;
|
SHOW CREATE STABLE [db_name.]stb_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示 tb_name 指定的超级表的创建语句
|
Shows the SQL statement used to create the specified supertable.
|
||||||
|
|
||||||
## SHOW CREATE TABLE
|
## SHOW CREATE TABLE
|
||||||
|
|
||||||
|
@ -77,7 +77,7 @@ SHOW CREATE STABLE [db_name.]stb_name;
|
||||||
SHOW CREATE TABLE [db_name.]tb_name
|
SHOW CREATE TABLE [db_name.]tb_name
|
||||||
```
|
```
|
||||||
|
|
||||||
显示 tb_name 指定的表的创建语句。支持普通表、超级表和子表。
|
Shows the SQL statement used to create the specified table. This statement can be used on supertables, standard tables, and subtables.
|
||||||
|
|
||||||
## SHOW DATABASES
|
## SHOW DATABASES
|
||||||
|
|
||||||
|
@ -85,7 +85,7 @@ SHOW CREATE TABLE [db_name.]tb_name
|
||||||
SHOW DATABASES;
|
SHOW DATABASES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示用户定义的所有数据库。
|
Shows all user-created databases.
|
||||||
|
|
||||||
## SHOW DNODES
|
## SHOW DNODES
|
||||||
|
|
||||||
|
@ -93,7 +93,7 @@ SHOW DATABASES;
|
||||||
SHOW DNODES;
|
SHOW DNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中 DNODE 的信息。
|
Shows all dnodes in the system.
|
||||||
|
|
||||||
## SHOW FUNCTIONS
|
## SHOW FUNCTIONS
|
||||||
|
|
||||||
|
@ -101,7 +101,7 @@ SHOW DNODES;
|
||||||
SHOW FUNCTIONS;
|
SHOW FUNCTIONS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示用户定义的自定义函数。
|
Shows all user-defined functions in the system.
|
||||||
|
|
||||||
## SHOW LICENSE
|
## SHOW LICENSE
|
||||||
|
|
||||||
|
@ -110,9 +110,9 @@ SHOW LICENSE;
|
||||||
SHOW GRANTS;
|
SHOW GRANTS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示企业版许可授权的信息。
|
Shows information about the TDengine Enterprise Edition license.
|
||||||
|
|
||||||
注:企业版独有
|
Note: TDengine Enterprise Edition only.
|
||||||
|
|
||||||
## SHOW INDEXES
|
## SHOW INDEXES
|
||||||
|
|
||||||
|
@ -120,7 +120,7 @@ SHOW GRANTS;
|
||||||
SHOW INDEXES FROM tbl_name [FROM db_name];
|
SHOW INDEXES FROM tbl_name [FROM db_name];
|
||||||
```
|
```
|
||||||
|
|
||||||
显示已创建的索引。
|
Shows indices that have been created.
|
||||||
|
|
||||||
## SHOW LOCAL VARIABLES
|
## SHOW LOCAL VARIABLES
|
||||||
|
|
||||||
|
@ -128,7 +128,7 @@ SHOW INDEXES FROM tbl_name [FROM db_name];
|
||||||
SHOW LOCAL VARIABLES;
|
SHOW LOCAL VARIABLES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前客户端配置参数的运行值。
|
Shows the working configuration of the client.
|
||||||
|
|
||||||
## SHOW MNODES
|
## SHOW MNODES
|
||||||
|
|
||||||
|
@ -136,7 +136,7 @@ SHOW LOCAL VARIABLES;
|
||||||
SHOW MNODES;
|
SHOW MNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中 MNODE 的信息。
|
Shows information about mnodes in the system.
|
||||||
|
|
||||||
## SHOW MODULES
|
## SHOW MODULES
|
||||||
|
|
||||||
|
@ -144,7 +144,7 @@ SHOW MNODES;
|
||||||
SHOW MODULES;
|
SHOW MODULES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中所安装的组件的信息。
|
Shows information about modules installed in the system.
|
||||||
|
|
||||||
## SHOW QNODES
|
## SHOW QNODES
|
||||||
|
|
||||||
|
@ -152,7 +152,7 @@ SHOW MODULES;
|
||||||
SHOW QNODES;
|
SHOW QNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中 QNODE (查询节点)的信息。
|
Shows information about qnodes in the system.
|
||||||
|
|
||||||
## SHOW SCORES
|
## SHOW SCORES
|
||||||
|
|
||||||
|
@ -160,9 +160,9 @@ SHOW QNODES;
|
||||||
SHOW SCORES;
|
SHOW SCORES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示系统被许可授权的容量的信息。
|
Shows information about the storage space allowed by the license.
|
||||||
|
|
||||||
注:企业版独有
|
Note: TDengine Enterprise Edition only.
|
||||||
|
|
||||||
## SHOW SNODES
|
## SHOW SNODES
|
||||||
|
|
||||||
|
@ -170,7 +170,7 @@ SHOW SCORES;
|
||||||
SHOW SNODES;
|
SHOW SNODES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中 SNODE (流计算节点)的信息。
|
Shows information about stream processing nodes (snodes) in the system.
|
||||||
|
|
||||||
## SHOW STABLES
|
## SHOW STABLES
|
||||||
|
|
||||||
|
@ -178,7 +178,7 @@ SHOW SNODES;
|
||||||
SHOW [db_name.]STABLES [LIKE 'pattern'];
|
SHOW [db_name.]STABLES [LIKE 'pattern'];
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前数据库下的所有超级表的信息。可以使用 LIKE 对表名进行模糊匹配。
|
Shows all supertables in the current database. You can use LIKE for fuzzy matching.
|
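For example, the following statements (using a hypothetical supertable naming scheme) list all supertables and then only those whose names start with `meter`; `%` matches any number of characters in the pattern:

```sql
SHOW STABLES;
SHOW STABLES LIKE 'meter%';
```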
||||||
|
|
||||||
## SHOW STREAMS
|
## SHOW STREAMS
|
||||||
|
|
||||||
|
@ -186,7 +186,7 @@ SHOW [db_name.]STABLES [LIKE 'pattern'];
|
||||||
SHOW STREAMS;
|
SHOW STREAMS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统内所有流计算的信息。
|
Shows information about streams in the system.
|
||||||
|
|
||||||
## SHOW SUBSCRIPTIONS
|
## SHOW SUBSCRIPTIONS
|
||||||
|
|
||||||
|
@ -194,7 +194,7 @@ SHOW STREAMS;
|
||||||
SHOW SUBSCRIPTIONS;
|
SHOW SUBSCRIPTIONS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前数据库下的所有的订阅关系
|
Shows all subscriptions in the system.
|
||||||
|
|
||||||
## SHOW TABLES
|
## SHOW TABLES
|
||||||
|
|
||||||
|
@ -202,7 +202,7 @@ SHOW SUBSCRIPTIONS;
|
||||||
SHOW [db_name.]TABLES [LIKE 'pattern'];
|
SHOW [db_name.]TABLES [LIKE 'pattern'];
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前数据库下的所有普通表和子表的信息。可以使用 LIKE 对表名进行模糊匹配。
|
Shows all standard tables and subtables in the current database. You can use LIKE for fuzzy matching.
|
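Similarly, a sketch using a hypothetical table naming scheme; `_` matches exactly one character, so the pattern below matches names such as `d1001` or `d1002`:

```sql
SHOW TABLES LIKE 'd100_';
```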
||||||
|
|
||||||
## SHOW TABLE DISTRIBUTED
|
## SHOW TABLE DISTRIBUTED
|
||||||
|
|
||||||
|
@ -210,7 +210,7 @@ SHOW [db_name.]TABLES [LIKE 'pattern'];
|
||||||
SHOW TABLE DISTRIBUTED table_name;
|
SHOW TABLE DISTRIBUTED table_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示表的数据分布信息。
|
Shows how table data is distributed.
|
||||||
|
|
||||||
## SHOW TAGS
|
## SHOW TAGS
|
||||||
|
|
||||||
|
@ -218,7 +218,7 @@ SHOW TABLE DISTRIBUTED table_name;
|
||||||
SHOW TAGS FROM child_table_name [FROM db_name];
|
SHOW TAGS FROM child_table_name [FROM db_name];
|
||||||
```
|
```
|
||||||
|
|
||||||
显示子表的标签信息。
|
Shows all tag information in a subtable.
|
||||||
|
|
||||||
## SHOW TOPICS
|
## SHOW TOPICS
|
||||||
|
|
||||||
|
@ -226,7 +226,7 @@ SHOW TAGS FROM child_table_name [FROM db_name];
|
||||||
SHOW TOPICS;
|
SHOW TOPICS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前数据库下的所有主题的信息。
|
Shows all topics in the current database.
|
||||||
|
|
||||||
## SHOW TRANSACTIONS
|
## SHOW TRANSACTIONS
|
||||||
|
|
||||||
|
@ -234,7 +234,7 @@ SHOW TOPICS;
|
||||||
SHOW TRANSACTIONS;
|
SHOW TRANSACTIONS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中正在执行的事务的信息
|
Shows all running transactions in the system.
|
||||||
|
|
||||||
## SHOW USERS
|
## SHOW USERS
|
||||||
|
|
||||||
|
@ -242,7 +242,7 @@ SHOW TRANSACTIONS;
|
||||||
SHOW USERS;
|
SHOW USERS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中所有用户的信息。包括用户自定义的用户和系统默认用户。
|
Shows information about users on the system. This includes user-created users and system-defined users.
|
||||||
|
|
||||||
## SHOW VARIABLES
|
## SHOW VARIABLES
|
||||||
|
|
||||||
|
@ -251,7 +251,7 @@ SHOW VARIABLES;
|
||||||
SHOW DNODE dnode_id VARIABLES;
|
SHOW DNODE dnode_id VARIABLES;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中各节点需要相同的配置参数的运行值,也可以指定 DNODE 来查看其的配置参数。
|
Shows the working configuration of the parameters that must be the same on each node. You can also specify a dnode to show the working configuration for that node.
|
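For example, assuming the cluster contains a dnode with ID 1, the following statements show the shared configuration and then the configuration of that specific dnode:

```sql
SHOW VARIABLES;
SHOW DNODE 1 VARIABLES;
```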
||||||
|
|
||||||
## SHOW VGROUPS
|
## SHOW VGROUPS
|
||||||
|
|
||||||
|
@ -259,7 +259,7 @@ SHOW DNODE dnode_id VARIABLES;
|
||||||
SHOW [db_name.]VGROUPS;
|
SHOW [db_name.]VGROUPS;
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中所有 VGROUP 或某个 db 的 VGROUPS 的信息。
|
Shows information about all vgroups in the system or about the vgroups for a specified database.
|
||||||
|
|
||||||
## SHOW VNODES
|
## SHOW VNODES
|
||||||
|
|
||||||
|
@ -267,4 +267,4 @@ SHOW [db_name.]VGROUPS;
|
||||||
SHOW VNODES [dnode_name];
|
SHOW VNODES [dnode_name];
|
||||||
```
|
```
|
||||||
|
|
||||||
显示当前系统中所有 VNODE 或某个 DNODE 的 VNODE 的信息。
|
Shows information about all vnodes in the system or about the vnodes for a specified dnode.
|
||||||
|
|
|
@ -1,29 +1,30 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 权限管理
|
sidebar_label: Access Control
|
||||||
title: 权限管理
|
title: User and Access Control
|
||||||
|
description: Manage users and their permissions
|
||||||
---
|
---
|
||||||
|
|
||||||
本节讲述如何在 TDengine 中进行权限管理的相关操作。
|
This document describes how to manage permissions in TDengine.
|
||||||
|
|
||||||
## 创建用户
|
## Create a User
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE USER use_name PASS password;
|
CREATE USER user_name PASS 'password';
|
||||||
```
|
```
|
||||||
|
|
||||||
创建用户。
|
This statement creates a user account.
|
||||||
|
|
||||||
use_name最长为23字节。
|
The maximum length of user_name is 23 bytes.
|
||||||
|
|
||||||
password最长为128字节,合法字符包括"a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/",不可以出现单双引号、撇号、反斜杠和空格,且不可以为空。
|
The maximum length of password is 128 bytes. The password can include letters, digits, and special characters excluding single quotation marks, double quotation marks, backticks, backslashes, and spaces. The password cannot be empty.
|
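A minimal sketch, using a hypothetical user name and a password that satisfies the rules above:

```sql
CREATE USER test_user PASS 'Ab1#xyz';
```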
||||||
|
|
||||||
## 删除用户
|
## Delete a User
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP USER user_name;
|
DROP USER user_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
## 修改用户信息
|
## Modify User Information
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
ALTER USER user_name alter_user_clause
|
ALTER USER user_name alter_user_clause
|
||||||
|
@ -35,12 +36,12 @@ alter_user_clause: {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
- PASS:修改用户密码。
|
- PASS: Modify the user password.
|
||||||
- ENABLE:修改用户是否启用。1表示启用此用户,0表示禁用此用户。
|
- ENABLE: Specify whether the user is enabled or disabled. 1 indicates enabled and 0 indicates disabled.
|
||||||
- SYSINFO:修改用户是否可查看系统信息。1表示可以查看系统信息,0表示不可以查看系统信息。
|
- SYSINFO: Specify whether the user can query system information. 1 indicates that the user can query system information and 0 indicates that the user cannot query system information. Examples of these clauses are shown after this list.
|
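The following sketch (the user name is hypothetical) changes the password, disables the account, and prevents it from querying system information:

```sql
ALTER USER test_user PASS 'Ab1#new';
ALTER USER test_user ENABLE 0;
ALTER USER test_user SYSINFO 0;
```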
||||||
|
|
||||||
|
|
||||||
## 授权
|
## Grant Permissions
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
GRANT privileges ON priv_level TO user_name
|
GRANT privileges ON priv_level TO user_name
|
||||||
|
@ -61,15 +62,15 @@ priv_level : {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
对用户授权。
|
Grant permissions to a user.
|
||||||
|
|
||||||
授权级别支持到DATABASE,权限有READ和WRITE两种。
|
Permissions are granted on the database level. You can grant read or write permissions.
|
||||||
|
|
||||||
TDengine 有超级用户和普通用户两类用户。超级用户缺省创建为root,拥有所有权限。使用超级用户创建出来的用户为普通用户。在未授权的情况下,普通用户可以创建DATABASE,并拥有自己创建的DATABASE的所有权限,包括删除数据库、修改数据库、查询时序数据和写入时序数据。超级用户可以给普通用户授予其他DATABASE的读写权限,使其可以在此DATABASE上读写数据,但不能对其进行删除和修改数据库的操作。
|
TDengine has superusers and standard users. The default superuser name is root; this account has all permissions. You can use the superuser account to create standard users. Even without explicit grants, standard users can create databases and have full permissions on the databases that they create, including deleting and modifying those databases and querying and writing their time-series data. A superuser can grant a standard user read and write permissions on other databases, allowing the user to read and write data in those databases, but not to delete or modify the databases themselves.
|
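As an illustration (the database and user names are hypothetical), a superuser could grant read and write access on a single database as follows:

```sql
GRANT READ ON db1.* TO test_user;
GRANT WRITE ON db1.* TO test_user;
```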
||||||
|
|
||||||
对于非DATABASE的对象,如USER、DNODE、UDF、QNODE等,普通用户只有读权限(一般为SHOW命令),不能创建和修改。
|
For non-database objects such as users, dnodes, and user-defined functions, standard users have read permissions only, generally by means of the SHOW statement. Standard users cannot create or modify these objects.
|
||||||
|
|
||||||
## 撤销授权
|
## Revoke Permissions
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
REVOKE privileges ON priv_level FROM user_name
|
REVOKE privileges ON priv_level FROM user_name
|
||||||
|
@ -91,4 +92,4 @@ priv_level : {
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
收回对用户的授权。
|
Revoke permissions from a user.
|
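A sketch that revokes the write permission granted in the earlier example (names are hypothetical):

```sql
REVOKE WRITE ON db1.* FROM test_user;
```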
||||||
|
|
|
@ -1,28 +1,68 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 自定义函数
|
sidebar_label: User-Defined Functions
|
||||||
title: 用户自定义函数
|
title: User-Defined Functions (UDF)
|
||||||
---
|
---
|
||||||
|
|
||||||
除了 TDengine 的内置函数以外,用户还可以编写自己的函数逻辑并加入TDengine系统中。
|
You can create user-defined functions and import them into TDengine.
|
||||||
|
## Create UDF
|
||||||
|
|
||||||
## 创建函数
|
You can load a UDF into TDengine by executing the SQL statements below on a host where the compiled UDF library (.so file) resides. This operation cannot be performed through the REST interface or the web console. Once created, these UDFs can be used in SQL statements by any client of the current TDengine cluster. UDFs are stored on the management nodes of TDengine and remain available after TDengine is restarted.
|
||||||
|
|
||||||
|
When creating a UDF, you must specify whether it is a scalar function or an aggregate function. If the wrong type is specified, SQL statements that use the function fail with errors. The input data type and output data type must be consistent with the UDF definition.
|
||||||
|
|
||||||
|
- Create Scalar Function
|
||||||
```sql
|
```sql
|
||||||
CREATE [AGGREGATE] FUNCTION func_name AS library_path OUTPUTTYPE type_name [BUFSIZE value]
|
CREATE FUNCTION function_name AS library_path OUTPUTTYPE output_type;
|
||||||
```
|
```
|
||||||
|
|
||||||
语法说明:
|
- function_name: The scalar function name to be used in SQL statements. It must be consistent with the UDF name and is also the name of the compiled shared object library (.so file).
|
||||||
|
- library_path: The absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
|
||||||
|
- output_type: The data type of the results of the UDF.
|
||||||
|
|
||||||
AGGREGATE:标识此函数是标量函数还是聚集函数。
|
For example, the following SQL statement can be used to create a UDF from `libbitand.so`.
|
||||||
func_name:函数名,必须与函数实现中udfNormalFunc的实际名称一致。
|
|
||||||
library_path:包含UDF函数实现的动态链接库的绝对路径,是在客户端侧主机上的绝对路径。
|
|
||||||
OUTPUTTYPE:标识此函数的返回类型。
|
|
||||||
BUFSIZE:中间结果的缓冲区大小,单位是字节。不设置则默认为0。最大不可超过512字节。
|
|
||||||
|
|
||||||
关于如何开发自定义函数,请参考 [UDF使用说明](../../develop/udf)。
|
|
||||||
|
|
||||||
## 删除自定义函数
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP FUNCTION func_name
|
CREATE FUNCTION bit_and AS "/home/taos/udf_example/libbitand.so" OUTPUTTYPE INT;
|
||||||
```
|
```
|
||||||
|
|
||||||
|
- Create Aggregate Function
|
||||||
|
```sql
|
||||||
|
CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ BUFSIZE buffer_size ];
|
||||||
|
```
|
||||||
|
|
||||||
|
- function_name: The aggregate function name to be used in SQL statement which must be consistent with the udfNormalFunc name and is also the name of the compiled DLL (.so file).
|
||||||
|
- library_path: The absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes.
|
||||||
|
- output_type: The output data type, the value is the literal string of the supported TDengine data type.
|
||||||
|
- buffer_size: The size of the intermediate buffer in bytes. This parameter is optional.
|
||||||
|
|
||||||
|
For example, the following SQL statement can be used to create a UDF from `libl2norm.so`.
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 8;
|
||||||
|
```
|
||||||
|
For more information about user-defined functions, see [User-Defined Functions](../../develop/udf).
|
||||||
|
|
||||||
|
## Manage UDF
|
||||||
|
|
||||||
|
- The following statement deletes the specified user-defined function.
|
||||||
|
```
|
||||||
|
DROP FUNCTION function_name;
|
||||||
|
```
|
||||||
|
|
||||||
|
- function_name: The value of function_name in the CREATE statement used to import the UDF, for example `bit_and` or `l2norm`.
|
||||||
|
```sql
|
||||||
|
DROP FUNCTION bit_and;
|
||||||
|
```
|
||||||
|
- Show Available UDF
|
||||||
|
```sql
|
||||||
|
SHOW FUNCTIONS;
|
||||||
|
```
|
||||||
|
|
||||||
|
## Call UDF
|
||||||
|
|
||||||
|
The function name specified when creating UDF can be used directly in SQL statements, just like builtin functions. For example:
|
||||||
|
```sql
|
||||||
|
SELECT X(c1,c2) FROM table/stable;
|
||||||
|
```
|
||||||
|
|
||||||
|
The above SQL statement invokes function X for columns c1 and c2. You can use query keywords like WHERE with user-defined functions.
|
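As a sketch, assuming the `bit_and` scalar UDF created earlier and a hypothetical table `t1` with a timestamp column `ts`, a filtered query might look like this:

```sql
SELECT bit_and(c1, c2) FROM t1 WHERE ts > now - 1h;
```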
||||||
|
|
|
@ -1,11 +1,11 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 索引
|
sidebar_label: Index
|
||||||
title: 使用索引
|
title: Using Indices
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 FULLTEXT 索引。
|
TDengine supports SMA and FULLTEXT indexing.
|
||||||
|
|
||||||
## 创建索引
|
## Create an Index
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
|
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
|
||||||
|
@ -19,29 +19,29 @@ functions:
|
||||||
function [, function] ...
|
function [, function] ...
|
||||||
```
|
```
|
||||||
|
|
||||||
### SMA 索引
|
### SMA Indexing
|
||||||
|
|
||||||
对指定列按 INTERVAL 子句定义的时间窗口创建进行预聚合计算,预聚合计算类型由 functions_string 指定。SMA 索引能提升指定时间段的聚合查询的性能。目前,限制一个超级表只能创建一个 SMA INDEX。
|
Performs pre-aggregation on the specified columns over the time window defined by the INTERVAL clause. The aggregate functions to precompute are specified in functions_string. SMA indexing improves aggregate query performance for the specified time period. Each supertable can contain only one SMA index. An example appears after the parameter list below.
|
||||||
|
|
||||||
- 支持的函数包括 MAX、MIN 和 SUM。
|
- The max, min, and sum functions are supported.
|
||||||
- WATERMARK: 最小单位毫秒,取值范围 [0ms, 900000ms],默认值为 5 秒,只可用于超级表。
|
- WATERMARK: Enter a value between 0ms and 900000ms. The most precise unit supported is milliseconds. The default value is 5 seconds. This option can be used only on supertables.
|
||||||
- MAX_DELAY: 最小单位毫秒,取值范围 [1ms, 900000ms],默认值为 interval 的值(但不能超过最大值),只可用于超级表。注:不建议 MAX_DELAY 设置太小,否则会过于频繁的推送结果,影响存储和查询性能,如无特殊需求,取默认值即可。
|
- MAX_DELAY: Enter a value between 1ms and 900000ms. The most precise unit supported is milliseconds. The default value is the value of interval provided that it does not exceed 900000ms. This option can be used only on supertables. Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance.
|
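A sketch of creating an SMA index, assuming a hypothetical supertable `meters` with columns `current` and `voltage`:

```sql
CREATE SMA INDEX sma_idx_meters ON meters
  FUNCTION(MAX(current), MIN(voltage), SUM(current))
  INTERVAL(1h);
```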
||||||
|
|
||||||
### FULLTEXT 索引
|
### FULLTEXT Indexing
|
||||||
|
|
||||||
对指定列建立文本索引,可以提升含有文本过滤的查询的性能。FULLTEXT 索引不支持 index_option 语法。现阶段只支持对 JSON 类型的标签列创建 FULLTEXT 索引。不支持多列联合索引,但可以为每个列分布创建 FULLTEXT 索引。
|
Creates a text index for the specified column. FULLTEXT indexing improves performance for queries with text filtering. The index_option syntax is not supported for FULLTEXT indexing. FULLTEXT indexing is supported for JSON tag columns only. Multiple columns cannot be indexed together. However, separate indices can be created for each column.
|
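A sketch of creating a FULLTEXT index, assuming a hypothetical supertable `sensors` with a JSON tag column `info`:

```sql
CREATE FULLTEXT INDEX idx_info ON sensors (info);
```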
||||||
|
|
||||||
## 删除索引
|
## Delete an Index
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
DROP INDEX index_name;
|
DROP INDEX index_name;
|
||||||
```
|
```
|
||||||
|
|
||||||
## 查看索引
|
## View Indices
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SHOW INDEXES FROM tbl_name [FROM db_name];
|
SHOW INDEXES FROM tbl_name [FROM db_name];
|
||||||
```
|
```
|
||||||
|
|
||||||
显示在所指定的数据库或表上已创建的索引。
|
Shows indices that have been created for the specified database or table.
|
||||||
|
|
|
@ -1,38 +1,38 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 异常恢复
|
sidebar_label: Error Recovery
|
||||||
title: 异常恢复
|
title: Error Recovery
|
||||||
---
|
---
|
||||||
|
|
||||||
在一个复杂的应用场景中,连接和查询任务等有可能进入一种错误状态或者耗时过长迟迟无法结束,此时需要有能够终止这些连接或任务的方法。
|
In a complex environment, connections and query tasks may encounter errors or fail to return in a reasonable time. If this occurs, you can terminate the connection or task.
|
||||||
|
|
||||||
## 终止连接
|
## Terminate a Connection
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
KILL CONNECTION conn_id;
|
KILL CONNECTION conn_id;
|
||||||
```
|
```
|
||||||
|
|
||||||
conn_id 可以通过 `SHOW CONNECTIONS` 获取。
|
You can use the SHOW CONNECTIONS statement to find the conn_id.
|
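A minimal workflow sketch: list the current connections, then terminate one by its ID (the ID value below is hypothetical):

```sql
SHOW CONNECTIONS;
KILL CONNECTION 1;
```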
||||||
|
|
||||||
## 终止查询
|
## Terminate a Query
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
KILL QUERY query_id;
|
KILL QUERY query_id;
|
||||||
```
|
```
|
||||||
|
|
||||||
query_id 可以通过 `SHOW QUERIES` 获取。
|
You can use the SHOW QUERIES statement to find the query_id.
|
||||||
|
|
||||||
## 终止事务
|
## Terminate a Transaction
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
KILL TRANSACTION trans_id
|
KILL TRANSACTION trans_id
|
||||||
```
|
```
|
||||||
|
|
||||||
trans_id 可以通过 `SHOW TRANSACTIONS` 获取。
|
You can use the SHOW TRANSACTIONS statement to find the trans_id.
|
||||||
|
|
||||||
## 重置客户端缓存
|
## Reset Client Cache
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
RESET QUERY CACHE;
|
RESET QUERY CACHE;
|
||||||
```
|
```
|
||||||
|
|
||||||
如果在多客户端情况下出现元数据不同步的情况,可以用这条命令强制清空客户端缓存,随后客户端会从服务端拉取最新的元数据。
|
If metadata becomes desynchronized among multiple clients, you can use this command to clear the client-side cache. Clients then obtain the latest metadata from the server.
|
||||||
|
|
|
@ -0,0 +1,95 @@
|
||||||
|
---
|
||||||
|
sidebar_label: Changes in TDengine 3.0
|
||||||
|
title: Changes in TDengine 3.0
|
||||||
|
description: "This document explains how TDengine SQL has changed in version 3.0."
|
||||||
|
---
|
||||||
|
|
||||||
|
## Basic SQL Elements
|
||||||
|
|
||||||
|
| # | **Element** | **<div style={{width: 60}}>Change</div>** | **Description** |
|
||||||
|
| - | :------- | :-------- | :------- |
|
||||||
|
| 1 | VARCHAR | Added | Alias of BINARY.
|
||||||
|
| 2 | TIMESTAMP literal | Added | TIMESTAMP 'timestamp format' syntax now supported.
|
||||||
|
| 3 | _ROWTS pseudocolumn | Added | Indicates the primary key. Alias of _C0.
|
||||||
|
| 4 | INFORMATION_SCHEMA | Added | Database for system metadata containing all schema definitions
|
||||||
|
| 5 | PERFORMANCE_SCHEMA | Added | Database for system performance information.
|
||||||
|
| 6 | Connection queries | Deprecated | Connection queries are no longer supported. The syntax and interfaces are deprecated.
|
||||||
|
| 7 | Mixed operations | Enhanced | Mixing scalar and vector operations in queries has been enhanced and is supported in all SELECT clauses.
|
||||||
|
| 8 | Tag operations | Added | Tag columns can be used in queries and clauses like data columns.
|
||||||
|
| 9 | Timeline clauses and time functions in supertables | Enhanced | When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
|
||||||
|
## SQL Syntax
|
||||||
|
|
||||||
|
The following table describes the changes to SQL statements in TDengine 3.0.
|
||||||
|
|
||||||
|
| # | **Statement** | **<div style={{width: 60}}>Change</div>** | **Description** |
|
||||||
|
| - | :------- | :-------- | :------- |
|
||||||
|
| 1 | ALTER ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
|
||||||
|
| 2 | ALTER ALL DNODES | Added | Modifies the configuration of all dnodes.
|
||||||
|
| 3 | ALTER DATABASE | Modified | Deprecated<ul><li>QUORUM: Specified the required number of confirmations. STRICT is now used to specify strong or weak consistency. The STRICT parameter cannot be modified. </li><li>BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>COMP: Cannot be modified. <br/>Added</li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. <br/>Modified</li><li>REPLICA: Cannot be modified. </li><li>KEEP: Now supports units. </li></ul>
|
||||||
|
| 4 | ALTER STABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a supertable. </li></ul>
|
||||||
|
| 5 | ALTER TABLE | Modified | Deprecated<ul><li>CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG. <br/>Added</li><li>RENAME TAG: Replaces CHANGE TAG. </li><li>COMMENT: Specifies comments for a standard table. </li><li>TTL: Specifies the time-to-live for a standard table. </li></ul>
|
||||||
|
| 6 | ALTER USER | Modified | Deprecated<ul><li>PRIVILEGE: Specified user permissions. Replaced by GRANT and REVOKE. <br/>Added</li><li>ENABLE: Enables or disables a user. </li><li>SYSINFO: Specifies whether a user can query system information. </li></ul>
|
||||||
|
| 7 | COMPACT VNODES | Not supported | Compacted the data on a vnode. Not supported.
|
||||||
|
| 8 | CREATE ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
|
||||||
|
| 9 | CREATE DATABASE | Modified | Deprecated<ul><li>BLOCKS: Specified the number of blocks for each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHE: Specified the size of the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>DAYS: The length of time to store in a single file. Replaced by DURATION. </li><li>FSYNC: Specified the fsync interval when WAL was set to 2. Replaced by WAL_FSYNC_PERIOD. </li><li>QUORUM: Specified the number of confirmations required. STRICT is now used to specify strong or weak consistency. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>WAL: Specified the WAL level. Replaced by WAL_LEVEL. <br/>Added</li><li>BUFFER: Specifies the size of the write cache pool for each vnode. </li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>DURATION: Replaces DAYS. Now supports units. </li><li>PAGES: Specifies the number of pages in the metadata storage engine cache on each vnode. </li><li>PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. </li><li>RETENTIONS: Specifies the aggregation interval and retention period </li><li>STRICT: Specifies whether strong data consistency is enabled. </li><li>SINGLE_STABLE: Specifies whether a database can contain multiple supertables. </li><li>VGROUPS: Specifies the initial number of vgroups when a database is created. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. </li><li>WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_ROLL_PERIOD: Specifies the WAL rotation period. </li><li>WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. <br/>Modified</li><li>KEEP: Now supports units. </li></ul>
|
||||||
|
| 10 | CREATE DNODE | Modified | Now supports specifying hostname and port separately<ul><li>CREATE DNODE dnode_host_name PORT port_val</li></ul>
|
||||||
|
| 11 | CREATE INDEX | Added | Creates an SMA index.
|
||||||
|
| 12 | CREATE MNODE | Added | Creates an mnode.
|
||||||
|
| 13 | CREATE QNODE | Added | Creates a qnode.
|
||||||
|
| 14 | CREATE STABLE | Modified | New parameter added<li>COMMENT: Specifies comments for the supertable. </li>
|
||||||
|
| 15 | CREATE STREAM | Added | Creates a stream.
|
||||||
|
| 16 | CREATE TABLE | Modified | New parameters added<ul><li>COMMENT: Specifies comments for the table </li><li>WATERMARK: Specifies the window closing time. </li><li>MAX_DELAY: Specifies the maximum delay for pushing stream processing results. </li><li>ROLLUP: Specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. </li><li>SMA: Provides user-defined precomputation of aggregates based on data blocks. </li><li>TTL: Specifies the time-to-live for a standard table. </li></ul>
|
||||||
|
| 17 | CREATE TOPIC | Added | Creates a topic.
|
||||||
|
| 18 | DROP ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
|
||||||
|
| 19 | DROP CONSUMER GROUP | Added | Deletes a consumer group.
|
||||||
|
| 20 | DROP INDEX | Added | Deletes an index.
|
||||||
|
| 21 | DROP MNODE | Added | Deletes an mnode.
|
||||||
|
| 22 | DROP QNODE | Added | Deletes a qnode.
|
||||||
|
| 23 | DROP STREAM | Added | Deletes a stream.
|
||||||
|
| 24 | DROP TABLE | Modified | Added batch deletion syntax.
|
||||||
|
| 25 | DROP TOPIC | Added | Deletes a topic.
|
||||||
|
| 26 | EXPLAIN | Added | Shows the execution plan of a query statement.
|
||||||
|
| 27 | GRANT | Added | Grants permissions to a user.
|
||||||
|
| 28 | KILL TRANSACTION | Added | Terminates an mnode transaction.
|
||||||
|
| 29 | KILL STREAM | Deprecated | Terminated a continuous query. The continuous query feature has been replaced with the stream processing feature.
|
||||||
|
| 30 | MERGE VGROUP | Added | Merges vgroups.
|
||||||
|
| 31 | REVOKE | Added | Revokes permissions from a user.
|
||||||
|
| 32 | SELECT | Modified | <ul><li>SELECT does not use the implicit results column. Output columns must be specified in the SELECT clause. </li><li>DISTINCT support is enhanced. In previous versions, DISTINCT only worked on the tag column and could not be used with JOIN or GROUP BY. </li><li>JOIN support is enhanced. The following are now supported after JOIN: a WHERE clause with OR, operations on multiple tables, and GROUP BY on multiple tables. </li><li>Subqueries after FROM are enhanced. Levels of nesting are no longer restricted. Subqueries can be used with UNION ALL. Other syntax restrictions are eliminated. </li><li>All scalar functions can be used after WHERE. </li><li>GROUP BY is enhanced. You can group by any scalar expression or combination thereof. </li><li>SESSION can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. </li><li>STATE_WINDOW can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline. </li><li>ORDER BY is enhanced. It is no longer required to use ORDER BY and GROUP BY together. There is no longer a restriction on the number of order expressions. NULLS FIRST and NULLS LAST syntax has been added. Any expression that conforms to the ORDER BY semantics can be used. </li><li>Added PARTITION BY syntax. PARTITION BY replaces GROUP BY tags. </li></ul>
|
||||||
|
| 33 | SHOW ACCOUNTS | Deprecated | This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
|
||||||
|
| 34 | SHOW APPS | Added | Shows all clients (such as applications) that connect to the cluster.
|
||||||
|
| 35 | SHOW CONSUMERS | Added | Shows information about all active consumers in the system.
|
||||||
|
| 36 | SHOW DATABASES | Modified | Only shows database names.
|
||||||
|
| 37 | SHOW FUNCTIONS | Modified | Only shows UDF names.
|
||||||
|
| 38 | SHOW LICENCE | Added | Alias of SHOW GRANTS.
|
||||||
|
| 39 | SHOW INDEXES | Added | Shows indices that have been created.
|
||||||
|
| 40 | SHOW LOCAL VARIABLES | Added | Shows the working configuration of the client.
|
||||||
|
| 41 | SHOW MODULES | Deprecated | Shows information about modules installed in the system.
|
||||||
|
| 42 | SHOW QNODES | Added | Shows information about qnodes in the system.
|
||||||
|
| 43 | SHOW STABLES | Modified | Only shows supertable names.
|
||||||
|
| 44 | SHOW STREAMS | Modified | This statement previously showed continuous queries. The continuous query feature has been replaced with the stream processing feature. This statement now shows streams that have been created.
|
||||||
|
| 45 | SHOW SUBSCRIPTIONS | Added | Shows all subscriptions in the current database.
|
||||||
|
| 46 | SHOW TABLES | Modified | Only shows table names.
|
||||||
|
| 47 | SHOW TABLE DISTRIBUTED | Added | Shows how table data is distributed. This replaces the `SELECT _block_dist() FROM { tb_name | stb_name }` command.
|
||||||
|
| 48 | SHOW TOPICS | Added | Shows all subscribed topics in the current database.
|
||||||
|
| 49 | SHOW TRANSACTIONS | Added | Shows all running transactions in the system.
|
||||||
|
| 50 | SHOW DNODE VARIABLES | Added | Shows the configuration of the specified dnode.
|
||||||
|
| 51 | SHOW VNODES | Not supported | Shows information about vnodes in the system. Not supported.
|
||||||
|
| 52 | SPLIT VGROUP | Added | Splits a vgroup into two vgroups.
|
||||||
|
| 53 | TRIM DATABASE | Added | Deletes data that has expired and orders the remaining data in accordance with the storage configuration.
|
||||||
|
|
||||||
|
## SQL Functions
|
||||||
|
|
||||||
|
| # | **Function** | ** <div style={{width: 60}}>Change</div> ** | **Description** |
|
||||||
|
| - | :------- | :-------- | :------- |
|
||||||
|
| 1 | TWA | Added | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 2 | IRATE | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 3 | LEASTSQUARES | Enhanced | Can be used on supertables.
|
||||||
|
| 4 | ELAPSED | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 5 | DIFF | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 6 | DERIVATIVE | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 7 | CSUM | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 8 | MAVG | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 9 | SAMPLE | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 10 | STATECOUNT | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||||
|
| 11 | STATEDURATION | Enhanced | Can be used on supertables. When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
|
@ -1,22 +1,23 @@
|
||||||
---
|
---
|
||||||
title: TDengine SQL
|
title: TDengine SQL
|
||||||
description: "The syntax supported by TDengine SQL "
|
description: 'The syntax supported by TDengine SQL '
|
||||||
---
|
---
|
||||||
|
|
||||||
This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL.
|
This section explains the syntax of SQL to perform operations on databases, tables and STables, insert data, select data and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL. TDengine SQL has been enhanced in version 3.0, and the query engine has been rearchitected. For information about how TDengine SQL has changed, see [Changes in TDengine 3.0](../taos-sql/changes).
|
||||||
|
|
||||||
TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, the syntax is similar to that of standard SQL. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide a delete function for time series data and so corresponding statements are not provided in TDengine SQL.
|
TDengine SQL is the major interface for users to write data into or query from TDengine. It uses standard SQL syntax and includes extensions and optimizations for time-series data and services. The maximum length of a TDengine SQL statement is 1 MB. Note that keyword abbreviations are not supported. For example, DELETE cannot be entered as DEL.
|
||||||
|
|
||||||
Syntax Specifications used in this chapter:
|
Syntax Specifications used in this chapter:
|
||||||
|
|
||||||
- The content inside <\> needs to be input by the user, excluding <\> itself.
|
- Keywords are given in uppercase, although SQL is not case-sensitive.
|
||||||
|
- Information that you input is given in lowercase.
|
||||||
- \[ \] means optional input, excluding [] itself.
|
- \[ \] means optional input, excluding [] itself.
|
||||||
- | means one of a few options, excluding | itself.
|
- | means one of a few options, excluding | itself.
|
||||||
- … means the item prior to it can be repeated multiple times.
|
- … means the item prior to it can be repeated multiple times.
|
||||||
|
|
||||||
To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of data from electric meters. Each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
|
To better demonstrate the syntax, usage and rules of TDengine SQL, hereinafter it's assumed that there is a data set of data from electric meters. Each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
|
||||||
|
|
||||||
```sql
|
```
|
||||||
taos> DESCRIBE meters;
|
taos> DESCRIBE meters;
|
||||||
Field | Type | Length | Note |
|
Field | Type | Length | Note |
|
||||||
=================================================================================
|
=================================================================================
|
||||||
|
@ -29,3 +30,10 @@ taos> DESCRIBE meters;
|
||||||
```
|
```
|
||||||
|
|
||||||
The data set includes data collected by 4 meters; based on the TDengine data model, the corresponding table names are d1001, d1002, d1003, and d1004.
|
The data set includes data collected by 4 meters; based on the TDengine data model, the corresponding table names are d1001, d1002, d1003, and d1004.
|
||||||
|
|
||||||
|
```mdx-code-block
|
||||||
|
import DocCardList from '@theme/DocCardList';
|
||||||
|
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
|
||||||
|
|
||||||
|
<DocCardList items={useCurrentSidebarCategory().items}/>
|
||||||
|
```
|
||||||
|
|
|
@ -1,156 +1,77 @@
|
||||||
---
|
---
|
||||||
title: Install & Uninstall
|
title: Install and Uninstall
|
||||||
description: Install, Uninstall, Start, Stop and Upgrade
|
description: Install, Uninstall, Start, Stop and Upgrade
|
||||||
---
|
---
|
||||||
|
|
||||||
import Tabs from "@theme/Tabs";
|
import Tabs from "@theme/Tabs";
|
||||||
import TabItem from "@theme/TabItem";
|
import TabItem from "@theme/TabItem";
|
||||||
|
|
||||||
TDengine community version provides deb and rpm packages for users to choose from, based on their system environment. The deb package supports Debian, Ubuntu and derivative systems. The rpm package supports CentOS, RHEL, SUSE and derivative systems. Furthermore, a tar.gz package is provided for TDengine Enterprise customers.
|
This document gives more information about installing, uninstalling, and upgrading TDengine.
|
||||||
|
|
||||||
## Install
|
## Install
|
||||||
|
|
||||||
<Tabs>
|
For details about installing TDengine, refer to the [Installation Guide](../../get-started/package/).
|
||||||
<TabItem label="Install Deb" value="debinst">
|
|
||||||
|
|
||||||
1. Download deb package from official website, for example TDengine-server-2.4.0.7-Linux-x64.deb
|
|
||||||
2. In the directory where the package is located, execute the command below
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
|
|
||||||
(Reading database ... 137504 files and directories currently installed.)
|
|
||||||
Preparing to unpack TDengine-server-2.4.0.7-Linux-x64.deb ...
|
|
||||||
TDengine is removed successfully!
|
|
||||||
Unpacking tdengine (2.4.0.7) over (2.4.0.7) ...
|
|
||||||
Setting up tdengine (2.4.0.7) ...
|
|
||||||
Start to install TDengine...
|
|
||||||
|
|
||||||
System hostname is: ubuntu-1804
|
|
||||||
|
|
||||||
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
|
|
||||||
OR leave it blank to build one:
|
|
||||||
|
|
||||||
Enter your email address for priority support or enter empty to skip:
|
|
||||||
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
|
|
||||||
|
|
||||||
To configure TDengine : edit /etc/taos/taos.cfg
|
|
||||||
To start TDengine : sudo systemctl start taosd
|
|
||||||
To access TDengine : taos -h ubuntu-1804 to login into TDengine server
|
|
||||||
|
|
||||||
|
|
||||||
TDengine is installed successfully!
|
|
||||||
```
|
|
||||||
|
|
||||||
</TabItem>
|
|
||||||
|
|
||||||
<TabItem label="Install RPM" value="rpminst">
|
|
||||||
|
|
||||||
1. Download rpm package from official website, for example TDengine-server-2.4.0.7-Linux-x64.rpm;
|
|
||||||
2. In the directory where the package is located, execute the command below
|
|
||||||
|
|
||||||
```
|
|
||||||
$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
|
|
||||||
Preparing... ################################# [100%]
|
|
||||||
Updating / installing...
|
|
||||||
1:tdengine-2.4.0.7-3 ################################# [100%]
|
|
||||||
Start to install TDengine...
|
|
||||||
|
|
||||||
System hostname is: centos7
|
|
||||||
|
|
||||||
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
|
|
||||||
OR leave it blank to build one:
|
|
||||||
|
|
||||||
Enter your email address for priority support or enter empty to skip:
|
|
||||||
|
|
||||||
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
|
|
||||||
|
|
||||||
To configure TDengine : edit /etc/taos/taos.cfg
|
|
||||||
To start TDengine : sudo systemctl start taosd
|
|
||||||
To access TDengine : taos -h centos7 to login into TDengine server
|
|
||||||
|
|
||||||
|
|
||||||
TDengine is installed successfully!
|
|
||||||
```
|
|
||||||
|
|
||||||
</TabItem>
|
|
||||||
|
|
||||||
<TabItem label="Install tar.gz" value="tarinst">
|
|
||||||
|
|
||||||
1. Download the tar.gz package, for example TDengine-server-2.4.0.7-Linux-x64.tar.gz;
|
|
||||||
2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated in decompressing, i.e. "TDengine-enterprise-server-2.4.0.7/" in this example, and execute the `install.sh` script.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
|
|
||||||
TDengine-enterprise-server-2.4.0.7/
|
|
||||||
TDengine-enterprise-server-2.4.0.7/driver/
|
|
||||||
TDengine-enterprise-server-2.4.0.7/driver/vercomp.txt
|
|
||||||
TDengine-enterprise-server-2.4.0.7/driver/libtaos.so.2.4.0.7
|
|
||||||
TDengine-enterprise-server-2.4.0.7/install.sh
|
|
||||||
TDengine-enterprise-server-2.4.0.7/examples/
|
|
||||||
...
|
|
||||||
|
|
||||||
$ ll
|
|
||||||
total 43816
|
|
||||||
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
|
|
||||||
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
|
|
||||||
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-2.4.0.7/
|
|
||||||
-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
|
|
||||||
|
|
||||||
$ cd TDengine-enterprise-server-2.4.0.7/
|
|
||||||
|
|
||||||
$ ll
|
|
||||||
total 40784
|
|
||||||
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
|
|
||||||
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
|
|
||||||
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
|
|
||||||
drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
|
|
||||||
-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
|
|
||||||
-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
|
|
||||||
|
|
||||||
$ sudo ./install.sh
|
|
||||||
|
|
||||||
Start to update TDengine...
|
|
||||||
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
|
|
||||||
Nginx for TDengine is updated successfully!
|
|
||||||
|
|
||||||
To configure TDengine : edit /etc/taos/taos.cfg
|
|
||||||
To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
|
|
||||||
To start TDengine : sudo systemctl start taosd
|
|
||||||
To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
|
|
||||||
|
|
||||||
TDengine is updated successfully!
|
|
||||||
Install taoskeeper as a standalone service
|
|
||||||
taoskeeper is installed, enable it by `systemctl enable taoskeeper`
|
|
||||||
```
|
|
||||||
|
|
||||||
:::info
|
|
||||||
Users will be prompted to enter some configuration information when install.sh is executing. The interactive mode can be disabled by executing `./install.sh -e no`. `./install.sh -h` can show all parameters with detailed explanation.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
</TabItem>
|
|
||||||
</Tabs>
|
|
||||||
|
|
||||||
:::note
|
|
||||||
When installing on the first node in the cluster, at the "Enter FQDN:" prompt, nothing needs to be provided. When installing on subsequent nodes, at the "Enter FQDN:" prompt, you must enter the end point of the first dnode in the cluster if it is already up. You can also just ignore it and configure it later after installation is finished.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
## Uninstall
|
## Uninstall
|
||||||
|
|
||||||
<Tabs>
|
<Tabs>
|
||||||
|
<TabItem label="Uninstall apt-get" value="aptremove">
|
||||||
|
|
||||||
|
Apt-get package of TDengine can be uninstalled as below:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
$ sudo apt-get remove tdengine
|
||||||
|
Reading package lists... Done
|
||||||
|
Building dependency tree
|
||||||
|
Reading state information... Done
|
||||||
|
The following packages will be REMOVED:
|
||||||
|
tdengine
|
||||||
|
0 upgraded, 0 newly installed, 1 to remove and 18 not upgraded.
|
||||||
|
After this operation, 68.3 MB disk space will be freed.
|
||||||
|
Do you want to continue? [Y/n] y
|
||||||
|
(Reading database ... 135625 files and directories currently installed.)
|
||||||
|
Removing tdengine (3.0.0.0) ...
|
||||||
|
TDengine is removed successfully!
|
||||||
|
|
||||||
|
```
|
||||||
|
|
||||||
|
Apt-get package of taosTools can be uninstalled as below:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo apt remove taostools
|
||||||
|
Reading package lists... Done
|
||||||
|
Building dependency tree
|
||||||
|
Reading state information... Done
|
||||||
|
The following packages will be REMOVED:
|
||||||
|
taostools
|
||||||
|
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
|
||||||
|
After this operation, 68.3 MB disk space will be freed.
|
||||||
|
Do you want to continue? [Y/n]
|
||||||
|
(Reading database ... 147973 files and directories currently installed.)
|
||||||
|
Removing taostools (2.1.2) ...
|
||||||
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
<TabItem label="Uninstall Deb" value="debuninst">
|
<TabItem label="Uninstall Deb" value="debuninst">
|
||||||
|
|
||||||
Deb package of TDengine can be uninstalled as below:
|
Deb package of TDengine can be uninstalled as below:
|
||||||
|
|
||||||
```bash
|
```
|
||||||
$ sudo dpkg -r tdengine
|
$ sudo dpkg -r tdengine
|
||||||
(Reading database ... 137504 files and directories currently installed.)
|
(Reading database ... 137504 files and directories currently installed.)
|
||||||
Removing tdengine (2.4.0.7) ...
|
Removing tdengine (3.0.0.0) ...
|
||||||
TDengine is removed successfully!
|
TDengine is removed successfully!
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Deb package of taosTools can be uninstalled as below:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ sudo dpkg -r taostools
|
||||||
|
(Reading database ... 147973 files and directories currently installed.)
|
||||||
|
Removing taostools (2.1.2) ...
|
||||||
|
```
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
|
||||||
<TabItem label="Uninstall RPM" value="rpmuninst">
|
<TabItem label="Uninstall RPM" value="rpmuninst">
|
||||||
|
@ -162,6 +83,13 @@ $ sudo rpm -e tdengine
|
||||||
TDengine is removed successfully!
|
TDengine is removed successfully!
|
||||||
```
|
```
|
||||||
|
|
||||||
|
RPM package of taosTools can be uninstalled as below:
|
||||||
|
|
||||||
|
```
|
||||||
|
sudo rpm -e taostools
|
||||||
|
taosToole is removed successfully!
|
||||||
|
```
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
|
|
||||||
<TabItem label="Uninstall tar.gz" value="taruninst">
|
<TabItem label="Uninstall tar.gz" value="taruninst">
|
||||||
|
@ -170,106 +98,61 @@ tar.gz package of TDengine can be uninstalled as below:
|
||||||
|
|
||||||
```
|
```
|
||||||
$ rmtaos
|
$ rmtaos
|
||||||
Nginx for TDengine is running, stopping it...
|
|
||||||
TDengine is removed successfully!
|
TDengine is removed successfully!
|
||||||
|
```
|
||||||
|
|
||||||
taosKeeper is removed successfully!
|
tar.gz package of taosTools can be uninstalled as below:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ rmtaostools
|
||||||
|
Start to uninstall taos tools ...
|
||||||
|
|
||||||
|
taos tools is uninstalled successfully!
|
||||||
```
|
```
|
||||||
|
|
||||||
|
</TabItem>
|
||||||
|
<TabItem label="Windows uninstall" value="windows">
|
||||||
|
Run C:\TDengine\unins000.exe to uninstall TDengine on a Windows system.
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
:::note
|
:::info
|
||||||
|
|
||||||
- We strongly recommend not to use multiple kinds of installation packages on a single host TDengine.
|
- We strongly recommend not to use multiple kinds of TDengine installation packages on a single host. The packages may affect each other and cause errors.
|
||||||
- After deb package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed.
|
|
||||||
|
|
||||||
```bash
|
- After deb package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information.
|
||||||
|
|
||||||
|
```
|
||||||
$ sudo rm -f /var/lib/dpkg/info/tdengine*
|
$ sudo rm -f /var/lib/dpkg/info/tdengine*
|
||||||
```
|
```
|
||||||
|
|
||||||
- After rpm package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed.
|
You can then reinstall if needed.
|
||||||
|
|
||||||
```bash
|
- After rpm package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information.
|
||||||
|
|
||||||
|
```
|
||||||
$ sudo rpm -e --noscripts tdengine
|
$ sudo rpm -e --noscripts tdengine
|
||||||
```
|
```
|
||||||
|
|
||||||
|
You can then reinstall if needed.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## Installation Directory
|
## Uninstalling and Modifying Files
|
||||||
|
|
||||||
TDengine is installed at /usr/local/taos if successful.
|
|
||||||
|
|
||||||
```bash
|
|
||||||
$ cd /usr/local/taos
|
|
||||||
$ ll
|
|
||||||
$ ll
|
|
||||||
total 28
|
|
||||||
drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
|
|
||||||
drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
|
|
||||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
|
|
||||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
|
|
||||||
lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
|
|
||||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
|
|
||||||
drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
|
|
||||||
drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
|
|
||||||
lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
|
|
||||||
```
|
|
||||||
|
|
||||||
During the installation process:
|
|
||||||
|
|
||||||
- Configuration directory, data directory, and log directory are created automatically if they don't exist
|
|
||||||
- The default configuration file is located at /etc/taos/taos.cfg, which is a copy of /usr/local/taos/cfg/taos.cfg
|
|
||||||
- The default data directory is /var/lib/taos, which is a soft link to /usr/local/taos/data
|
|
||||||
- The default log directory is /var/log/taos, which is a soft link to /usr/local/taos/log
|
|
||||||
- The executables at /usr/local/taos/bin are linked to /usr/bin
|
|
||||||
- The DLL files at /usr/local/taos/driver are linked to /usr/lib
|
|
||||||
- The header files at /usr/local/taos/include are linked to /usr/include
|
|
||||||
|
|
||||||
:::note
|
|
||||||
|
|
||||||
- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution, because data can't be recovered. Please follow data integrity, security, backup or relevant SOPs before deleting any data.
|
- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution, because data can't be recovered. Please follow data integrity, security, backup or relevant SOPs before deleting any data.
|
||||||
|
|
||||||
- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
|
- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
|
||||||
|
|
||||||
## Start and Stop
|
|
||||||
|
|
||||||
Linux system services `systemd`, `systemctl` or `service` are used to start, stop and restart TDengine. The server process of TDengine is `taosd`, which is started automatically after the Linux system is started. System operators can use `systemd`, `systemctl` or `service` to start, stop or restart TDengine server.
|
|
||||||
|
|
||||||
For example, if using `systemctl` , the commands to start, stop, restart and check TDengine server are below:
|
|
||||||
|
|
||||||
- Start server:`systemctl start taosd`
|
|
||||||
|
|
||||||
- Stop server:`systemctl stop taosd`
|
|
||||||
|
|
||||||
- Restart server:`systemctl restart taosd`
|
|
||||||
|
|
||||||
- Check server status:`systemctl status taosd`
|
|
||||||
|
|
||||||
From version 2.4.0.0, a new independent component named as `taosAdapter` has been included in TDengine. `taosAdapter` should be started and stopped using `systemctl`.
|
|
||||||
|
|
||||||
If the server process is OK, the output of `systemctl status` is like below:
|
|
||||||
|
|
||||||
```
|
|
||||||
Active: active (running)
|
|
||||||
```
|
|
||||||
|
|
||||||
Otherwise, the output is as below:
|
|
||||||
|
|
||||||
```
|
|
||||||
Active: inactive (dead)
|
|
||||||
```
|
|
||||||
|
|
||||||
## Upgrade
|
## Upgrade
|
||||||
|
|
||||||
There are two aspects in upgrade operation: upgrade installation package and upgrade a running server.
|
There are two aspects in upgrade operation: upgrade installation package and upgrade a running server.
|
||||||
|
|
||||||
To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.
|
To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.
|
||||||
|
|
||||||
Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections, only if the first 3 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
|
Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections, only if the first 3 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
|
||||||
|
|
||||||
- Stop inserting data
|
- Stop inserting data
|
||||||
- Make sure all data is persisted to disk
|
- Make sure all data is persisted to disk
|
||||||
- Make some simple queries (Such as total rows in stables, tables and so on. Note down the values. Follow best practices and relevant SOPs.)
|
|
||||||
- Stop the cluster of TDengine
|
- Stop the cluster of TDengine
|
||||||
- Uninstall old version and install new version
|
- Uninstall old version and install new version
|
||||||
- Start the cluster of TDengine
|
- Start the cluster of TDengine
|
||||||
|
@ -278,7 +161,6 @@ Upgrading a running server is much more complex. First please check the version
|
||||||
- Restore business services
|
- Restore business services
|
||||||
|
|
||||||
:::warning
|
:::warning
|
||||||
|
|
||||||
TDengine doesn't guarantee any lower version is compatible with the data generated by a higher version, so it's never recommended to downgrade the version.
|
TDengine doesn't guarantee any lower version is compatible with the data generated by a higher version, so it's never recommended to downgrade the version.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
|
@ -1,40 +1,32 @@
|
||||||
---
|
---
|
||||||
|
sidebar_label: Resource Planning
|
||||||
title: Resource Planning
|
title: Resource Planning
|
||||||
---
|
---
|
||||||
|
|
||||||
It is important to plan computing and storage resources if using TDengine to build an IoT, time-series or Big Data platform. How to plan the CPU, memory and disk resources required, will be described in this chapter.
|
It is important to plan computing and storage resources if using TDengine to build an IoT, time-series or Big Data platform. How to plan the CPU, memory and disk resources required, will be described in this chapter.
|
||||||
|
|
||||||
## Memory Requirement of Server Side
|
## Server Memory Requirements
|
||||||
|
|
||||||
By default, the number of vgroups created for each database is the same as the number of CPU cores. This can be configured by the parameter `maxVgroupsPerDb`. Each vnode in a vgroup stores one replica. Each vnode consumes a fixed amount of memory, i.e. `blocks` \* `cache`. In addition, some memory is required for tag values associated with each table. A fixed amount of memory is required for each cluster. So, the memory required for each DB can be calculated using the formula below:
|
Each database creates a fixed number of vgroups. This number is 2 by default and can be configured with the `vgroups` parameter. The number of replicas can be controlled with the `replica` parameter. Each replica requires one vnode per vgroup. Altogether, the memory required by each database depends on the following configuration options:
|
||||||
|
|
||||||
|
- vgroups
|
||||||
|
- replica
|
||||||
|
- buffer
|
||||||
|
- pages
|
||||||
|
- pagesize
|
||||||
|
- cachesize
|
||||||
|
|
||||||
|
For more information, see [Database](../../taos-sql/database).
|
||||||
|
|
||||||
|
The memory required by a database is therefore greater than or equal to:
|
||||||
|
|
||||||
```
|
```
|
||||||
Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
|
vgroups * replica * (buffer + pages * pagesize + cachesize)
|
||||||
```
|
```
|
||||||
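As a rough, illustrative calculation only (the values below are assumptions chosen for the example, not necessarily the defaults): with `vgroups` = 2, `replica` = 3, `buffer` = 256 MB, `pages` × `pagesize` = 1 MB, and `cachesize` = 1 MB per vnode, the database needs at least about 1.5 GB across the cluster:

```bash
# 2 vgroups * 3 replicas * (256 MB buffer + 1 MB pages*pagesize + 1 MB cachesize)
echo "$(( 2 * 3 * (256 + 1 + 1) )) MB"   # prints "1548 MB", roughly 1.5 GB
```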
|
|
||||||
For example, assuming the default value of `maxVgroupPerDB` is 64, the default value of `cache` is 16M, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, total length of tag values is 256 bytes, the total memory required for this DB is: 64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 6792M.
|
However, note that this requirement is spread over all dnodes in the cluster, not on a single physical machine. The physical servers that run dnodes meet the requirement together. If a cluster has multiple databases, the memory required increases accordingly. In complex environments where dnodes were added after initial deployment in response to increasing resource requirements, load may not be balanced among the original dnodes and newer dnodes. In this situation, the actual status of your dnodes is more important than theoretical calculations.
|
||||||
|
|
||||||
In the real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`.
|
## Client Memory Requirements
|
||||||
|
|
||||||
```
|
|
||||||
taosd_memory = vnode_memory + mnode_memory + query_memory
|
|
||||||
```
|
|
||||||
|
|
||||||
In the above formula:
|
|
||||||
|
|
||||||
1. "vnode_memory" of a `taosd` process is the memory used by all vnodes hosted by this `taosd` process. It can be roughly calculated by firstly adding up the total memory of all DBs whose memory usage can be derived according to the formula for Database Memory Size, mentioned above, then dividing by number of dnodes and multiplying the number of replicas.
|
|
||||||
|
|
||||||
```
|
|
||||||
vnode_memory = (sum(Database Memory Size) / number_of_dnodes) * replica
|
|
||||||
```
|
|
||||||
|
|
||||||
2. "mnode_memory" of a `taosd` process is the memory consumed by a mnode. If there is one (and only one) mnode hosted in a `taosd` process, the memory consumed by "mnode" is "0.2KB \* the total number of tables in the cluster".
|
|
||||||
|
|
||||||
3. "query_memory" is the memory used when processing query requests. Each ongoing query consumes at least "0.2 KB \* total number of involved tables".
|
|
||||||
|
|
||||||
Please note that the above formulas can only be used to estimate the minimum memory requirement, instead of maximum memory usage. In a real production environment, it's better to reserve some redundance beyond the estimated minimum memory requirement. If memory is abundant, it's suggested to increase the value of parameter `blocks` to speed up data insertion and data query.
|
|
||||||
|
|
||||||
## Memory Requirement of Client Side
|
|
||||||
|
|
||||||
Client programs that use the TDengine client driver `taosc` to connect to the server side also have a memory requirement.
|
Client programs that use the TDengine client driver `taosc` to connect to the server side also have a memory requirement.
|
||||||
|
|
||||||
|
@ -56,10 +48,10 @@ So, at least 3GB needs to be reserved for such a client.
|
||||||
|
|
||||||
The CPU resources required depend on two aspects:
|
The CPU resources required depend on two aspects:
|
||||||
|
|
||||||
- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The difference in computing resource consumed, between inserting 1 row at a time, and inserting 10 rows at a time is very small. So, the more the number of rows that can be inserted one time, the higher the efficiency. Inserting in batch also imposes requirements on the client side which needs to cache rows to insert in batch once the number of cached rows reaches a threshold.
|
- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests per second, and each insertion request can contain multiple rows. The difference in computing resources consumed between inserting 1 row at a time and inserting 10 rows at a time is very small, so the more rows inserted per request, the higher the efficiency. If each insert request contains more than 200 records, a single core can process more than 1 million records per second. Batch insertion also imposes a requirement on the client side, which needs to cache rows and insert them in a batch once the number of cached rows reaches a threshold (see the sketch after this list).
|
||||||
- **Data Query** High efficiency query is provided in TDengine, but it's hard to estimate the CPU resource required because the queries used in different use cases and the frequency of queries vary significantly. It can only be verified with the query statements, query frequency, data size to be queried, and other requirements provided by users.
|
- **Data Query** High efficiency query is provided in TDengine, but it's hard to estimate the CPU resource required because the queries used in different use cases and the frequency of queries vary significantly. It can only be verified with the query statements, query frequency, data size to be queried, and other requirements provided by users.
|
||||||
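A hedged illustration of batch insertion from the shell; the database, table, and values are placeholders:

```bash
# One INSERT request carrying several rows costs roughly the same as a single-row request
taos -s "INSERT INTO power.d1001 VALUES \
  ('2022-07-30 06:44:40.000', 10.3, 219, 0.31) \
  ('2022-07-30 06:44:41.000', 12.6, 218, 0.33) \
  ('2022-07-30 06:44:42.000', 11.8, 221, 0.28);"
```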
|
|
||||||
In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. In real operation, it's suggested to control CPU usage below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources.
|
In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. If possible, ensure that CPU usage remains below 50%. If this threshold is exceeded, it's a reminder for the system operator to add more nodes to the cluster to expand resources.
|
||||||
|
|
||||||
## Disk Requirement
|
## Disk Requirement
|
||||||
|
|
||||||
|
@ -77,6 +69,6 @@ To increase performance, multiple disks can be setup for parallel data reading o
|
||||||
|
|
||||||
## Number of Hosts
|
## Number of Hosts
|
||||||
|
|
||||||
A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily.
|
A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulae mentioned previously. If the number of data replicas is not 1, the required resources are multiplied by the number of replicas.
|
||||||
|
|
||||||
**Quick Estimation for CPU, Memory and Disk** Please refer to [Resource Estimate](https://www.taosdata.com/config/config.html).
|
Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily.
|
||||||
|
|
|
@ -1,6 +1,5 @@
|
||||||
---
|
---
|
||||||
sidebar_label: Fault Tolerance
|
title: Fault Tolerance and Disaster Recovery
|
||||||
title: Fault Tolerance & Disaster Recovery
|
|
||||||
---
|
---
|
||||||
|
|
||||||
## Fault Tolerance
|
## Fault Tolerance
|
||||||
|
@ -11,22 +10,21 @@ When a data block is received by TDengine, the original data block is first writ
|
||||||
|
|
||||||
There are 2 configuration parameters related to WAL:
|
There are 2 configuration parameters related to WAL:
|
||||||
|
|
||||||
- walLevel:
|
- wal_level: Specifies the WAL level. 1 indicates that WAL is enabled but fsync is disabled. 2 indicates that WAL and fsync are both enabled. The default value is 1.
|
||||||
- 0:wal is disabled
|
- wal_fsync_period: This parameter is only valid when wal_level is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written.
|
||||||
- 1:wal is enabled without fsync
|
|
||||||
- 2:wal is enabled with fsync
|
|
||||||
- fsync:This parameter is only valid when walLevel is set to 2. It specifies the interval, in milliseconds, of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written.
|
|
||||||
|
|
||||||
To achieve absolutely no data loss, walLevel should be set to 2 and fsync should be set to 1. There is a performance penalty to the data ingestion rate. However, if the concurrent data insertion threads on the client side can reach a big enough number, for example 50, the data ingestion performance will be still good enough. Our verification shows that the drop is only 30% when fsync is set to 3,000 milliseconds.
|
To achieve absolutely no data loss, set wal_level to 2 and wal_fsync_period to 0. There is a performance penalty to the data ingestion rate. However, if the concurrent data insertion threads on the client side can reach a big enough number, for example 50, the data ingestion performance will be still good enough. Our verification shows that the drop is only 30% when wal_fsync_period is set to 3000 milliseconds.
|
||||||
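For illustration only, WAL behavior is configured per database. The sketch below uses a hypothetical database named `power`; the option names follow the description above, but check them against your TDengine version:

```bash
# Enable WAL with periodic fsync every 3000 ms for one database
taos -s "CREATE DATABASE IF NOT EXISTS power WAL_LEVEL 2 WAL_FSYNC_PERIOD 3000;"
taos -s "SHOW CREATE DATABASE power;"   # verify the effective options
```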
|
|
||||||
## Disaster Recovery
|
## Disaster Recovery
|
||||||
|
|
||||||
TDengine uses replication to provide high availability and disaster recovery capability.
|
TDengine uses replication to provide high availability.
|
||||||
|
|
||||||
A TDengine cluster is managed by mnode. To ensure the high availability of mnode, multiple replicas can be configured by the system parameter `numOfMnodes`. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
|
A TDengine cluster is managed by mnodes. You can configure up to three mnodes to ensure high availability. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
|
||||||
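As an illustration, additional mnodes can be created on existing dnodes from the TDengine CLI. The dnode IDs below are placeholders and should be taken from `SHOW DNODES`; the statement form is an assumption for TDengine 3.0:

```bash
taos -s "CREATE MNODE ON DNODE 2;"   # add a second mnode
taos -s "CREATE MNODE ON DNODE 3;"   # add a third mnode
taos -s "SHOW MNODES;"               # confirm that three mnodes are running
```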
|
|
||||||
The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
|
The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to specify the number of replicas. To achieve high availability, set `replica` to 3.
|
||||||
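For example (the database name is a placeholder), a highly available database can be created with three replicas:

```bash
taos -s "CREATE DATABASE IF NOT EXISTS power REPLICA 3;"
```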
|
|
||||||
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.
|
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.
|
||||||
|
|
||||||
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
|
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
|
||||||
|
|
||||||
|
Alternatively, you can use taosX to synchronize the data from one TDengine cluster to another cluster in a remote location. However, taosX is only available in the TDengine enterprise edition; for more information, please contact tdengine.com.
|
||||||
|
|
|
@ -1,50 +0,0 @@
|
||||||
---
|
|
||||||
title: User Management
|
|
||||||
---
|
|
||||||
|
|
||||||
A system operator can use TDengine CLI `taos` to create or remove users or change passwords. The SQL commands are documented below:
|
|
||||||
|
|
||||||
## Create User
|
|
||||||
|
|
||||||
```sql
|
|
||||||
CREATE USER <user_name> PASS <'password'>;
|
|
||||||
```
|
|
||||||
|
|
||||||
When creating a user and specifying the user name and password, the password needs to be quoted using single quotes.
|
|
||||||
|
|
||||||
## Drop User
|
|
||||||
|
|
||||||
```sql
|
|
||||||
DROP USER <user_name>;
|
|
||||||
```
|
|
||||||
|
|
||||||
Dropping a user can only be performed by root.
|
|
||||||
|
|
||||||
## Change Password
|
|
||||||
|
|
||||||
```sql
|
|
||||||
ALTER USER <user_name> PASS <'password'>;
|
|
||||||
```
|
|
||||||
|
|
||||||
To keep the case of the password when changing password, the password needs to be quoted using single quotes.
|
|
||||||
|
|
||||||
## Change Privilege
|
|
||||||
|
|
||||||
```sql
|
|
||||||
ALTER USER <user_name> PRIVILEGE <write|read>;
|
|
||||||
```
|
|
||||||
|
|
||||||
The privileges that can be changed to are `read` or `write` without single quotes.
|
|
||||||
|
|
||||||
Note:there is another privilege `super`, which is not allowed to be authorized to any user.
|
|
||||||
|
|
||||||
## Show Users
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW USERS;
|
|
||||||
```
|
|
||||||
|
|
||||||
:::note
|
|
||||||
In SQL syntax, `< >` means the part that needs to be input by the user, excluding the `< >` itself.
|
|
||||||
|
|
||||||
:::
|
|
|
@ -1,54 +0,0 @@
|
||||||
---
|
|
||||||
sidebar_label: Connections & Tasks
|
|
||||||
title: Manage Connections and Query Tasks
|
|
||||||
---
|
|
||||||
|
|
||||||
A system operator can use the TDengine CLI to show connections, ongoing queries, stream computing, and can close connections or stop ongoing query tasks or stream computing.
|
|
||||||
|
|
||||||
## Show Connections
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW CONNECTIONS;
|
|
||||||
```
|
|
||||||
|
|
||||||
One column of the output of the above SQL command is "ip:port", which is the end point of the client.
|
|
||||||
|
|
||||||
## Force Close Connections
|
|
||||||
|
|
||||||
```sql
|
|
||||||
KILL CONNECTION <connection-id>;
|
|
||||||
```
|
|
||||||
|
|
||||||
In the above SQL command, `connection-id` is from the first column of the output of `SHOW CONNECTIONS`.
|
|
||||||
|
|
||||||
## Show Ongoing Queries
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW QUERIES;
|
|
||||||
```
|
|
||||||
|
|
||||||
The first column of the output is query ID, which is composed of the corresponding connection ID and the sequence number of the current query task started on this connection. The format is "connection-id:query-no".
|
|
||||||
|
|
||||||
## Force Close Queries
|
|
||||||
|
|
||||||
```sql
|
|
||||||
KILL QUERY <query-id>;
|
|
||||||
```
|
|
||||||
|
|
||||||
In the above SQL command, `query-id` is from the first column of the output of `SHOW QUERIES `.
|
|
||||||
|
|
||||||
## Show Continuous Query
|
|
||||||
|
|
||||||
```sql
|
|
||||||
SHOW STREAMS;
|
|
||||||
```
|
|
||||||
|
|
||||||
The first column of the output is stream ID, which is composed of the connection ID and the sequence number of the current stream started on this connection. The format is "connection-id:stream-no".
|
|
||||||
|
|
||||||
## Force Close Continuous Query
|
|
||||||
|
|
||||||
```sql
|
|
||||||
KILL STREAM <stream-id>;
|
|
||||||
```
|
|
||||||
|
|
||||||
The above SQL command, `stream-id` is from the first column of the output of `SHOW STREAMS`.
|
|
|
@ -14,109 +14,58 @@ Diagnostic steps:
|
||||||
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server".
|
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server".
|
||||||
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.
|
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.
|
||||||
|
|
||||||
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000. Please note that the package length must be same in the above 2 commands executed on server side and client side respectively.
|
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000.
|
||||||
|
Please note that the package length must be the same in the above 2 commands executed on the server side and client side respectively.
|
||||||
|
|
||||||
Output of the server side for the example is below:
|
Output of the server side for the example is below:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# taos -n server -P 6000
|
# taos -n server -P 6030 -l 1000
|
||||||
12/21 14:50:13.522509 0x7f536f455200 UTL work as server, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000
|
network test server is initialized, port:6030
|
||||||
|
request is received, size:1000
|
||||||
12/21 14:50:13.522659 0x7f5352242700 UTL TCP server at port:6000 is listening
|
request is received, size:1000
|
||||||
12/21 14:50:13.522727 0x7f5351240700 UTL TCP server at port:6001 is listening
|
|
||||||
...
|
...
|
||||||
...
|
...
|
||||||
...
|
...
|
||||||
12/21 14:50:13.523954 0x7f5342fed700 UTL TCP server at port:6011 is listening
|
request is received, size:1000
|
||||||
12/21 14:50:13.523989 0x7f53437ee700 UTL UDP server at port:6010 is listening
|
request is received, size:1000
|
||||||
12/21 14:50:13.524019 0x7f53427ec700 UTL UDP server at port:6011 is listening
|
|
||||||
12/21 14:50:22.192849 0x7f5352242700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6000
|
|
||||||
12/21 14:50:22.192993 0x7f5352242700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6000
|
|
||||||
12/21 14:50:22.237082 0x7f5351a41700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6000
|
|
||||||
12/21 14:50:22.237203 0x7f5351a41700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6000
|
|
||||||
12/21 14:50:22.237450 0x7f5351240700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6001
|
|
||||||
12/21 14:50:22.237576 0x7f5351240700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6001
|
|
||||||
12/21 14:50:22.281038 0x7f5350a3f700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6001
|
|
||||||
12/21 14:50:22.281141 0x7f5350a3f700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6001
|
|
||||||
...
|
|
||||||
...
|
|
||||||
...
|
|
||||||
12/21 14:50:22.677443 0x7f5342fed700 UTL TCP: read:1000 bytes from 172.27.0.8 at 6011
|
|
||||||
12/21 14:50:22.677576 0x7f5342fed700 UTL TCP: write:1000 bytes to 172.27.0.8 at 6011
|
|
||||||
12/21 14:50:22.721144 0x7f53427ec700 UTL UDP: recv:1000 bytes from 172.27.0.8 at 6011
|
|
||||||
12/21 14:50:22.721261 0x7f53427ec700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6011
|
|
||||||
```
|
```
|
||||||
|
|
||||||
Output of the client side for the example is below:
|
Output of the client side for the example is below:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
# taos -n client -h 172.27.0.7 -P 6000
|
# taos -n client -h 172.27.0.7 -P 6000
|
||||||
12/21 14:50:22.192434 0x7fc95d859200 UTL work as client, host:172.27.0.7 startPort:6000 endPort:6011 pkgLen:1000
|
taos -n client -h v3s2 -P 6030 -l 1000
|
||||||
|
network test client is initialized, the server is v3s2:6030
|
||||||
|
request is sent, size:1000
|
||||||
|
response is received, size:1000
|
||||||
|
request is sent, size:1000
|
||||||
|
response is received, size:1000
|
||||||
|
...
|
||||||
|
...
|
||||||
|
...
|
||||||
|
request is sent, size:1000
|
||||||
|
response is received, size:1000
|
||||||
|
request is sent, size:1000
|
||||||
|
response is received, size:1000
|
||||||
|
|
||||||
12/21 14:50:22.192472 0x7fc95d859200 UTL server ip:172.27.0.7 is resolved from host:172.27.0.7
|
total succ: 100/100 cost: 16.23 ms speed: 5.87 MB/s
|
||||||
12/21 14:50:22.236869 0x7fc95d859200 UTL successed to test TCP port:6000
|
|
||||||
12/21 14:50:22.237215 0x7fc95d859200 UTL successed to test UDP port:6000
|
|
||||||
...
|
|
||||||
...
|
|
||||||
...
|
|
||||||
12/21 14:50:22.676891 0x7fc95d859200 UTL successed to test TCP port:6010
|
|
||||||
12/21 14:50:22.677240 0x7fc95d859200 UTL successed to test UDP port:6010
|
|
||||||
12/21 14:50:22.720893 0x7fc95d859200 UTL successed to test TCP port:6011
|
|
||||||
12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011
|
|
||||||
```
|
```
|
||||||
|
|
||||||
The system operator needs to check the output carefully to find the root cause and resolve the problem.
|
The system operator needs to check the output carefully to find the root cause and resolve the problem.
|
||||||
|
|
||||||
## Startup Status and RPC Diagnostic
|
|
||||||
|
|
||||||
`taos -n startup -h <fqdn of server>` can be used to check the startup status of a `taosd` process. This is a common task which should be performed by a system operator, especially in the case of a cluster, to determine whether `taosd` has been started successfully.
|
|
||||||
|
|
||||||
`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or is working abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's a network problem or whether `taosd` is abnormal.
|
|
||||||
|
|
||||||
## Sync and Arbitrator Diagnostic
|
|
||||||
|
|
||||||
```bash
|
|
||||||
taos -n sync -P 6040 -h <fqdn of server>
|
|
||||||
taos -n sync -P 6042 -h <fqdn of server>
|
|
||||||
```
|
|
||||||
|
|
||||||
The above commands can be executed in a Linux shell to check whether the port for sync is working well and whether the sync module on the server side is working well. Additionally, `-P 6042` is used to check whether the arbitrator is configured properly and is working well.
|
|
||||||
|
|
||||||
## Network Speed Diagnostic
|
|
||||||
|
|
||||||
`taos -n speed -h <fqdn of server> -P 6030 -N 10 -l 10000000 -S TCP`
|
|
||||||
|
|
||||||
From version 2.2.0.0 onwards, the above command can be executed in a Linux shell to test network speed. The command sends uncompressed packages to a running `taosd` server process or a simulated server process started by `taos -n server` to test the network speed. Parameters can be used when testing network speed are as below:
|
|
||||||
|
|
||||||
-n:When set to "speed", it means testing network speed.
|
|
||||||
-h:The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used.
|
|
||||||
-P:The port of the server process to connect to, the default value is 6030.
|
|
||||||
-N:The number of packages that will be sent in the test, range is [1,10000], default value is 100.
|
|
||||||
-l:The size of each package in bytes, range is [1024, 1024 \* 1024 \* 1024], default value is 1024.
|
|
||||||
-S:The type of network packages to send, can be either TCP or UDP, default value is TCP.
|
|
||||||
|
|
||||||
## FQDN Resolution Diagnostic
|
|
||||||
|
|
||||||
`taos -n fqdn -h <fqdn of server>`
|
|
||||||
|
|
||||||
From version 2.2.0.0 onward, the above command can be executed in a Linux shell to test the resolution speed of FQDN. It can be used to try to resolve a FQDN to an IP address and record the time spent in this process. The parameters that can be used for this purpose are as below:
|
|
||||||
|
|
||||||
-n:When set to "fqdn", it means testing the speed of resolving FQDN.
|
|
||||||
-h:The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default.
|
|
||||||
|
|
||||||
## Server Log
|
## Server Log
|
||||||
|
|
||||||
The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively.
|
The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively.
|
||||||
|
|
||||||
Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is a huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily and so on the server side, important information is stored in a different place from other logs.
|
Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is a huge volume of data insertion and data query requests. Ensure that the disk drive on which logs are stored has sufficient space.
|
||||||
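A hedged sketch of raising the server log level for a debugging session; the configuration file path may differ on your installation:

```bash
# Append the debug log level to the server configuration and restart taosd
echo "debugFlag 135" | sudo tee -a /etc/taos/taos.cfg
sudo systemctl restart taosd
# Remember to set it back to 131 afterwards to keep log growth under control
```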
|
|
||||||
- The log at level of INFO, WARNING and ERROR is stored in `taosinfo` so that it is easy to find important information
|
|
||||||
- The log at level of DEBUG (135) and TRACE (143) and other information not handled by `taosinfo` are stored in `taosdlog`
|
|
||||||
|
|
||||||
## Client Log
|
## Client Log
|
||||||
|
|
||||||
An independent log file, named as "taoslog+<seq num\>" is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded. As stated above, for debugging and tracing, it needs to be changed to 135 or 143 respectively, so that logs at DEBUG or TRACE level can be recorded.
|
An independent log file, named "taoslog+<seq num\>", is generated for each client program, i.e. a client process. The parameter `debugFlag` is used to control the log level. The default value is 131. For debugging and tracing, it needs to be set to either 135 or 143 respectively.
|
||||||
|
|
||||||
|
The default value of `debugFlag` is also 131 and only logs at level of INFO/ERROR/WARNING are recorded. As stated above, for debugging and tracing, it needs to be changed to 135 or 143 respectively, so that logs at DEBUG or TRACE level can be recorded.
|
||||||
|
|
||||||
The maximum length of a single log file is controlled by parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process.
|
The maximum length of a single log file is controlled by parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process.
|
||||||
|
|
||||||
Log files are written in an async way to minimize the workload on disk, but the trade off for performance is that a few log lines may be lost in some extreme conditions.
|
Log files are written in an async way to minimize the workload on disk, but the trade-off for performance is that a few log lines may be lost in some extreme conditions. You can set `asyncLog` to 0 when needed for troubleshooting purposes to ensure that no log information is lost.
|
||||||
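For troubleshooting, synchronous logging can be enabled in a similar way, assuming the standard configuration file location:

```bash
# Write every log line synchronously so nothing is lost if the process crashes
echo "asyncLog 0" | sudo tee -a /etc/taos/taos.cfg
sudo systemctl restart taosd
```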
|
|
|
@ -5,12 +5,12 @@ title: REST API
|
||||||
To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles; namely REST API. To minimize the learning cost, unlike REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request, to operate the database.
|
To support the development of various types of applications and platforms, TDengine provides an API that conforms to REST principles; namely REST API. To minimize the learning cost, unlike REST APIs for other database engines, TDengine allows insertion of SQL commands in the BODY of an HTTP POST request, to operate the database.
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
One difference from the native connector is that the REST interface is stateless and so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name in the prefix. (Since version 2.2.0.0, TDengine supports specification of the db_name in RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used. Since version 2.4.0.0, REST service is provided by taosAdapter by default and it requires that the `db_name` must be specified in the URL.)
|
One difference from the native connector is that the REST interface is stateless and so the `USE db_name` command has no effect. All references to table names and super table names need to specify the database name in the prefix. TDengine supports specification of the db_name in RESTful URL. If the database name prefix is not specified in the SQL command, the `db_name` specified in the URL will be used.
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## Installation
|
## Installation
|
||||||
|
|
||||||
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol.
|
The REST interface does not rely on any TDengine native library, so the client application does not need to install any TDengine libraries. The client application's development language only needs to support the HTTP protocol. The REST interface is provided by [taosAdapter](../taosadapter); to use the REST interface, make sure `taosAdapter` is running properly.
|
||||||
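A quick, hedged way to confirm that `taosAdapter` is up before issuing REST requests; the service name and the `/-/ping` health endpoint are assumptions, so adjust them to your deployment:

```bash
systemctl status taosadapter              # service name as typically installed by the TDengine packages
curl -s http://localhost:6041/-/ping      # health-check endpoint; path is an assumption
```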
|
|
||||||
## Verification
|
## Verification
|
||||||
|
|
||||||
|
@ -18,86 +18,77 @@ If the TDengine server is already installed, it can be verified as follows:
|
||||||
|
|
||||||
The following example is in an Ubuntu environment and uses the `curl` tool to verify that the REST interface is working. Note that the `curl` tool may need to be installed in your environment.
|
The following example is in an Ubuntu environment and uses the `curl` tool to verify that the REST interface is working. Note that the `curl` tool may need to be installed in your environment.
|
||||||
|
|
||||||
The following example lists all databases on the host h1.taosdata.com. To use it in your environment, replace `h1.taosdata.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.
|
The following example lists all databases on the host h1.tdengine.com. To use it in your environment, replace `h1.tdengine.com` and `6041` (the default port) with the actual running TDengine service FQDN and port number.
|
||||||
|
|
||||||
```html
|
```bash
|
||||||
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" h1.taosdata.com:6041/rest/sql
|
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \
|
||||||
|
-d "select name, ntables, status from information_schema.ins_databases;" \
|
||||||
|
h1.tdengine.com:6041/rest/sql
|
||||||
```
|
```
|
||||||
|
|
||||||
The following return value results indicate that the verification passed.
|
The following return value results indicate that the verification passed.
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"status": "succ",
|
"code": 0,
|
||||||
"head": [
|
"column_meta": [
|
||||||
|
[
|
||||||
"name",
|
"name",
|
||||||
"created_time",
|
"VARCHAR",
|
||||||
|
64
|
||||||
|
],
|
||||||
|
[
|
||||||
"ntables",
|
"ntables",
|
||||||
"vgroups",
|
"BIGINT",
|
||||||
"replica",
|
8
|
||||||
"quorum",
|
],
|
||||||
"days",
|
[
|
||||||
"keep1,keep2,keep(D)",
|
"status",
|
||||||
"cache(MB)",
|
"VARCHAR",
|
||||||
"blocks",
|
10
|
||||||
"minrows",
|
]
|
||||||
"maxrows",
|
|
||||||
"wallevel",
|
|
||||||
"fsync",
|
|
||||||
"comp",
|
|
||||||
"precision",
|
|
||||||
"status"
|
|
||||||
],
|
],
|
||||||
"data": [
|
"data": [
|
||||||
[
|
[
|
||||||
"log",
|
"information_schema",
|
||||||
"2020-09-02 17:23:00.039",
|
16,
|
||||||
4,
|
"ready"
|
||||||
1,
|
],
|
||||||
1,
|
[
|
||||||
1,
|
"performance_schema",
|
||||||
10,
|
9,
|
||||||
"30,30,30",
|
|
||||||
1,
|
|
||||||
3,
|
|
||||||
100,
|
|
||||||
4096,
|
|
||||||
1,
|
|
||||||
3000,
|
|
||||||
2,
|
|
||||||
"us",
|
|
||||||
"ready"
|
"ready"
|
||||||
]
|
]
|
||||||
],
|
],
|
||||||
"rows": 1
|
"rows": 2
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## HTTP request URL format
|
## HTTP request URL format
|
||||||
|
|
||||||
```
|
```text
|
||||||
http://<fqdn>:<port>/rest/sql/[db_name]
|
http://<fqdn>:<port>/rest/sql/[db_name]
|
||||||
```
|
```
|
||||||
|
|
||||||
Parameter Description:
|
Parameter Description:
|
||||||
|
|
||||||
- fqnd: FQDN or IP address of any host in the cluster
|
- fqdn: FQDN or IP address of any host in the cluster.
|
||||||
- port: httpPort configuration item in the configuration file, default is 6041
|
- port: httpPort configuration item in the configuration file, default is 6041.
|
||||||
- db_name: Optional parameter that specifies the default database name for the executed SQL command. (supported since version 2.2.0.0)
|
- db_name: Optional parameter that specifies the default database name for the executed SQL command.
|
||||||
|
|
||||||
For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
|
For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
|
||||||
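With the database name in the URL, queries can reference tables without a `db.` prefix. In the hedged example below, the host and database come from the URL above, while the `meters` table is a placeholder:

```bash
curl -L -u root:taosdata -d "SELECT COUNT(*) FROM meters;" \
  h1.taos.com:6041/rest/sql/test
```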
|
|
||||||
TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.
|
TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.
|
||||||
|
|
||||||
- The custom authentication information is as follows. More details about "token" later.
|
- Custom authentication information is shown below:
|
||||||
|
|
||||||
```
|
```text
|
||||||
Authorization: Taosd <TOKEN>
|
Authorization: Taosd <TOKEN>
|
||||||
```
|
```
|
||||||
|
|
||||||
- Basic authentication information is shown below
|
- Basic authentication information is shown below:
|
||||||
|
|
||||||
```
|
```text
|
||||||
Authorization: Basic <TOKEN>
|
Authorization: Basic <TOKEN>
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -109,51 +100,148 @@ Use `curl` to initiate an HTTP request with a custom authentication method, with
|
||||||
curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
|
curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
|
||||||
```
|
```
|
||||||
|
|
||||||
Or
|
or
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
|
curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
|
||||||
```
|
```
|
||||||
|
|
||||||
where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
|
where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.g. `root:taosdata` is encoded as `cm9vdDp0YW9zZGF0YQ==`.
|
||||||
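The token can be generated with any Base64 tool, for example:

```bash
echo -n "root:taosdata" | base64   # prints cm9vdDp0YW9zZGF0YQ==
```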
|
|
||||||
## HTTP Return Format
|
## HTTP Return Format
|
||||||
|
|
||||||
The return result is in JSON format, as follows:
|
### HTTP Response Code
|
||||||
|
|
||||||
|
| **Response Code** | **Description** |
|
||||||
|
|-------------------|----------------|
|
||||||
|
| 200 | Success. (Also used for C interface errors.) |
|
||||||
|
| 400 | Parameter error |
|
||||||
|
| 401 | Authentication failure |
|
||||||
|
| 404 | Interface not found |
|
||||||
|
| 500 | Internal error |
|
||||||
|
| 503 | Insufficient system resources |
|
||||||
|
|
||||||
|
### HTTP body structure
|
||||||
|
|
||||||
|
#### Successful Operation
|
||||||
|
|
||||||
|
Example:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"status": "succ",
|
"code": 0,
|
||||||
"head": ["ts", "current", ...],
|
"column_meta": [["affected_rows", "INT", 4]],
|
||||||
"column_meta": [["ts",9,8],["current",6,4], ...],
|
"data": [[0]],
|
||||||
"data": [
|
"rows": 1
|
||||||
["2018-10-03 14:38:05.000", 10.3, ...],
|
|
||||||
["2018-10-03 14:38:15.000", 12.6, ...]
|
|
||||||
],
|
|
||||||
"rows": 2
|
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Description:
|
Description:
|
||||||
|
|
||||||
- status: tells you whethre the operation result is success or failure.
|
- code: (`int`) 0 indicates success.
|
||||||
- head: the definition of the table, or just one column "affected_rows" if no result set is returned. (As of version 2.0.17.0, it is recommended not to rely on the head return value to determine the data column type but rather use column_meta. In later versions, the head item may be removed from the return value.)
|
- column_meta: (`[1][3]any`) Only returns `[["affected_rows", "INT", 4]]`.
|
||||||
- column_meta: this item is added to the return value to indicate the data type of each column in the data with version 2.0.17.0 and later versions. Each column is described by three values: column name, column type, and type length. For example, `["current",6,4]` means that the column name is "current", the column type is 6, which is the float type, and the type length is 4, which is the float type with 4 bytes. If the column type is binary or nchar, the type length indicates the maximum length of content stored in the column, not the length of the specific data in this return value. When the column type is nchar, the type length indicates the number of Unicode characters that can be saved, not bytes.
|
- rows: (`int`) Only returns `1`.
|
||||||
- data: The exact data returned, presented row by row, or just [[affected_rows]] if no result set is returned. The order of the data columns in each row of data is the same as that of the data columns described in column_meta.
|
- data: (`[][]any`) Returns the number of rows affected.
|
||||||
- rows: Indicates how many rows of data there are.
|
|
||||||
|
|
||||||
The column types in column_meta are described as follows:
|
#### Successful Query
|
||||||
|
|
||||||
- 1:BOOL
|
Example:
|
||||||
- 2:TINYINT
|
|
||||||
- 3:SMALLINT
|
```json
|
||||||
- 4:INT
|
{
|
||||||
- 5:BIGINT
|
"code": 0,
|
||||||
- 6:FLOAT
|
"column_meta": [
|
||||||
- 7:DOUBLE
|
["ts", "TIMESTAMP", 8],
|
||||||
- 8:BINARY
|
["count", "BIGINT", 8],
|
||||||
- 9:TIMESTAMP
|
["endpoint", "VARCHAR", 45],
|
||||||
- 10:NCHAR
|
["status_code", "INT", 4],
|
||||||
|
["client_ip", "VARCHAR", 40],
|
||||||
|
["request_method", "VARCHAR", 15],
|
||||||
|
["request_uri", "VARCHAR", 128]
|
||||||
|
],
|
||||||
|
"data": [
|
||||||
|
[
|
||||||
|
"2022-06-29T05:50:55.401Z",
|
||||||
|
2,
|
||||||
|
"LAPTOP-NNKFTLTG:6041",
|
||||||
|
200,
|
||||||
|
"172.23.208.1",
|
||||||
|
"POST",
|
||||||
|
"/rest/sql"
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"2022-06-29T05:52:16.603Z",
|
||||||
|
1,
|
||||||
|
"LAPTOP-NNKFTLTG:6041",
|
||||||
|
200,
|
||||||
|
"172.23.208.1",
|
||||||
|
"POST",
|
||||||
|
"/rest/sql"
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"2022-06-29T06:28:14.118Z",
|
||||||
|
1,
|
||||||
|
"LAPTOP-NNKFTLTG:6041",
|
||||||
|
200,
|
||||||
|
"172.23.208.1",
|
||||||
|
"POST",
|
||||||
|
"/rest/sql"
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"2022-06-29T05:52:16.603Z",
|
||||||
|
2,
|
||||||
|
"LAPTOP-NNKFTLTG:6041",
|
||||||
|
401,
|
||||||
|
"172.23.208.1",
|
||||||
|
"POST",
|
||||||
|
"/rest/sql"
|
||||||
|
]
|
||||||
|
],
|
||||||
|
"rows": 4
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Description:
|
||||||
|
|
||||||
|
- code: (`int`) 0 indicates success.
|
||||||
|
- column_meta: (`[][3]any`) Column information. Each column is described with three values: column name (string), column type (string), and type length (int).
|
||||||
|
- rows: (`int`) The number of rows returned.
|
||||||
|
- data: (`[][]any`)
|
||||||
|
|
||||||
|
The following types may be returned:
|
||||||
|
|
||||||
|
- "NULL"
|
||||||
|
- "BOOL"
|
||||||
|
- "TINYINT"
|
||||||
|
- "SMALLINT"
|
||||||
|
- "INT"
|
||||||
|
- "BIGINT"
|
||||||
|
- "FLOAT"
|
||||||
|
- "DOUBLE"
|
||||||
|
- "VARCHAR"
|
||||||
|
- "TIMESTAMP"
|
||||||
|
- "NCHAR"
|
||||||
|
- "TINYINT UNSIGNED"
|
||||||
|
- "SMALLINT UNSIGNED"
|
||||||
|
- "INT UNSIGNED"
|
||||||
|
- "BIGINT UNSIGNED"
|
||||||
|
- "JSON"
|
||||||
|
|
||||||
|
#### Errors
|
||||||
|
|
||||||
|
Example:
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"code": 9728,
|
||||||
|
"desc": "syntax error near \"1\""
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
Description:
|
||||||
|
|
||||||
|
- code: (`int`) Error code.
|
||||||
|
- desc: (`string`) Error code description.
|
||||||
|
|
||||||
## Custom Authorization Code
|
## Custom Authorization Code
|
||||||
|
|
||||||
|
@ -165,11 +253,9 @@ curl http://<fqnd>:<port>/rest/login/<username>/<password>
|
||||||
|
|
||||||
Where `fqdn` is the FQDN or IP address of the TDengine database. `port` is the port number of the TDengine service. `username` is the database username. `password` is the database password. The return value is in `JSON` format, and the meaning of each field is as follows.
|
Where `fqdn` is the FQDN or IP address of the TDengine database. `port` is the port number of the TDengine service. `username` is the database username. `password` is the database password. The return value is in `JSON` format, and the meaning of each field is as follows.
|
||||||
|
|
||||||
- status: flag bit of the request result
|
- status: flag bit of the request result.
|
||||||
|
- code: return value code.
|
||||||
- code: return value code
|
- desc: authorization code.
|
||||||
|
|
||||||
- desc: authorization code
|
|
||||||
|
|
||||||
Example of getting authorization code.
|
Example of getting authorization code.
|
||||||
|
|
||||||
|
@ -187,7 +273,7 @@ Response body:
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## For example
|
## Usage examples
|
||||||
|
|
||||||
- query all records from table d1001 of database demo
|
- query all records from table d1001 of database demo
|
||||||
|
|
||||||
|
@ -199,17 +285,42 @@ Response body:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"status": "succ",
|
"code": 0,
|
||||||
"head": ["ts", "current", "voltage", "phase"],
|
|
||||||
"column_meta": [
|
"column_meta": [
|
||||||
["ts", 9, 8],
|
[
|
||||||
["current", 6, 4],
|
"ts",
|
||||||
["voltage", 4, 4],
|
"TIMESTAMP",
|
||||||
["phase", 6, 4]
|
8
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"current",
|
||||||
|
"FLOAT",
|
||||||
|
4
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"voltage",
|
||||||
|
"INT",
|
||||||
|
4
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"phase",
|
||||||
|
"FLOAT",
|
||||||
|
4
|
||||||
|
]
|
||||||
],
|
],
|
||||||
"data": [
|
"data": [
|
||||||
["2018-10-03 14:38:05.000", 10.3, 219, 0.31],
|
[
|
||||||
["2018-10-03 14:38:15.000", 12.6, 218, 0.33]
|
"2022-07-30T06:44:40.32Z",
|
||||||
|
10.3,
|
||||||
|
219,
|
||||||
|
0.31
|
||||||
|
],
|
||||||
|
[
|
||||||
|
"2022-07-30T06:44:41.32Z",
|
||||||
|
12.6,
|
||||||
|
218,
|
||||||
|
0.33
|
||||||
|
]
|
||||||
],
|
],
|
||||||
"rows": 2
|
"rows": 2
|
||||||
}
|
}
|
||||||
|
@ -225,83 +336,23 @@ Response body:
|
||||||
|
|
||||||
```json
|
```json
|
||||||
{
|
{
|
||||||
"status": "succ",
|
"code": 0,
|
||||||
"head": ["affected_rows"],
|
"column_meta": [
|
||||||
"column_meta": [["affected_rows", 4, 4]],
|
[
|
||||||
"data": [[1]],
|
"affected_rows",
|
||||||
|
"INT",
|
||||||
|
4
|
||||||
|
]
|
||||||
|
],
|
||||||
|
"data": [
|
||||||
|
[
|
||||||
|
0
|
||||||
|
]
|
||||||
|
],
|
||||||
"rows": 1
|
"rows": 1
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
## Other Uses
|
## Reference
|
||||||
|
|
||||||
### Unix timestamps for result sets
|
[taosAdapter](/reference/taosadapter/)
|
||||||
|
|
||||||
When the HTTP request URL uses `/rest/sqlt`, the returned result set's timestamp value will be in Unix timestamp format, for example:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "select * from demo.d1001" 192.168.0.1:6041/rest/sqlt
|
|
||||||
```
|
|
||||||
|
|
||||||
Response body:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"status": "succ",
|
|
||||||
"head": ["ts", "current", "voltage", "phase"],
|
|
||||||
"column_meta": [
|
|
||||||
["ts", 9, 8],
|
|
||||||
["current", 6, 4],
|
|
||||||
["voltage", 4, 4],
|
|
||||||
["phase", 6, 4]
|
|
||||||
],
|
|
||||||
"data": [
|
|
||||||
[1538548685000, 10.3, 219, 0.31],
|
|
||||||
[1538548695000, 12.6, 218, 0.33]
|
|
||||||
],
|
|
||||||
"rows": 2
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### UTC format for the result set
|
|
||||||
|
|
||||||
When the HTTP request URL uses `/rest/sqlutc`, the timestamp of the returned result set will be expressed as a UTC format, for example:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "select * from demo.t1" 192.168.0.1:6041/rest/sqlutc
|
|
||||||
```
|
|
||||||
|
|
||||||
Response body:
|
|
||||||
|
|
||||||
```json
|
|
||||||
{
|
|
||||||
"status": "succ",
|
|
||||||
"head": ["ts", "current", "voltage", "phase"],
|
|
||||||
"column_meta": [
|
|
||||||
["ts", 9, 8],
|
|
||||||
["current", 6, 4],
|
|
||||||
["voltage", 4, 4],
|
|
||||||
["phase", 6, 4]
|
|
||||||
],
|
|
||||||
"data": [
|
|
||||||
["2018-10-03T14:38:05.000+0800", 10.3, 219, 0.31],
|
|
||||||
["2018-10-03T14:38:15.000+0800", 12.6, 218, 0.33]
|
|
||||||
],
|
|
||||||
"rows": 2
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
## Important configuration items
|
|
||||||
|
|
||||||
Only some configuration parameters related to the RESTful interface are listed below. Please see the description in the configuration file for other system parameters.
|
|
||||||
|
|
||||||
- The port number of the external RESTful service is bound to 6041 by default (the actual value is serverPort + 11, so it can be changed by modifying the setting of the serverPort parameter).
|
|
||||||
- httpMaxThreads: the number of threads to start, default is 2 (the default value is rounded down to half of the CPU cores with version 2.0.17.0 and later versions).
|
|
||||||
- restfulRowLimit: the maximum number of result sets (in JSON format) to return. The default value is 10240.
|
|
||||||
- httpEnableCompress: whether to support compression, the default is not supported. Currently, TDengine only supports the gzip compression format.
|
|
||||||
- httpDebugFlag: logging switch, default is 131. 131: error and alarm messages only, 135: debug messages, 143: very detailed debug messages.
|
|
||||||
- httpDbNameMandatory: users must specify the default database name in the RESTful URL. The default is 0, which turns off this check. If set to 1, users must put a default database name in every RESTful URL. Otherwise, it will return an execution error and reject this SQL statement, regardless of whether the SQL statement executed at this time requires a specified database.
|
|
||||||
|
|
||||||
:::note
|
|
||||||
If you are using the REST API provided by taosd, you should write the above configuration in taosd's configuration file taos.cfg. If you use the REST API of taosAdapter, you need to refer to taosAdapter [corresponding configuration method](/reference/taosadapter/).
|
|
||||||
:::
|
|
||||||
|
|
|
@ -1,5 +1,4 @@
|
||||||
---
|
---
|
||||||
sidebar_position: 1
|
|
||||||
sidebar_label: C/C++
|
sidebar_label: C/C++
|
||||||
title: C/C++ Connector
|
title: C/C++ Connector
|
||||||
---
|
---
|
||||||
|
@ -12,8 +11,8 @@ C/C++ developers can use TDengine's client driver and the C/C++ connector, to de
|
||||||
|
|
||||||
After TDengine server or client installation, `taos.h` is located at
|
After TDengine server or client installation, `taos.h` is located at
|
||||||
|
|
||||||
- Linux: `/usr/local/taos/include`
|
- Linux: `/usr/local/taos/include`
|
||||||
- Windows: `C:\TDengine\include`
|
- Windows: `C:\TDengine\include`
|
||||||
|
|
||||||
The dynamic libraries for the TDengine client driver are located in:
|
The dynamic libraries for the TDengine client driver are located in:
|
||||||
|
|
||||||
|
@ -28,7 +27,7 @@ Please refer to [list of supported platforms](/reference/connector#supported-pla
|
||||||
|
|
||||||
The version number of the TDengine client driver and the version number of the TDengine server should be same. A lower version of the client driver is compatible with a higher version of the server, if the first three version numbers are the same (i.e., only the fourth version number is different). For e.g. if the client version is x.y.z.1 and the server version is x.y.z.2 the client and server are compatible. But in general we do not recommend using a lower client version with a newer server version. It is also strongly discouraged to use a higher version of the client driver to access a lower version of the TDengine server.
|
The version number of the TDengine client driver and the version number of the TDengine server should be same. A lower version of the client driver is compatible with a higher version of the server, if the first three version numbers are the same (i.e., only the fourth version number is different). For e.g. if the client version is x.y.z.1 and the server version is x.y.z.2 the client and server are compatible. But in general we do not recommend using a lower client version with a newer server version. It is also strongly discouraged to use a higher version of the client driver to access a lower version of the TDengine server.
|
||||||
|
|
||||||
## Installation steps
|
## Installation Steps
|
||||||
|
|
||||||
Please refer to the [Installation Steps](/reference/connector#installation-steps) for TDengine client driver installation
|
Please refer to the [Installation Steps](/reference/connector#installation-steps) for TDengine client driver installation
|
||||||
|
|
||||||
|
@ -45,7 +44,7 @@ The following is sample code for establishing a connection, which omits the quer
|
||||||
exit(1);
|
exit(1);
|
||||||
}
|
}
|
||||||
|
|
||||||
/* put your code here for query and writing */
|
/* put your code here for read and write */
|
||||||
|
|
||||||
taos_close(taos);
|
taos_close(taos);
|
||||||
taos_cleanup();
|
taos_cleanup();
|
||||||
|
@ -60,7 +59,7 @@ In the above example code, `taos_connect()` establishes a connection to port 603
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## Example program
|
## Sample program
|
||||||
|
|
||||||
This section shows sample code for standard access methods to TDengine clusters using the client driver.
|
This section shows sample code for standard access methods to TDengine clusters using the client driver.
|
||||||
|
|
||||||
|
@ -125,7 +124,7 @@ You can find it in the installation directory under the `examples/c` path. This
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
## API reference
|
## API Reference
|
||||||
|
|
||||||
The following describes the basic API, synchronous API, asynchronous API, subscription API, and schemaless write API of TDengine client driver, respectively.
|
The following describes the basic API, synchronous API, asynchronous API, subscription API, and schemaless write API of TDengine client driver, respectively.
|
||||||
|
|
||||||
|
@ -143,8 +142,7 @@ The base API is used to do things like create database connections and provide a
|
||||||
|
|
||||||
- `int taos_options(TSDB_OPTION option, const void * arg, ...)`
|
- `int taos_options(TSDB_OPTION option, const void * arg, ...)`
|
||||||
|
|
||||||
Set client options, currently supports region setting (`TSDB_OPTION_LOCALE`), character set
|
Set client options, currently supports region setting (`TSDB_OPTION_LOCALE`), character set (`TSDB_OPTION_CHARSET`), time zone (`TSDB_OPTION_TIMEZONE`), configuration file path (`TSDB_OPTION_CONFIGDIR`). The region setting, character set, and time zone default to the current settings of the operating system.
|
||||||
(`TSDB_OPTION_CHARSET`), time zone (`TSDB_OPTION_TIMEZONE`), configuration file path (`TSDB_OPTION_CONFIGDIR`). The region setting, character set, and time zone default to the current settings of the operating system.
|
|
||||||
|
|
||||||
- `char *taos_get_client_info()`
|
- `char *taos_get_client_info()`
|
||||||
|
|
||||||
|
@ -235,7 +233,7 @@ typedef struct taosField {
|
||||||
|
|
||||||
Get the reason for the failure of the last API call. The return value is an error message identified by a string.
|
Get the reason for the failure of the last API call. The return value is an error message identified by a string.
|
||||||
|
|
||||||
- 'int taos_errno(TAOS_RES *res)`
|
- `int taos_errno(TAOS_RES *res)`
|
||||||
|
|
||||||
Get the reason for the last API call failure. The return value is the error code.
|
Get the reason for the last API call failure. The return value is the error code.
|
||||||
|
|
||||||
|
@ -405,46 +403,3 @@ In addition to writing data using the SQL method or the parameter binding API, w
|
||||||
|
|
||||||
**Supported Versions**
|
**Supported Versions**
|
||||||
This feature interface is supported from version 2.3.0.0.
|
This feature interface is supported from version 2.3.0.0.
|
||||||
|
|
||||||
### Subscription and Consumption API
|
|
||||||
|
|
||||||
The Subscription API currently supports subscribing to one or more tables and continuously fetching the latest data written to them by polling periodically.
|
|
||||||
|
|
||||||
- `TAOS_SUB *taos_subscribe(TAOS* taos, int restart, const char* topic, const char *sql, TAOS_SUBSCRIBE_CALLBACK fp, void *param, int interval)`
|
|
||||||
|
|
||||||
This function is responsible for starting the subscription service, returning the subscription object on success and `NULL` on failure, with the following parameters.
|
|
||||||
|
|
||||||
- taos: the database connection that has been established.
|
|
||||||
- restart: if the subscription already exists, whether to restart or continue the previous subscription.
|
|
||||||
- topic: the topic of the subscription (i.e., the name). This parameter is the unique identifier of the subscription.
|
|
||||||
- sql: the query statement of the subscription which can only be a _select_ statement. Only the original data should be queried, and data can only be queried in temporal order.
|
|
||||||
- fp: the callback function when the query result is received only used when called asynchronously. This parameter should be passed `NULL` when called synchronously. The function prototype is described below.
|
|
||||||
- param: additional parameter when calling the callback function. The system API will pass it to the callback function as is, without any processing.
|
|
||||||
- interval: polling period in milliseconds. The callback function will be called periodically according to this parameter when called asynchronously. The interval should not be too small to avoid impact on system performance when called synchronously. If the interval between two calls to `taos_consume()` is less than this period, the API will block until the interval exceeds this period.
|
|
||||||
|
|
||||||
- ` typedef void (*TAOS_SUBSCRIBE_CALLBACK)(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code)`
|
|
||||||
|
|
||||||
The prototype of the callback function in asynchronous mode with the following parameters
|
|
||||||
|
|
||||||
- tsub: subscription object
|
|
||||||
- res: query result set, note that there may be no records in the result set
|
|
||||||
- param: additional parameters provided by the client program when calling `taos_subscribe()`
|
|
||||||
- code: error code
|
|
||||||
|
|
||||||
:::note
|
|
||||||
The callback function should not take too long to process, especially if the returned result set has a lot of data. Otherwise, it may lead to an abnormal state, such as client blocking. If you must perform complex calculations, we recommend handling them in a separate thread.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
- `TAOS_RES *taos_consume(TAOS_SUB *tsub)`
|
|
||||||
|
|
||||||
In synchronous mode, this function is used to fetch the results of a subscription. The user application places it in a loop. If the interval between two calls to `taos_consume()` is less than the polling period of the subscription, the API will block until the interval exceeds this period. If a new record arrives in the database, the API returns that latest record. Otherwise, it returns an empty result set with no records. If the return value is `NULL`, there is a system error. This API should not be called by user programs in asynchronous mode.
|
|
||||||
|
|
||||||
:::note
|
|
||||||
After calling `taos_consume()`, the user application should make sure to call `taos_fetch_row()` or `taos_fetch_block()` to process the subscription results as soon as possible. Otherwise, the server-side will keep caching the query result data waiting to be read by the client, which in extreme cases will cause the server side to run out of memory and affect the stability of the service.
|
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
- `void taos_unsubscribe(TAOS_SUB *tsub, int keepProgress)`
|
|
||||||
|
|
||||||
Unsubscribe. If the parameter `keepProgress` is not 0, the API will keep the progress information of the subscription, and subsequent calls to `taos_subscribe()` will continue based on this progress; otherwise, the progress information will be deleted, and subsequent readings will have to be restarted.
|
|
|
@ -1,19 +1,18 @@
|
||||||
---
|
---
|
||||||
toc_max_heading_level: 4
|
toc_max_heading_level: 4
|
||||||
sidebar_position: 2
|
|
||||||
sidebar_label: Java
|
sidebar_label: Java
|
||||||
title: TDengine Java Connector
|
title: TDengine Java Connector
|
||||||
description: TDengine Java based on JDBC API and provide both native and REST connections
|
description: The TDengine Java Connector is implemented on the standard JDBC API and provides native and REST connectors.
|
||||||
---
|
---
|
||||||
|
|
||||||
import Tabs from '@theme/Tabs';
|
import Tabs from '@theme/Tabs';
|
||||||
import TabItem from '@theme/TabItem';
|
import TabItem from '@theme/TabItem';
|
||||||
|
|
||||||
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. And the other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The implementation of the REST connection and those of the native connections have slight differences in features.
|
`taos-jdbcdriver` is the official Java connector for TDengine. Java developers can use it to develop applications that access data in TDengine. `taos-jdbcdriver` implements the standard JDBC driver interfaces and provides two connection methods: one is the **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc) and supports data writing, querying, subscriptions, schemaless writing, and the bind interface; the other is the **REST connection**, which is implemented through taosAdapter. The set of features implemented by the REST connection differs slightly from that of the native connection.
|
||||||
|
|
||||||

|

|
||||||
|
|
||||||
The preceding diagram shows two ways for a Java app to access TDengine via connector:
|
The preceding figure shows the two ways in which a Java application can access TDengine.
|
||||||
|
|
||||||
- JDBC native connection: Java applications use TSDBDriver on physical node 1 (pnode1) to call client-driven directly (`libtaos.so` or `taos.dll`) APIs to send writing and query requests to taosd instances located on physical node 2 (pnode2).
|
- JDBC native connection: Java applications use TSDBDriver on physical node 1 (pnode1) to directly call APIs of the client driver (`libtaos.so` or `taos.dll`) to send writing and query requests to taosd instances located on physical node 2 (pnode2).
|
||||||
- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver, sends it to the REST server (taosAdapter) on physical node 2. taosAdapter forwards the request to TDengine server and returns the result.
|
- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver and sends it to the REST server (taosAdapter) on physical node 2. taosAdapter forwards the request to the TDengine server and returns the result.
|
||||||
|
@ -30,34 +29,34 @@ TDengine's JDBC driver implementation is as consistent as possible with the rela
|
||||||
|
|
||||||
## Supported platforms
|
## Supported platforms
|
||||||
|
|
||||||
Native connection supports the same platform as TDengine client-driven support.
|
Native connections are supported on the same platforms as the TDengine client driver.
|
||||||
REST connection supports all platforms that can run Java.
|
REST connection supports all platforms that can run Java.
|
||||||
|
|
||||||
## Version support
|
## Version support
|
||||||
|
|
||||||
Please refer to [Version Support List](/reference/connector#version-support).
|
Please refer to the [version support list](/reference/connector#version-support).
|
||||||
|
|
||||||
## TDengine DataType vs. Java DataType
|
## TDengine DataType vs. Java DataType
|
||||||
|
|
||||||
TDengine currently supports timestamp, number, character, Boolean type, and the corresponding type conversion with Java is as follows:
|
TDengine currently supports timestamp, numeric, character, and Boolean types. The corresponding Java type conversions are as follows:
|
||||||
|
|
||||||
| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version > = 2.0.24) |
|
| TDengine DataType | JDBCType |
|
||||||
| ----------------- | ---------------------------------- | ------------------------------------ |
|
| ----------------- | ---------------------------------- |
|
||||||
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
|
| TIMESTAMP | java.sql.Timestamp |
|
||||||
| INT | java.lang.Integer | java.lang.Integer |
|
| INT | java.lang.Integer |
|
||||||
| BIGINT | java.lang.Long | java.lang.Long |
|
| BIGINT | java.lang.Long |
|
||||||
| FLOAT | java.lang.Float | java.lang.Float |
|
| FLOAT | java.lang.Float |
|
||||||
| DOUBLE | java.lang.Double | java.lang.Double |
|
| DOUBLE | java.lang.Double |
|
||||||
| SMALLINT | java.lang.Short | java.lang.Short |
|
| SMALLINT | java.lang.Short |
|
||||||
| TINYINT | java.lang.Byte | java.lang.Byte |
|
| TINYINT | java.lang.Byte |
|
||||||
| BOOL | java.lang.Boolean | java.lang.Boolean |
|
| BOOL | java.lang.Boolean |
|
||||||
| BINARY | java.lang.String | byte array |
|
| BINARY | byte array |
|
||||||
| NCHAR | java.lang.String | java.lang.String |
|
| NCHAR | java.lang.String |
|
||||||
| JSON | - | java.lang.String |
|
| JSON | java.lang.String |
|
||||||
|
|
||||||
**Note**: Only TAG supports JSON types
|
**Note**: Only TAG supports JSON types
|
||||||
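As a hedged illustration of the mapping above, a minimal sketch of reading these types from a `ResultSet` could look as follows; the table and column names (`meters`, `ts`, `current`, `groupid`, `location`) are placeholders.

```java
// Hypothetical sketch: mapping TDengine column types to the Java types listed above.
// Table and column names are placeholders.
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("select ts, current, groupid, location from meters")) {
    while (rs.next()) {
        Timestamp ts = rs.getTimestamp("ts");        // TIMESTAMP -> java.sql.Timestamp
        float current = rs.getFloat("current");      // FLOAT     -> java.lang.Float
        int groupId = rs.getInt("groupid");          // INT       -> java.lang.Integer
        String location = rs.getString("location");  // NCHAR     -> java.lang.String
    }
}
```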
|
|
||||||
## Installation steps
|
## Installation Steps
|
||||||
|
|
||||||
### Pre-installation preparation
|
### Pre-installation preparation
|
||||||
|
|
||||||
|
@ -71,17 +70,19 @@ Before using Java Connector to connect to the database, the following conditions
|
||||||
<Tabs defaultValue="maven">
|
<Tabs defaultValue="maven">
|
||||||
<TabItem value="maven" label="Install via Maven">
|
<TabItem value="maven" label="Install via Maven">
|
||||||
|
|
||||||
|
taos-jdbcdriver has been published on the [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) and synchronized to other major repositories.
|
||||||
|
|
||||||
- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
|
- [sonatype](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver)
|
||||||
- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
|
- [mvnrepository](https://mvnrepository.com/artifact/com.taosdata.jdbc/taos-jdbcdriver)
|
||||||
- [maven.aliyun](https://maven.aliyun.com/mvn/search)
|
- [maven.aliyun](https://maven.aliyun.com/mvn/search)
|
||||||
|
|
||||||
Add following dependency in the `pom.xml` file of your Maven project:
|
Add following dependency in the `pom.xml` file of your Maven project:
|
||||||
|
|
||||||
```xml
|
```xml-dtd
|
||||||
<dependency>
|
<dependency>
|
||||||
<groupId>com.taosdata.jdbc</groupId>
|
<groupId>com.taosdata.jdbc</groupId>
|
||||||
<artifactId>taos-jdbcdriver</artifactId>
|
<artifactId>taos-jdbcdriver</artifactId>
|
||||||
<version>2.0.**</version>
|
<version>3.0.0</version>
|
||||||
</dependency>
|
</dependency>
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -90,18 +91,18 @@ Add following dependency in the `pom.xml` file of your Maven project:
|
||||||
|
|
||||||
You can build Java connector from source code after cloning the TDengine project:
|
You can build Java connector from source code after cloning the TDengine project:
|
||||||
|
|
||||||
```
|
```shell
|
||||||
git clone https://github.com/taosdata/taos-connector-jdbc.git --branch 2.0
|
git clone https://github.com/taosdata/taos-connector-jdbc.git
|
||||||
cd taos-connector-jdbc
|
cd taos-connector-jdbc
|
||||||
mvn clean install -Dmaven.test.skip=true
|
mvn clean install -Dmaven.test.skip=true
|
||||||
```
|
```
|
||||||
|
|
||||||
After compilation, a jar package named taos-jdbcdriver-2.0.XX-dist.jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.
|
After you have compiled taos-jdbcdriver, the `taos-jdbcdriver-3.0.*-dist.jar` file is created in the target directory. The compiled JAR file is automatically stored in your local Maven repository.
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
## Establish a connection
|
## Establishing a connection
|
||||||
|
|
||||||
TDengine's JDBC URL specification format is:
|
TDengine's JDBC URL specification format is:
|
||||||
`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`
|
`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`
|
||||||
|
@ -109,7 +110,7 @@ TDengine's JDBC URL specification format is:
|
||||||
For establishing connections, native connections differ slightly from REST connections.
|
For establishing connections, native connections differ slightly from REST connections.
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="native">
|
||||||
<TabItem value="native" label="Native connection">
|
<TabItem value="native" label="native connection">
|
||||||
|
|
||||||
```java
|
```java
|
||||||
Class.forName("com.taosdata.jdbc.TSDBDriver");
|
Class.forName("com.taosdata.jdbc.TSDBDriver");
|
||||||
|
@ -129,11 +130,9 @@ The configuration parameters in the URL are as follows:
|
||||||
- charset: The character set used by the client, the default value is the system character set.
|
- charset: The character set used by the client, the default value is the system character set.
|
||||||
- locale: Client locale, by default, use the system's current locale.
|
- locale: Client locale, by default, use the system's current locale.
|
||||||
- timezone: The time zone used by the client, the default value is the system's current time zone.
|
- timezone: The time zone used by the client, the default value is the system's current time zone.
|
||||||
- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is: false. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
|
- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is true. Enabling batch pulling and obtaining a batch of data can improve query performance when the query data volume is large.
|
||||||
- batchErrorIgnore:true: When executing statement executeBatch, if there is a SQL execution failure in the middle, the following SQL will continue to be executed. false: No more statements after the failed SQL are executed. The default value is: false.
|
- batchErrorIgnore: true: When executing statement executeBatch, if there is a SQL execution failure in the middle, the following SQL will continue to be executed. false: No more statements after the failed SQL are executed. The default value is: false.
|
||||||
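As a hedged illustration, a native-connection URL that combines several of these optional parameters might look like the sketch below; the host, database, and credentials are placeholders.

```java
// Hypothetical sketch: a native JDBC URL combining several optional parameters.
// Host, database, user, and password are placeholders.
String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test"
        + "?user=root&password=taosdata"
        + "&charset=UTF-8&locale=en_US.UTF-8&timezone=UTC-8"
        + "&batchfetch=true&batchErrorIgnore=false";
Connection conn = DriverManager.getConnection(jdbcUrl);
```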
|
|
||||||
For more information about JDBC native connections, see [Video Tutorial](https://www.taosdata.com/blog/2020/11/11/1955.html).
|
|
||||||
|
|
||||||
**Connect using the TDengine client-driven configuration file **
|
**Connect using the TDengine client driver configuration file**
|
||||||
|
|
||||||
When you use a JDBC native connection to connect to a TDengine cluster, you can use the TDengine client driver configuration file to specify parameters such as `firstEp` and `secondEp` of the cluster in the configuration file as below:
|
When you use a JDBC native connection to connect to a TDengine cluster, you can use the TDengine client driver configuration file to specify parameters such as `firstEp` and `secondEp` of the cluster in the configuration file as below:
|
||||||
|
@ -173,11 +172,8 @@ In the above example, JDBC uses the client's configuration file to establish a c
|
||||||
|
|
||||||
In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established normally.
|
In TDengine, as long as one node in firstEp and secondEp is valid, the connection to the cluster can be established normally.
|
||||||
|
|
||||||
:::note
|
|
||||||
The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located, the default path is `/etc/taos/taos.cfg` on Linux, and the default path is `C://TDengine/cfg/taos.cfg` on Windows.
|
The configuration file here refers to the configuration file on the machine where the application that calls the JDBC Connector is located, the default path is `/etc/taos/taos.cfg` on Linux, and the default path is `C://TDengine/cfg/taos.cfg` on Windows.
|
||||||
|
|
||||||
:::
|
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
<TabItem value="rest" label="REST connection">
|
<TabItem value="rest" label="REST connection">
|
||||||
|
|
||||||
|
@ -195,11 +191,11 @@ There is no dependency on the client driver when Using a JDBC REST connection. C
|
||||||
2. jdbcUrl starting with "jdbc:TAOS-RS://".
|
2. jdbcUrl starting with "jdbc:TAOS-RS://".
|
||||||
3. use 6041 as the connection port.
|
3. use 6041 as the connection port.
|
||||||
|
|
||||||
The configuration parameters in the URL are as follows.
|
The configuration parameters in the URL are as follows:
|
||||||
|
|
||||||
- user: Login TDengine user name, default value 'root'.
|
- user: TDengine login username. The default value is 'root'.
|
||||||
- password: user login password, default value 'taosdata'.
|
- password: User login password, the default value is 'taosdata'.
|
||||||
- batchfetch: true: pull the result set in batch when executing the query; false: pull the result set row by row. The default value is false. batchfetch uses HTTP for data transfer. The JDBC REST connection supports bulk data pulling function in taos-jdbcdriver-2.0.38 and TDengine 2.4.0.12 and later versions. taos-jdbcdriver and TDengine transfer data via WebSocket connection. Compared with HTTP, WebSocket enables JDBC REST connection to support large data volume querying and improve query performance.
|
- batchfetch: true: pulls result sets in batches when executing queries; false: pulls result sets row by row. The default value is false. When batchfetch is enabled, taos-jdbcdriver and TDengine transfer data via a WebSocket connection instead of HTTP. Compared with HTTP, WebSocket enables the JDBC REST connection to support large result sets and improves query performance.
|
||||||
- charset: specify the charset to parse the string, this parameter is valid only when set batchfetch to true.
|
- charset: specify the charset to parse the string, this parameter is valid only when set batchfetch to true.
|
||||||
- batchErrorIgnore: true: when executing executeBatch of Statement, if one SQL execution fails in the middle, continue to execute the following SQL. false: no longer execute any statement after the failed SQL. The default value is: false.
|
- batchErrorIgnore: true: when executing executeBatch of Statement, if one SQL execution fails in the middle, continue to execute the following SQL. false: no longer execute any statement after the failed SQL. The default value is: false.
|
||||||
- httpConnectTimeout: REST connection timeout in milliseconds, the default value is 5000 ms.
|
- httpConnectTimeout: REST connection timeout in milliseconds, the default value is 5000 ms.
|
||||||
|
@ -211,13 +207,13 @@ The configuration parameters in the URL are as follows.
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
|
|
||||||
- Unlike the native connection method, the REST interface is stateless. When using the JDBC REST connection, you need to specify the database name of the table and super table in SQL. For example.
|
- Unlike the native connection method, the REST interface is stateless. When using the JDBC REST connection, you need to specify the database name of the table and super table in SQL. For example:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
|
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
|
||||||
```
|
```
|
||||||
|
|
||||||
- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections will use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then the SQL can be executed: insert into test using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
|
- If the dbname is specified in the URL, the JDBC REST connection uses /rest/sql/dbname as the default URL for RESTful requests. In this case, it is not necessary to specify the dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then the SQL can be executed: insert into test using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
|
||||||
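As a hedged sketch of this behavior (host and credentials are placeholders), a REST connection whose URL carries the database name can execute the statement without a database prefix:

```java
// Hypothetical sketch: the database name is carried in the REST URL, so the SQL
// below does not need a database prefix. Host and credentials are placeholders.
String url = "jdbc:TAOS-RS://127.0.0.1:6041/test?user=root&password=taosdata";
try (Connection conn = DriverManager.getConnection(url);
     Statement stmt = conn.createStatement()) {
    stmt.executeUpdate("insert into test using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6)");
}
```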
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
|
@ -228,10 +224,10 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFra
|
||||||
|
|
||||||
In addition to getting the connection from the specified URL, you can use Properties to specify parameters when the connection is established.
|
In addition to getting the connection from the specified URL, you can use Properties to specify parameters when the connection is established.
|
||||||
|
|
||||||
**Note**:
|
Note:
|
||||||
|
|
||||||
- The client parameter set in the application is process-level. If you want to update the parameters of the client, you need to restart the application. This is because the client parameter is a global parameter that takes effect only the first time the application is set.
|
- The client parameter set in the application is process-level. If you want to update the parameters of the client, you need to restart the application. This is because the client parameter is a global parameter that takes effect only the first time the application is set.
|
||||||
- The following sample code is based on taos-jdbcdriver-2.0.36.
|
- The following sample code is based on taos-jdbcdriver-3.0.0.
|
||||||
|
|
||||||
```java
|
```java
|
||||||
public Connection getConn() throws Exception{
|
public Connection getConn() throws Exception{
|
||||||
|
@ -257,14 +253,14 @@ public Connection getRestConn() throws Exception{
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
In the above example, a connection is established to `taosdemo.com`, port is 6030/6041, and database named `test`. The connection specifies the user name as `root` and the password as `taosdata` in the URL and specifies the character set, language environment, time zone, and whether to enable bulk fetching in the connProps.
|
In the above example, a connection is established to `taosdemo.com` on port 6030/6041, using the database named `test`. The connection specifies the user name as `root` and the password as `taosdata` in the URL, and specifies the character set, locale, time zone, and whether to enable bulk fetching in connProps.
|
||||||
|
|
||||||
The configuration parameters in properties are as follows.
|
The configuration parameters in properties are as follows.
|
||||||
|
|
||||||
- TSDBDriver.PROPERTY_KEY_USER: login TDengine user name, default value 'root'.
|
- TSDBDriver.PROPERTY_KEY_USER: login TDengine user name, default value 'root'.
|
||||||
- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'taosdata'.
|
- TSDBDriver.PROPERTY_KEY_PASSWORD: user login password, default value 'taosdata'.
|
||||||
- TSDBDriver.PROPERTY_KEY_BATCH_LOAD: true: pull the result set in batch when executing query; false: pull the result set row by row. The default value is: false.
|
- TSDBDriver.PROPERTY_KEY_BATCH_LOAD: true: pull the result set in batch when executing query; false: pull the result set row by row. The default value is: false.
|
||||||
- TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true: when executing executeBatch of Statement, if there is a SQL execution failure in the middle, continue to execute the following sq. false: no longer execute any statement after the failed SQL. The default value is: false.
|
- TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true: when executing executeBatch of Statement, if there is a SQL execution failure in the middle, continue to execute the following sql. false: no longer execute any statement after the failed SQL. The default value is: false.
|
||||||
- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: only works when using JDBC native connection. Client configuration file directory path, default value `/etc/taos` on Linux OS, default value `C:/TDengine/cfg` on Windows OS.
|
- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: only works when using JDBC native connection. Client configuration file directory path, default value `/etc/taos` on Linux OS, default value `C:/TDengine/cfg` on Windows OS.
|
||||||
- TSDBDriver.PROPERTY_KEY_CHARSET: In the character set used by the client, the default value is the system character set.
|
- TSDBDriver.PROPERTY_KEY_CHARSET: In the character set used by the client, the default value is the system character set.
|
||||||
- TSDBDriver.PROPERTY_KEY_LOCALE: this only takes effect when using JDBC native connection. Client language environment, the default value is system current locale.
|
- TSDBDriver.PROPERTY_KEY_LOCALE: this only takes effect when using JDBC native connection. Client language environment, the default value is system current locale.
|
||||||
|
@ -272,7 +268,7 @@ The configuration parameters in properties are as follows.
|
||||||
- TSDBDriver.HTTP_CONNECT_TIMEOUT: REST connection timeout in milliseconds, the default value is 5000 ms. It only takes effect when using JDBC REST connection.
|
- TSDBDriver.HTTP_CONNECT_TIMEOUT: REST connection timeout in milliseconds, the default value is 5000 ms. It only takes effect when using JDBC REST connection.
|
||||||
- TSDBDriver.HTTP_SOCKET_TIMEOUT: socket timeout in milliseconds, the default value is 5000 ms. It only takes effect when using JDBC REST connection and batchfetch is false.
|
- TSDBDriver.HTTP_SOCKET_TIMEOUT: socket timeout in milliseconds, the default value is 5000 ms. It only takes effect when using JDBC REST connection and batchfetch is false.
|
||||||
- TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT: message transmission timeout in milliseconds, the default value is 3000 ms. It only takes effect when using JDBC REST connection and batchfetch is true.
|
- TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT: message transmission timeout in milliseconds, the default value is 3000 ms. It only takes effect when using JDBC REST connection and batchfetch is true.
|
||||||
- TSDBDriver.PROPERTY_KEY_USE_SSL: connecting Securely Using SSL. true: using SSL conneciton, false: not using SSL connection. It only takes effect when using using JDBC REST connection.
|
- TSDBDriver.PROPERTY_KEY_USE_SSL: connect securely using SSL. true: use an SSL connection; false: do not use an SSL connection. It only takes effect when using JDBC REST connection.
|
||||||
For JDBC native connections, you can specify other parameters, such as log level, SQL length, etc., by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](/reference/config/#Client-Only).
|
For JDBC native connections, you can specify other parameters, such as log level, SQL length, etc., by specifying URL and Properties. For more detailed configuration, please refer to [Client Configuration](/reference/config/#Client-Only).
|
||||||
|
|
||||||
### Priority of configuration parameters
|
### Priority of configuration parameters
|
||||||
|
@ -304,7 +300,7 @@ stmt.executeUpdate("create table if not exists tb (ts timestamp, temperature int
|
||||||
|
|
||||||
> **Note**: If you do not use `use db` to specify the database, all subsequent operations on the table need to add the database name as a prefix, such as db.tb.
|
> **Note**: If you do not use `use db` to specify the database, all subsequent operations on the table need to add the database name as a prefix, such as db.tb.
|
||||||
|
|
||||||
### Insert data
|
### Insert data
|
||||||
|
|
||||||
```java
|
```java
|
||||||
// insert data
|
// insert data
|
||||||
|
@ -319,7 +315,7 @@ System.out.println("insert " + affectedRows + " rows.");
|
||||||
### Querying data
|
### Querying data
|
||||||
|
|
||||||
```java
|
```java
|
||||||
// query data
|
// query data
|
||||||
ResultSet resultSet = stmt.executeQuery("select * from tb");
|
ResultSet resultSet = stmt.executeQuery("select * from tb");
|
||||||
|
|
||||||
Timestamp ts = null;
|
Timestamp ts = null;
|
||||||
|
@ -354,25 +350,21 @@ try (Statement statement = connection.createStatement()) {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
There are three types of error codes that the JDBC connector can report:
|
There are three types of error codes that the JDBC connector can report: error codes of the JDBC driver itself (between 0x2301 and 0x2350), error codes of the native connection method (between 0x2351 and 0x2400), and error codes of other TDengine function modules.
|
||||||
|
|
||||||
- Error code of the JDBC driver itself (error code between 0x2301 and 0x2350)
|
|
||||||
- Error code of the native connection method (error code between 0x2351 and 0x2400)
|
|
||||||
- Error code of other TDengine function modules
|
|
||||||
|
|
||||||
For specific error codes, please refer to.
|
For specific error codes, please refer to:
|
||||||
|
|
||||||
- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
|
- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
|
||||||
- [TDengine_ERROR_CODE](https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h)
|
<!-- - [TDengine_ERROR_CODE](../error-code) -->
|
||||||
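A minimal, hedged sketch of inspecting the error code when a statement fails (the SQL and table name are placeholders):

```java
// Hypothetical sketch: reading the vendor error code of a failed statement.
// The SQL and table name are placeholders.
try (Statement stmt = connection.createStatement()) {
    stmt.executeUpdate("insert into db.tb values(now, 23)");
} catch (SQLException e) {
    // getErrorCode() returns one of the error codes described above
    System.out.printf("error code: 0x%x, message: %s%n", e.getErrorCode(), e.getMessage());
}
```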
|
|
||||||
### Writing data via parameter binding
|
### Writing data via parameter binding
|
||||||
|
|
||||||
TDengine's native JDBC connection implementation has significantly improved its support for data writing (INSERT) scenarios via bind interface with version 2.1.2.0 and later versions. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
|
TDengine has significantly improved the bind APIs to support data writing (INSERT) scenarios. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
|
||||||
|
|
||||||
**Note**.
|
**Note:**
|
||||||
|
|
||||||
- JDBC REST connections do not currently support bind interface
|
- JDBC REST connections do not currently support bind interface
|
||||||
- The following sample code is based on taos-jdbcdriver-2.0.36
|
- The following sample code is based on taos-jdbcdriver-3.0.0
|
||||||
- The setString method should be called for binary type data, and the setNString method should be called for nchar type data
|
- The setString method should be called for binary type data, and the setNString method should be called for nchar type data
|
||||||
- both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
|
- both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
|
||||||
|
|
||||||
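As a hedged sketch of the last two notes (assuming `pstmt` is the parameter-binding statement from the demo below, with one BINARY(64) and one NCHAR(64) column; names and widths are placeholders), the width from the table definition is passed as the size argument:

```java
// Hypothetical sketch: setString for a BINARY column, setNString for an NCHAR
// column; the size argument repeats the width declared in the table definition.
ArrayList<String> binaryCol = new ArrayList<>();
binaryCol.add("abc");
pstmt.setString(1, binaryCol, 64);    // BINARY(64) column -> setString

ArrayList<String> ncharCol = new ArrayList<>();
ncharCol.add("California.SanFrancisco");
pstmt.setNString(2, ncharCol, 64);    // NCHAR(64) column -> setNString
```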
|
@ -577,7 +569,7 @@ public class ParameterBindingDemo {
|
||||||
// set table name
|
// set table name
|
||||||
pstmt.setTableName("t5_" + i);
|
pstmt.setTableName("t5_" + i);
|
||||||
// set tags
|
// set tags
|
||||||
pstmt.setTagNString(0, "California-abc");
|
pstmt.setTagNString(0, "California.SanFrancisco");
|
||||||
|
|
||||||
// set columns
|
// set columns
|
||||||
ArrayList<Long> tsList = new ArrayList<>();
|
ArrayList<Long> tsList = new ArrayList<>();
|
||||||
|
@ -588,7 +580,7 @@ public class ParameterBindingDemo {
|
||||||
|
|
||||||
ArrayList<String> f1List = new ArrayList<>();
|
ArrayList<String> f1List = new ArrayList<>();
|
||||||
for (int j = 0; j < numOfRow; j++) {
|
for (int j = 0; j < numOfRow; j++) {
|
||||||
f1List.add("California-abc");
|
f1List.add("California.LosAngeles");
|
||||||
}
|
}
|
||||||
pstmt.setNString(1, f1List, BINARY_COLUMN_SIZE);
|
pstmt.setNString(1, f1List, BINARY_COLUMN_SIZE);
|
||||||
|
|
||||||
|
@ -635,12 +627,12 @@ public void setNString(int columnIndex, ArrayList<String> list, int size) throws
|
||||||
|
|
||||||
### Schemaless Writing
|
### Schemaless Writing
|
||||||
|
|
||||||
Starting with version 2.2.0.0, TDengine has added the ability to perform schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
|
TDengine supports schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. For more information, see [Schemaless Writing](../../schemaless).
|
||||||
|
|
||||||
**Note**.
|
Note:
|
||||||
|
|
||||||
- JDBC REST connections do not currently support schemaless writes
|
- JDBC REST connections do not currently support schemaless writes
|
||||||
- The following sample code is based on taos-jdbcdriver-2.0.36
|
- The following sample code is based on taos-jdbcdriver-3.0.0
|
||||||
|
|
||||||
```java
|
```java
|
||||||
public class SchemalessInsertTest {
|
public class SchemalessInsertTest {
|
||||||
|
@ -671,59 +663,137 @@ public class SchemalessInsertTest {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
### Subscriptions
|
### Data Subscription
|
||||||
|
|
||||||
The TDengine Java Connector supports subscription functionality with the following application API.
|
The TDengine Java Connector supports subscription functionality with the following application API.
|
||||||
|
|
||||||
#### Create subscriptions
|
#### Create a Topic
|
||||||
|
|
||||||
```java
|
```java
|
||||||
TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topicname", "select * from meters", false);
|
Connection connection = DriverManager.getConnection(url, properties);
|
||||||
|
Statement statement = connection.createStatement();
|
||||||
|
statement.executeUpdate("create topic if not exists topic_speed as select ts, speed from speed_table");
|
||||||
```
|
```
|
||||||
|
|
||||||
The three parameters of the `subscribe()` method have the following meanings.
|
The parameters in the preceding statement have the following meanings:
|
||||||
|
|
||||||
- topicname: the name of the subscribed topic. This parameter is the unique identifier of the subscription.
|
- topic_speed: the subscribed topic (name). This is the unique identifier of the subscription.
|
||||||
- sql: the query statement of the subscription. This statement can only be a `select` statement. Only original data can be queried, and you can query the data only temporal order.
|
- sql: the query statement of the subscription, which can only be a _select_ statement. Only the original data should be queried, and data can only be queried in temporal order.
|
||||||
- restart: if the subscription already exists, whether to restart or continue the previous subscription
|
|
||||||
|
|
||||||
The above example will use the SQL command `select * from meters` to create a subscription named `topicname`. If the subscription exists, it will continue the progress of the previous query instead of consuming all the data from the beginning.
|
The preceding example uses the SQL statement `select ts, speed from speed_table` to create a topic named `topic_speed`.
|
||||||
|
|
||||||
|
#### Create a Consumer
|
||||||
|
|
||||||
|
```java
|
||||||
|
Properties config = new Properties();
|
||||||
|
config.setProperty("enable.auto.commit", "true");
|
||||||
|
config.setProperty("group.id", "group1");
|
||||||
|
config.setProperty("value.deserializer", "com.taosdata.jdbc.tmq.ConsumerTest.ResultDeserializer");
|
||||||
|
|
||||||
|
TaosConsumer consumer = new TaosConsumer<>(config);
|
||||||
|
```
|
||||||
|
|
||||||
|
- enable.auto.commit: Specifies whether to commit automatically.
|
||||||
|
- group.id: Specifies the consumer group that the consumer belongs to.
|
||||||
|
- value.deserializer: To deserialize the results, you can inherit `com.taosdata.jdbc.tmq.ReferenceDeserializer` and specify the result set bean. You can also inherit `com.taosdata.jdbc.tmq.Deserializer` and perform custom deserialization based on the SQL result set.
|
||||||
|
- For more information, see [Consumer Parameters](../../../develop/tmq).
|
||||||
|
|
||||||
#### Subscribe to consume data
|
#### Subscribe to consume data
|
||||||
|
|
||||||
```java
|
```java
|
||||||
int total = 0;
|
|
||||||
while(true) {
|
while(true) {
|
||||||
TSDBResultSet rs = sub.consume();
|
ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
|
||||||
int count = 0;
|
for (ResultBean record : records) {
|
||||||
while(rs.next()) {
|
process(record);
|
||||||
count++;
|
|
||||||
}
|
}
|
||||||
total += count;
|
|
||||||
System.out.printf("%d rows consumed, total %d\n", count, total);
|
|
||||||
Thread.sleep(1000);
|
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
The `consume()` method returns a result set containing all new data from the last `consume()`. Be sure to choose a reasonable frequency for calling `consume()` as needed (e.g. `Thread.sleep(1000)` in the example). Otherwise, it will cause unnecessary stress on the server-side.
|
`poll` obtains one message each time it is run.
|
||||||
|
|
||||||
#### Close subscriptions
|
#### Close subscriptions
|
||||||
|
|
||||||
```java
|
```java
|
||||||
sub.close(true);
|
// Unsubscribe
|
||||||
|
consumer.unsubscribe();
|
||||||
|
// Close consumer
|
||||||
|
consumer.close();
|
||||||
```
|
```
|
||||||
|
|
||||||
The `close()` method closes a subscription. If its argument is `true` it means that the subscription progress information is retained, and the subscription with the same name can be created to continue consuming data; if it is `false` it does not retain the subscription progress.
|
For more information, see [Data Subscription](../../../develop/tmq).
|
||||||
|
|
||||||
### Closing resources
|
### Usage examples
|
||||||
|
|
||||||
```java
|
```java
|
||||||
resultSet.close();
|
public abstract class ConsumerLoop {
|
||||||
stmt.close();
|
private final TaosConsumer<ResultBean> consumer;
|
||||||
conn.close();
|
private final List<String> topics;
|
||||||
```
|
private final AtomicBoolean shutdown;
|
||||||
|
private final CountDownLatch shutdownLatch;
|
||||||
|
|
||||||
> **Be sure to close the connection**, otherwise, there will be a connection leak.
|
public ConsumerLoop() throws SQLException {
|
||||||
|
Properties config = new Properties();
|
||||||
|
config.setProperty("msg.with.table.name", "true");
|
||||||
|
config.setProperty("enable.auto.commit", "true");
|
||||||
|
config.setProperty("group.id", "group1");
|
||||||
|
config.setProperty("value.deserializer", "com.taosdata.jdbc.tmq.ConsumerTest.ConsumerLoop$ResultDeserializer");
|
||||||
|
|
||||||
|
this.consumer = new TaosConsumer<>(config);
|
||||||
|
this.topics = Collections.singletonList("topic_speed");
|
||||||
|
this.shutdown = new AtomicBoolean(false);
|
||||||
|
this.shutdownLatch = new CountDownLatch(1);
|
||||||
|
}
|
||||||
|
|
||||||
|
public abstract void process(ResultBean result);
|
||||||
|
|
||||||
|
public void pollData() throws SQLException {
|
||||||
|
try {
|
||||||
|
consumer.subscribe(topics);
|
||||||
|
|
||||||
|
while (!shutdown.get()) {
|
||||||
|
ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
|
||||||
|
for (ResultBean record : records) {
|
||||||
|
process(record);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
consumer.unsubscribe();
|
||||||
|
} finally {
|
||||||
|
consumer.close();
|
||||||
|
shutdownLatch.countDown();
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
public void shutdown() throws InterruptedException {
|
||||||
|
shutdown.set(true);
|
||||||
|
shutdownLatch.await();
|
||||||
|
}
|
||||||
|
|
||||||
|
public static class ResultDeserializer extends ReferenceDeserializer<ResultBean> {
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
|
public static class ResultBean {
|
||||||
|
private Timestamp ts;
|
||||||
|
private int speed;
|
||||||
|
|
||||||
|
public Timestamp getTs() {
|
||||||
|
return ts;
|
||||||
|
}
|
||||||
|
|
||||||
|
public void setTs(Timestamp ts) {
|
||||||
|
this.ts = ts;
|
||||||
|
}
|
||||||
|
|
||||||
|
public int getSpeed() {
|
||||||
|
return speed;
|
||||||
|
}
|
||||||
|
|
||||||
|
public void setSpeed(int speed) {
|
||||||
|
this.speed = speed;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
### Use with connection pool
|
### Use with connection pool
|
||||||
|
|
||||||
|
@ -754,7 +824,7 @@ Example usage is as follows.
|
||||||
//query or insert
|
//query or insert
|
||||||
// ...
|
// ...
|
||||||
|
|
||||||
connection.close(); // put back to connection pool
|
connection.close(); // put back to connection pool
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -786,26 +856,12 @@ public static void main(String[] args) throws Exception {
|
||||||
//query or insert
|
//query or insert
|
||||||
// ...
|
// ...
|
||||||
|
|
||||||
connection.close(); // put back to connection pool
|
connection.close(); // put back to connection pool
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
> For more questions about using druid, please see [Official Instructions](https://github.com/alibaba/druid).
|
> For more questions about using druid, please see [Official Instructions](https://github.com/alibaba/druid).
|
||||||
|
|
||||||
**Caution:**
|
|
||||||
|
|
||||||
- TDengine `v1.6.4.1` provides a special function `select server_status()` for heartbeat detection, so it is recommended to use `select server_status()` for Validation Query when using connection pooling.
|
|
||||||
|
|
||||||
As you can see below, `select server_status()` returns `1` on successful execution.
|
|
||||||
|
|
||||||
```sql
|
|
||||||
taos> select server_status();
|
|
||||||
server_status()|
|
|
||||||
================
|
|
||||||
1 |
|
|
||||||
Query OK, 1 row(s) in set (0.000141s)
|
|
||||||
```
|
|
||||||
|
|
||||||
### More sample programs
|
### More sample programs
|
||||||
|
|
||||||
The source code of the sample application is under `TDengine/examples/JDBC`:
|
The source code of the sample application is under `TDengine/examples/JDBC`:
|
||||||
|
@ -816,12 +872,13 @@ The source code of the sample application is under `TDengine/examples/JDBC`:
|
||||||
- SpringJdbcTemplate: using taos-jdbcdriver in Spring JdbcTemplate.
|
- SpringJdbcTemplate: using taos-jdbcdriver in Spring JdbcTemplate.
|
||||||
- mybatisplus-demo: using taos-jdbcdriver in Springboot + Mybatis.
|
- mybatisplus-demo: using taos-jdbcdriver in Springboot + Mybatis.
|
||||||
|
|
||||||
Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develop/examples/JDBC)
|
[JDBC example](https://github.com/taosdata/TDengine/tree/3.0/examples/JDBC)
|
||||||
|
|
||||||
## Recent update logs
|
## Recent update logs
|
||||||
|
|
||||||
| taos-jdbcdriver version | major changes |
|
| taos-jdbcdriver version | major changes |
|
||||||
| :---------------------: | :--------------------------------------------: |
|
| :---------------------: | :--------------------------------------------: |
|
||||||
|
| 3.0.0 | Support for TDengine 3.0 |
|
||||||
| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
|
| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
|
||||||
| 2.0.38 | JDBC REST connections add bulk pull function |
|
| 2.0.38 | JDBC REST connections add bulk pull function |
|
||||||
| 2.0.37 | Support json tags |
|
| 2.0.37 | Support json tags |
|
||||||
|
@ -841,13 +898,19 @@ Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develo
|
||||||
|
|
||||||
**Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory, on Linux the following soft link will be created `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` will work.
|
**Solution**: On Windows you can copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32` directory, on Linux the following soft link will be created `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so` will work.
|
||||||
|
|
||||||
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on an IA 32-bit platform
|
3. java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform
|
||||||
|
|
||||||
**Cause**: Currently, TDengine only supports 64-bit JDK.
|
**Cause**: Currently, TDengine only supports 64-bit JDK.
|
||||||
|
|
||||||
**Solution**: Reinstall the 64-bit JDK. 4.
|
**Solution**: Reinstall the 64-bit JDK.
|
||||||
|
|
||||||
For other questions, please refer to [FAQ](/train-faq/faq)
|
4. java.lang.NoSuchMethodError: setByteArray
|
||||||
|
|
||||||
|
**Cause**: taos-jdbcdriver 3.* only supports TDengine 3.0 and later.
|
||||||
|
|
||||||
|
**Solution**: Use taos-jdbcdriver 2.* with your TDengine 2.* deployment.
|
||||||
|
|
||||||
|
For additional troubleshooting, see [FAQ](../../../train-faq/faq).
|
||||||
|
|
||||||
## API Reference
|
## API Reference
|
||||||
|
|
|
@ -1,6 +1,5 @@
|
||||||
---
|
---
|
||||||
toc_max_heading_level: 4
|
toc_max_heading_level: 4
|
||||||
sidebar_position: 4
|
|
||||||
sidebar_label: Go
|
sidebar_label: Go
|
||||||
title: TDengine Go Connector
|
title: TDengine Go Connector
|
||||||
---
|
---
|
||||||
|
@ -8,7 +7,7 @@ title: TDengine Go Connector
|
||||||
import Tabs from '@theme/Tabs';
|
import Tabs from '@theme/Tabs';
|
||||||
import TabItem from '@theme/TabItem';
|
import TabItem from '@theme/TabItem';
|
||||||
|
|
||||||
import Preparation from "./_preparation.mdx"
|
import Preparition from "./_preparition.mdx"
|
||||||
import GoInsert from "../../07-develop/03-insert-data/_go_sql.mdx"
|
import GoInsert from "../../07-develop/03-insert-data/_go_sql.mdx"
|
||||||
import GoInfluxLine from "../../07-develop/03-insert-data/_go_line.mdx"
|
import GoInfluxLine from "../../07-develop/03-insert-data/_go_line.mdx"
|
||||||
import GoOpenTSDBTelnet from "../../07-develop/03-insert-data/_go_opts_telnet.mdx"
|
import GoOpenTSDBTelnet from "../../07-develop/03-insert-data/_go_opts_telnet.mdx"
|
||||||
|
@ -23,7 +22,7 @@ This article describes how to install `driver-go` and connect to TDengine cluste
|
||||||
|
|
||||||
The source code of `driver-go` is hosted on [GitHub](https://github.com/taosdata/driver-go).
|
The source code of `driver-go` is hosted on [GitHub](https://github.com/taosdata/driver-go).
|
||||||
|
|
||||||
## Supported Platforms
|
## Supported platforms
|
||||||
|
|
||||||
Native connections are supported on the same platforms as the TDengine client driver.
|
Native connections are supported on the same platforms as the TDengine client driver.
|
||||||
REST connections are supported on all platforms that can run Go.
|
REST connections are supported on all platforms that can run Go.
|
||||||
|
@ -41,33 +40,31 @@ A "native connection" is established by the connector directly to the TDengine i
|
||||||
* Normal queries
|
* Normal queries
|
||||||
* Continuous queries
|
* Continuous queries
|
||||||
* Subscriptions
|
* Subscriptions
|
||||||
* schemaless interface
|
* Schemaless interface
|
||||||
* parameter binding interface
|
* Parameter binding interface
|
||||||
|
|
||||||
### REST connection
|
### REST connection
|
||||||
|
|
||||||
A "REST connection" is a connection between the application and the TDengine instance via the REST API provided by the taosAdapter component. The following features are supported:
|
A "REST connection" is a connection between the application and the TDengine instance via the REST API provided by the taosAdapter component. The following features are supported:
|
||||||
|
|
||||||
* General queries
|
* Normal queries
|
||||||
* Continuous queries
|
* Continuous queries
|
||||||
|
|
||||||
## Installation steps
|
## Installation Steps
|
||||||
|
|
||||||
### Pre-installation
|
### Pre-installation preparation
|
||||||
|
|
||||||
- Install Go development environment (Go 1.14 and above, GCC 4.8.5 and above)
|
* Install Go development environment (Go 1.14 and above, GCC 4.8.5 and above)
|
||||||
- If you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector/#install-client-driver) for specific steps
|
* If you use the native connector, please install the TDengine client driver. Please refer to [Install Client Driver](/reference/connector/#install-client-driver) for specific steps
|
||||||
|
|
||||||
Configure the environment variables and check the command.
|
Configure the environment variables and check the command.
|
||||||
|
|
||||||
* `go env`
|
* `go env`
|
||||||
* `gcc -v`
|
* `gcc -v`
|
||||||
|
|
||||||
### Use go get to install
|
### Use go get to install
|
||||||
|
|
||||||
```
|
`go get -u github.com/taosdata/driver-go/v3@latest`
|
||||||
go get -u github.com/taosdata/driver-go/v2@develop
|
|
||||||
```
|
|
||||||
|
|
||||||
### Manage with go mod
|
### Manage with go mod
|
||||||
|
|
||||||
|
@ -82,7 +79,7 @@ go get -u github.com/taosdata/driver-go/v2@develop
|
||||||
```go
|
```go
|
||||||
import (
|
import (
|
||||||
"database/sql"
|
"database/sql"
|
||||||
_ "github.com/taosdata/driver-go/v2/taosSql"
|
_ "github.com/taosdata/driver-go/v3/taosSql"
|
||||||
)
|
)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
@ -99,7 +96,7 @@ go get -u github.com/taosdata/driver-go/v2@develop
|
||||||
go build
|
go build
|
||||||
```
|
```
|
||||||
|
|
||||||
## Create a connection
|
## Establishing a connection
|
||||||
|
|
||||||
### Data source name (DSN)
|
### Data source name (DSN)
|
||||||
|
|
||||||
|
@ -114,7 +111,6 @@ DSN in full form.
|
||||||
```text
|
```text
|
||||||
username:password@protocol(address)/dbname?param=value
|
username:password@protocol(address)/dbname?param=value
|
||||||
```
|
```
|
||||||
|
|
||||||
### Connecting via connector
|
### Connecting via connector
|
||||||
|
|
||||||
<Tabs defaultValue="native">
|
<Tabs defaultValue="native">
|
||||||
|
@ -126,7 +122,7 @@ Use `taosSql` as `driverName` and use a correct [DSN](#DSN) as `dataSourceName`,
|
||||||
|
|
||||||
* configPath specifies the `taos.cfg` directory
|
* configPath specifies the `taos.cfg` directory
|
||||||
|
|
||||||
Example.
|
For example:
|
||||||
|
|
||||||
```go
|
```go
|
||||||
package main
|
package main
|
||||||
|
@ -135,7 +131,7 @@ import (
|
||||||
"database/sql"
|
"database/sql"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
|
||||||
_ "github.com/taosdata/driver-go/v2/taosSql"
|
_ "github.com/taosdata/driver-go/v3/taosSql"
|
||||||
)
|
)
|
||||||
|
|
||||||
func main() {
|
func main() {
|
||||||
|
@ -158,7 +154,7 @@ Use `taosRestful` as `driverName` and use a correct [DSN](#DSN) as `dataSourceNa
|
||||||
* `disableCompression` whether to accept compressed data, default is true do not accept compressed data, set to false if transferring data using gzip compression.
|
* `disableCompression` whether to accept compressed data, default is true do not accept compressed data, set to false if transferring data using gzip compression.
|
||||||
* `readBufferSize` The default size of the buffer for reading data is 4K (4096), which can be adjusted upwards when the query result has a lot of data.
|
* `readBufferSize` The default size of the buffer for reading data is 4K (4096), which can be adjusted upwards when the query result has a lot of data.
|
||||||
|
|
||||||
Example.
|
For example:
|
||||||
|
|
||||||
```go
|
```go
|
||||||
package main
|
package main
|
||||||
|
@ -167,7 +163,7 @@ import (
|
||||||
"database/sql"
|
"database/sql"
|
||||||
"fmt"
|
"fmt"
|
||||||
|
|
||||||
_ "github.com/taosdata/driver-go/v2/taosRestful"
|
_ "github.com/taosdata/driver-go/v3/taosRestful"
|
||||||
)
|
)
|
||||||
|
|
||||||
func main() {
|
func main() {
|
||||||
|
@ -208,14 +204,14 @@ func main() {
|
||||||
|
|
||||||
### More sample programs
|
### More sample programs
|
||||||
|
|
||||||
* [sample program](https://github.com/taosdata/TDengine/tree/develop/examples/go)
|
* [sample program](https://github.com/taosdata/driver-go/tree/3.0/examples)
|
||||||
* [Video tutorial](https://www.taosdata.com/blog/2020/11/11/1951.html).
|
|
||||||
|
|
||||||
## Usage limitations
|
## Usage limitations
|
||||||
|
|
||||||
Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL command, e.g. `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)` otherwise it will report the error `[0x217] Database not specified or available`.
|
Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL command, e.g. `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)` otherwise it will report the error `[0x217] Database not specified or available`.
|
||||||
|
|
||||||
You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. This method is supported by taosAdapter since TDengine 2.4.0.5. Executing the `create database` statement when the specified db does not exist will not report an error while executing other queries or writing against that db will report an error.
|
You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. Executing the `create database` statement when the specified db does not exist will not report an error while executing other queries or writing against that db will report an error.
|
||||||
|
|
||||||
The complete example is as follows.
|
The complete example is as follows.
|
||||||
|
|
||||||
|
@ -227,7 +223,7 @@ import (
|
||||||
"fmt"
|
"fmt"
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
_ "github.com/taosdata/driver-go/v2/taosRestful"
|
_ "github.com/taosdata/driver-go/v3/taosRestful"
|
||||||
)
|
)
|
||||||
|
|
||||||
func main() {
|
func main() {
|
||||||
|
@ -269,35 +265,27 @@ func main() {
|
||||||
|
|
||||||
## Frequently Asked Questions
|
## Frequently Asked Questions
|
||||||
|
|
||||||
1. Cannot find the package `github.com/taosdata/driver-go/v2/taosRestful`
|
1. bind interface in database/sql crashes
|
||||||
|
|
||||||
Change the `github.com/taosdata/driver-go/v2` line in the require block of the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
|
|
||||||
|
|
||||||
2. bind interface in database/sql crashes
|
|
||||||
|
|
||||||
REST does not support parameter binding related interface. It is recommended to use `db.Exec` and `db.Query`.
|
REST does not support parameter binding related interface. It is recommended to use `db.Exec` and `db.Query`.
|
||||||
|
|
||||||
3. error `[0x217] Database not specified or available` after executing other statements with `use db` statement
|
2. error `[0x217] Database not specified or available` after executing other statements with `use db` statement
|
||||||
|
|
||||||
The execution of SQL command in the REST interface is not contextual, so using `use db` statement will not work, see the usage restrictions section above.
|
The execution of SQL command in the REST interface is not contextual, so using `use db` statement will not work, see the usage restrictions section above.
|
||||||
|
|
||||||
4. use `taosSql` without error but use `taosRestful` with error `[0x217] Database not specified or available`
|
3. use `taosSql` without error but use `taosRestful` with error `[0x217] Database not specified or available`
|
||||||
|
|
||||||
Because the REST interface is stateless, using the `use db` statement will not take effect. See the usage restrictions section above.
|
Because the REST interface is stateless, using the `use db` statement will not take effect. See the usage restrictions section above.
|
||||||
|
|
||||||
5. Upgrade `github.com/taosdata/driver-go/v2/taosRestful`
|
4. `readBufferSize` parameter has no significant effect after being increased
|
||||||
|
|
||||||
Change the `github.com/taosdata/driver-go/v2` line in the `go.mod` file to `github.com/taosdata/driver-go/v2 develop`, then execute `go mod tidy`.
|
|
||||||
|
|
||||||
6. `readBufferSize` parameter has no significant effect after being increased
|
|
||||||
|
|
||||||
Increasing `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is smaller, modifying this parameter will not improve performance significantly. If you increase the parameter value too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value based on the actual situation to achieve the best query performance.
|
Increasing `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is smaller, modifying this parameter will not improve performance significantly. If you increase the parameter value too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value based on the actual situation to achieve the best query performance.
|
||||||
|
|
||||||
7. `disableCompression` parameter is set to `false` when the query efficiency is reduced
|
5. `disableCompression` parameter is set to `false` when the query efficiency is reduced
|
||||||
|
|
||||||
When set `disableCompression` parameter to `false`, the query result will be compressed by `gzip` and then transmitted, so you have to decompress the data by `gzip` after getting it.
|
When set `disableCompression` parameter to `false`, the query result will be compressed by `gzip` and then transmitted, so you have to decompress the data by `gzip` after getting it.
|
||||||
|
|
||||||
8. `go get` command can't get the package, or timeout to get the package
|
6. `go get` command can't get the package, or timeout to get the package
|
||||||
|
|
||||||
Set Go proxy `go env -w GOPROXY=https://goproxy.cn,direct`.
|
Set Go proxy `go env -w GOPROXY=https://goproxy.cn,direct`.
|
||||||
|
|
||||||
|
@ -311,14 +299,13 @@ func main() {
|
||||||
|
|
||||||
:::info
|
:::info
|
||||||
This API call succeeds without checking permissions; user/password/host/port are only validated when you execute a Query or Exec.
|
This API call succeeds without checking permissions; user/password/host/port are only validated when you execute a Query or Exec.
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
* `func (db *DB) Exec(query string, args . .interface{}) (Result, error)`
|
* `func (db *DB) Exec(query string, args ...interface{}) (Result, error)`
|
||||||
|
|
||||||
A built-in method of the `sql.DB` object returned by `sql.Open`; executes non-query SQL statements.
|
A built-in method of the `sql.DB` object returned by `sql.Open`; executes non-query SQL statements.
|
||||||
|
|
||||||
* `func (db *DB) Query(query string, args ... . interface{}) (*Rows, error)`
|
* `func (db *DB) Query(query string, args ...interface{}) (*Rows, error)`
|
||||||
|
|
||||||
A built-in method of the `sql.DB` object returned by `sql.Open`; executes query statements.
|
A built-in method of the `sql.DB` object returned by `sql.Open`; executes query statements.
|
||||||
|
|
||||||
|
@ -338,17 +325,33 @@ The `af` package encapsulates TDengine advanced functions such as connection man
|
||||||
|
|
||||||
#### Subscribe
|
#### Subscribe
|
||||||
|
|
||||||
* `func (conn *Connector) Subscribe(restart bool, topic string, sql string, interval time.Duration) (Subscriber, error)`
|
* `func NewConsumer(conf *Config) (*Consumer, error)`
|
||||||
|
|
||||||
Subscribe to data.
|
Creates a consumer.
|
||||||
|
|
||||||
* `func (s *taosSubscriber) Consume() (driver.Rows, error)`
|
* `func (c *Consumer) Subscribe(topics []string) error`
|
||||||
|
|
||||||
Consume the subscription data, returning the `Rows` structure of the `database/sql/driver` package.
|
Subscribes to a list of topics.
|
||||||
|
|
||||||
* `func (s *taosSubscriber) Unsubscribe(keepProgress bool)`
|
* `func (c *Consumer) Poll(timeout time.Duration) (*Result, error)`
|
||||||
|
|
||||||
Unsubscribe from data.
|
Polls for messages.
|
||||||
|
|
||||||
|
* `func (c *Consumer) Commit(ctx context.Context, message unsafe.Pointer) error`
|
||||||
|
|
||||||
|
Commits a message.
|
||||||
|
|
||||||
|
* `func (c *Consumer) FreeMessage(message unsafe.Pointer)`
|
||||||
|
|
||||||
|
Frees a message.
|
||||||
|
|
||||||
|
* `func (c *Consumer) Unsubscribe() error`
|
||||||
|
|
||||||
|
Unsubscribe.
|
||||||
|
|
||||||
|
* `func (c *Consumer) Close() error`
|
||||||
|
|
||||||
|
Closes the consumer. A usage sketch follows after this list.
|
||||||
|
|
||||||
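A hedged end-to-end sketch assembled from the functions listed above. The import path, the way `*tmq.Config` is populated, and the `Result` fields used below (`Data`, `Message`) are assumptions and may differ between driver versions; check the driver-go examples for the exact API.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/taosdata/driver-go/v3/af/tmq"
)

func main() {
	conf := tmq.NewConfig() // assumed constructor; set group.id, connection info, etc. on it
	defer conf.Destroy()    // assumed cleanup method

	consumer, err := tmq.NewConsumer(conf)
	if err != nil {
		panic(err)
	}
	defer consumer.Close()

	if err := consumer.Subscribe([]string{"example_topic"}); err != nil {
		panic(err)
	}

	for i := 0; i < 5; i++ {
		result, err := consumer.Poll(500 * time.Millisecond)
		if err != nil {
			panic(err)
		}
		if result == nil {
			continue
		}
		fmt.Println(result.Data)                                  // assumed field: the polled rows
		_ = consumer.Commit(context.Background(), result.Message) // assumed field: raw message handle
		consumer.FreeMessage(result.Message)
	}

	_ = consumer.Unsubscribe()
}
```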
#### schemaless
|
#### schemaless
|
||||||
|
|
||||||
|
@ -370,11 +373,7 @@ The `af` package encapsulates TDengine advanced functions such as connection man
|
||||||
|
|
||||||
Parameter bound single row insert.
|
Parameter bound single row insert.
|
||||||
|
|
||||||
* `func (conn *Connector) StmtQuery(sql string, params *param.Param) (rows driver.Rows, err error)`
|
* `func (conn *Connector) InsertStmt() *insertstmt.InsertStmt`
|
||||||
|
|
||||||
Parameter bound query that returns the `Rows` structure of the `database/sql/driver` package.
|
|
||||||
|
|
||||||
* `func (conn *Connector) InsertStmt() *insertstmt.
|
|
||||||
|
|
||||||
Initialize the parameters.
|
Initialize the parameters.
|
||||||
|
|
||||||
|
@ -412,4 +411,4 @@ The `af` package encapsulates TDengine advanced functions such as connection man
|
||||||
|
|
||||||
## API Reference
|
## API Reference
|
||||||
|
|
||||||
For the complete API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v2).
|
For the complete API, see the [driver-go documentation](https://pkg.go.dev/github.com/taosdata/driver-go/v3).
|
|
@ -1,6 +1,5 @@
|
||||||
---
|
---
|
||||||
toc_max_heading_level: 4
|
toc_max_heading_level: 4
|
||||||
sidebar_position: 5
|
|
||||||
sidebar_label: Rust
|
sidebar_label: Rust
|
||||||
title: TDengine Rust Connector
|
title: TDengine Rust Connector
|
||||||
---
|
---
|
||||||
|
@ -8,43 +7,45 @@ title: TDengine Rust Connector
|
||||||
import Tabs from '@theme/Tabs';
|
import Tabs from '@theme/Tabs';
|
||||||
import TabItem from '@theme/TabItem';
|
import TabItem from '@theme/TabItem';
|
||||||
|
|
||||||
import Preparation from "./_preparation.mdx"
|
import Preparition from "./_preparition.mdx"
|
||||||
import RustInsert from "../../07-develop/03-insert-data/_rust_sql.mdx"
|
import RustInsert from "../../07-develop/03-insert-data/_rust_sql.mdx"
|
||||||
import RustBind from "../../07-develop/03-insert-data/_rust_stmt.mdx"
|
import RustBind from "../../07-develop/03-insert-data/_rust_stmt.mdx"
|
||||||
import RustQuery from "../../07-develop/04-query-data/_rust.mdx"
|
import RustQuery from "../../07-develop/04-query-data/_rust.mdx"
|
||||||
|
|
||||||
[`taos`][taos] is the official Rust language connector for TDengine. Rust developers can develop applications to access the TDengine instance data.
|
[](https://crates.io/crates/taos)  [](https://docs.rs/taos)
|
||||||
|
|
||||||
Rust connector provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine instances via the TDengine client driver (taosc). The other is **Websocket connection**, which connects to TDengine instances via taosAdapter service.
|
`taos` is the official Rust connector for TDengine. Rust developers can develop applications to access the TDengine instance data.
|
||||||
|
|
||||||
The source code is hosted on [taosdata/taos-connector-rust](https://github.com/taosdata/taos-connector-rust).
|
`taos` provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine instances via the TDengine client driver (taosc). The other is the **WebSocket connection**, which connects to TDengine instances via the WebSocket interface provided by taosAdapter. You can specify a connection type with Cargo features. By default, both types are supported. The Websocket connection can be used on any platform. The native connection can be used on any platform that the TDengine Client supports.
|
||||||
|
|
||||||
|
The source code for the Rust connectors is located on [GitHub](https://github.com/taosdata/taos-connector-rust).
|
||||||
|
|
||||||
## Supported platforms
|
## Supported platforms
|
||||||
|
|
||||||
The platforms supported by native connections are the same as those supported by the TDengine client driver.
|
Native connections are supported on the same platforms as the TDengine client driver.
|
||||||
REST connections are supported on all platforms that can run Rust.
|
Websocket connections are supported on all platforms that can run Rust.
|
||||||
|
|
||||||
## Version support
|
## Version support
|
||||||
|
|
||||||
Please refer to [version support list](/reference/connector#version-support).
|
Please refer to the [version support list](/reference/connector#version-support).
|
||||||
|
|
||||||
The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 3.0 or higher to avoid known issues.
|
The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 3.0 or higher to avoid known issues.
|
||||||
|
|
||||||
## Installation
|
## Installation
|
||||||
|
|
||||||
### Pre-installation
|
### Pre-installation preparation
|
||||||
|
|
||||||
* Install the Rust development toolchain
|
* Install the Rust development toolchain
|
||||||
* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
|
* If using the native connection, please install the TDengine client driver. Please refer to [install client driver](/reference/connector#install-client-driver)
|
||||||
|
|
||||||
### Add dependencies
|
### Add taos dependency
|
||||||
|
|
||||||
Add the dependency to the [Rust](https://rust-lang.org) project as follows, depending on the connection method selected.
|
Depending on the connection method, add the [taos][taos] dependency in your Rust project as follows:
|
||||||
|
|
||||||
<Tabs defaultValue="default">
|
<Tabs defaultValue="default">
|
||||||
<TabItem value="default" label="Both">
|
<TabItem value="default" label="Support Both">
|
||||||
|
|
||||||
Add [taos] to the `Cargo.toml` file.
|
In `Cargo.toml`, add [taos][taos]:
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[dependencies]
|
[dependencies]
|
||||||
|
@ -53,9 +54,10 @@ taos = "*"
|
||||||
```
|
```
|
||||||
|
|
||||||
</TabItem>
|
</TabItem>
|
||||||
<TabItem value="native" label="Native only">
|
|
||||||
|
|
||||||
Add [taos] to the `Cargo.toml` file.
|
<TabItem value="native" label="native connection only">
|
||||||
|
|
||||||
|
In `Cargo.toml`, add [taos][taos] and enable the `native` feature:
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[dependencies]
|
[dependencies]
|
||||||
|
@ -65,7 +67,7 @@ taos = { version = "*", default-features = false, features = ["native"] }
|
||||||
</TabItem>
|
</TabItem>
|
||||||
<TabItem value="rest" label="Websocket only">
|
<TabItem value="rest" label="Websocket only">
|
||||||
|
|
||||||
Add [taos] to the `Cargo.toml` file and enable the `ws` feature.
|
In `Cargo.toml`, add [taos][taos] and enable the `ws` feature:
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[dependencies]
|
[dependencies]
|
||||||
|
@ -75,15 +77,15 @@ taos = { version = "*", default-features = false, features = ["ws"] }
|
||||||
</TabItem>
|
</TabItem>
|
||||||
</Tabs>
|
</Tabs>
|
||||||
|
|
||||||
## Create a connection
|
## Establishing a connection
|
||||||
|
|
||||||
In rust connector, we use a DSN connection string as a connection builder. For example,
|
[TaosBuilder] creates a connection builder from a DSN connection description string.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let builder = TaosBuilder::from_dsn("taos://")?;
|
let builder = TaosBuilder::from_dsn("taos://")?;
|
||||||
```
|
```
|
||||||
|
|
||||||
You can now use connection client to create the connection.
|
You can now use this object to create the connection.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let conn = builder.build()?;
|
let conn = builder.build()?;
|
||||||
|
@ -96,9 +98,7 @@ let conn1 = builder.build()?;
|
||||||
let conn2 = builder.build()?;
|
let conn2 = builder.build()?;
|
||||||
```
|
```
|
||||||
|
|
||||||
DSN is short for **D**ata **S**ource **N**ame string - [a data structure used to describe a connection to a data source](https://en.wikipedia.org/wiki/Data_source_name).
|
The structure of the DSN description string is as follows:
|
||||||
|
|
||||||
A common DSN is basically constructed as this:
|
|
||||||
|
|
||||||
```text
|
```text
|
||||||
<driver>[+<protocol>]://[[<username>:<password>@]<host>:<port>][/<database>][?<p1>=<v1>[&<p2>=<v2>]]
|
<driver>[+<protocol>]://[[<username>:<password>@]<host>:<port>][/<database>][?<p1>=<v1>[&<p2>=<v2>]]
|
||||||
|
@ -106,31 +106,31 @@ A common DSN is basically constructed as this:
|
||||||
|driver| protocol | | username | password | host | port | database | params |
|
|driver| protocol | | username | password | host | port | database | params |
|
||||||
```
|
```
|
||||||
|
|
||||||
- **Driver**: the main entrypoint to a processer. **Required**. In Rust connector, the supported driver names are listed here:
|
The parameters are described as follows:
|
||||||
- **taos**: the legacy TDengine connection data source.
|
|
||||||
- **tmq**: subscription data source from TDengine.
|
|
||||||
- **http/ws**: use websocket protocol via `ws://` scheme.
|
|
||||||
- **https/wss**: use websocket protocol via `wss://` scheme.
|
|
||||||
- **Protocol**: the additional information appended to driver, which can be be used to support different kind of data sources. By default, leave it empty for native driver(only under feature "native"), and `ws/wss` for websocket driver (only under feature "ws"). **Optional**.
|
|
||||||
- **Username**: as its definition, is the username to the connection. **Optional**.
|
|
||||||
- **Password**: the password of the username. **Optional**.
|
|
||||||
- **Host**: address host to the datasource. **Optional**.
|
|
||||||
- **Port**: address port to the datasource. **Optional**.
|
|
||||||
- **Database**: database name or collection name in the datasource. **Optional**.
|
|
||||||
- **Params**: a key-value map for any other informations to the datasource. **Optional**.
|
|
||||||
|
|
||||||
Here is a simple DSN connection string example:
|
- **driver**: Specify a driver name so that the connector can choose which method to use to establish the connection. Supported driver names are as follows:
|
||||||
|
- **taos**: Use the native TDengine connector driver (taosc).
|
||||||
|
- **tmq**: Use the TMQ to subscribe to data.
|
||||||
|
- **http/ws**: Use Websocket to establish connections.
|
||||||
|
- **https/wss**: Use Websocket to establish connections, and enable SSL/TLS.
|
||||||
|
- **protocol**: Specify which connection method to use. For example, `taos+ws://localhost:6041` uses Websocket to establish connections.
|
||||||
|
- **username/password**: Username and password used to create connections.
|
||||||
|
- **host/port**: Specifies the server and port to establish a connection. If you do not specify a hostname or port, native connections default to `localhost:6030` and Websocket connections default to `localhost:6041`.
|
||||||
|
- **database**: Specify the default database to connect to.
|
||||||
|
- **params**: Optional parameters.
|
||||||
|
|
||||||
|
A sample DSN description string is as follows:
|
||||||
|
|
||||||
```text
|
```text
|
||||||
taos+ws://localhost:6041/test
|
taos+ws://localhost:6041/test
|
||||||
```
|
```
|
||||||
|
|
||||||
which means connect `localhost` with port `6041` via `ws` protocol, and make `test` as the default database.
|
This indicates that the Websocket connection method is used on port 6041 to connect to the server localhost and use the database `test` by default.
|
||||||
|
|
||||||
So that you can use DSN to specify connection protocol at runtime:
|
You can create DSNs to connect to servers in your environment.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
use taos::*; // use it like a `prelude` mod, we need some traits at next.
|
use taos::*;
|
||||||
|
|
||||||
// use native protocol.
|
// use native protocol.
|
||||||
let builder = TaosBuilder::from_dsn("taos://localhost:6030")?;
|
let builder = TaosBuilder::from_dsn("taos://localhost:6030")?;
|
||||||
|
@ -140,7 +140,7 @@ let conn1 = builder.build();
|
||||||
let conn2 = TaosBuilder::from_dsn("taos+ws://localhost:6041")?;
|
let conn2 = TaosBuilder::from_dsn("taos+ws://localhost:6041")?;
|
||||||
```
|
```
|
||||||
|
|
||||||
After connected, you can perform the following operations on the database.
|
After the connection is established, you can perform operations on your database.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
|
async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
|
||||||
|
@ -179,7 +179,7 @@ async fn demo(taos: &Taos, db: &str) -> Result<(), Error> {
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Rust connector provides two kinds of ways to fetch data:
|
There are two ways to query data: using built-in types or the [serde](https://serde.rs) deserialization framework.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
// Query option 1, use rows stream.
|
// Query option 1, use rows stream.
|
||||||
|
@ -225,41 +225,41 @@ Rust connector provides two kinds of ways to fetch data:
|
||||||
|
|
||||||
<RustInsert />
|
<RustInsert />
|
||||||
|
|
||||||
#### Stmt bind
|
#### STMT Write
|
||||||
|
|
||||||
<RustBind />
|
<RustBind />
|
||||||
|
|
||||||
### Query data
|
### Query data
|
||||||
|
|
||||||
<RustQuery />|
|
<RustQuery />
|
||||||
|
|
||||||
## API Reference
|
## API Reference
|
||||||
|
|
||||||
### Connector builder
|
### Connector Constructor
|
||||||
|
|
||||||
Use DSN to directly construct a TaosBuilder object.
|
You create a connector constructor by using a DSN.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let builder = TaosBuilder::from_dsn("")? ;
|
let cfg = TaosBuilder::from_dsn("taos://")?;
|
||||||
```
|
```
|
||||||
|
|
||||||
Use `builder` to create many connections:
|
You use the builder object to create multiple connections.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let conn: Taos = cfg.build();
|
let conn: Taos = cfg.build();
|
||||||
```
|
```
|
||||||
|
|
||||||
### Connection pool
|
### Connection pooling
|
||||||
|
|
||||||
In complex applications, we recommend enabling connection pools. Connection pool for [taos] is implemented using [r2d2] by enabling "r2d2" feature.
|
In complex applications, we recommend enabling connection pools. [taos] implements connection pools based on [r2d2].
|
||||||
|
|
||||||
Basically, a connection pool with default parameters can be generated as:
|
A connection pool with default parameters can be created as follows.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let pool = TaosBuilder::from_dsn(dsn)?.pool()?;
|
let pool = TaosBuilder::from_dsn(dsn)?.pool()?;
|
||||||
```
|
```
|
||||||
|
|
||||||
You can set the connection pool parameters using the `PoolBuilder`.
|
You can also set connection pool parameters using the pool builder.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let dsn = "taos://localhost:6030";
|
let dsn = "taos://localhost:6030";
|
||||||
|
@ -279,17 +279,17 @@ In the application code, use `pool.get()?` to get a connection object [Taos].
|
||||||
let taos = pool.get()?;
|
let taos = pool.get()?;
|
||||||
```
|
```
|
||||||
|
|
||||||
### Connection methods
|
### Connectors
|
||||||
|
|
||||||
The [Taos] connection struct provides several APIs for convenient use.
|
The [Taos][struct.Taos] object provides an API to perform operations on multiple databases.
|
||||||
|
|
||||||
1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT` etc. and return affected rows (only meaningful to `INSERT`).
|
1. `exec`: Execute some non-query SQL statements, such as `CREATE`, `ALTER`, `INSERT`, etc.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let affected_rows = taos.exec("INSERT INTO tb1 VALUES(now, NULL)").await?;
|
let affected_rows = taos.exec("INSERT INTO tb1 VALUES(now, NULL)").await?;
|
||||||
```
|
```
|
||||||
|
|
||||||
2. `exec_many`: You can execute many SQL statements in order with `exec_many` method.
|
2. `exec_many`: Run multiple SQL statements simultaneously or in order.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
taos.exec_many([
|
taos.exec_many([
|
||||||
|
@ -299,15 +299,15 @@ The [Taos] connection struct provides several APIs for convenient use.
|
||||||
]).await?;
|
]).await?;
|
||||||
```
|
```
|
||||||
|
|
||||||
3. `query`: Execute the query statement and return the [ResultSet] object.
|
3. `query`: Run a query statement and return a [ResultSet] object.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let mut q = taos.query("select * from log.logs").await?
|
let mut q = taos.query("select * from log.logs").await?;
|
||||||
```
|
```
|
||||||
|
|
||||||
The [ResultSet] object stores the query result data and basic information about the returned columns (column name, type, length).
|
The [ResultSet] object stores query result data and the names, types, and lengths of the returned columns.
|
||||||
|
|
||||||
Get filed information with `fields` method.
|
You can obtain column information by using [.fields()].
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let cols = q.fields();
|
let cols = q.fields();
|
||||||
|
@ -316,7 +316,7 @@ The [Taos] connection struct provides several APIs for convenient use.
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Users could fetch data by rows.
|
You can fetch the result data row by row.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let mut rows = result.rows();
|
let mut rows = result.rows();
|
||||||
|
@ -332,7 +332,7 @@ The [Taos] connection struct provides several APIs for convenient use.
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
Or use it with [serde](https://serde.rs) deserialization.
|
Or use the [serde](https://serde.rs) deserialization framework.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
#[derive(Debug, Deserialize)]
|
#[derive(Debug, Deserialize)]
|
||||||
|
@ -359,15 +359,17 @@ The [Taos] connection struct provides several APIs for convenient use.
|
||||||
|
|
||||||
Note that Rust asynchronous functions and an asynchronous runtime are required.
|
Note that Rust asynchronous functions and an asynchronous runtime are required.
|
||||||
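For example, a minimal async entry point might look as follows. This is a sketch only: the choice of the tokio runtime and `anyhow` for error handling is this example's assumption, not a requirement of the connector, and depending on the crate version you may need to `.await` the `build()` call.

```rust
use taos::*;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Illustrative DSN; point it at your own deployment.
    let taos = TaosBuilder::from_dsn("taos://localhost:6030")?.build()?;
    let affected = taos.exec("CREATE DATABASE IF NOT EXISTS example_db").await?;
    println!("affected rows: {}", affected);
    Ok(())
}
```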
|
|
||||||
[Taos] provides a few Rust methods that encapsulate SQL to reduce the frequency of `format!` code blocks.
|
[Taos][struct.Taos] provides Rust methods for some SQL statements to reduce the number of `format!` calls; a usage sketch follows the list below.
|
||||||
|
|
||||||
- `.describe(table: &str)`: Executes `DESCRIBE` and returns a Rust data structure.
|
- `.describe(table: &str)`: Executes `DESCRIBE` and returns a Rust data structure.
|
||||||
- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
|
- `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
|
||||||
- `.use_database(database: &str)`: Executes the `USE` statement.
|
- `.use_database(database: &str)`: Executes the `USE` statement.
|
||||||
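A brief sketch of these helpers (database and table names are placeholders; an async context as described above is assumed):

```rust
// `taos` is a connection obtained from a TaosBuilder, as shown earlier.
taos.create_database("example_db").await?;
taos.use_database("example_db").await?;
let description = taos.describe("meters").await?;
// Inspect `description` (column names and types) as needed.
```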
|
|
||||||
### Bind API
|
In addition, this structure is also the entry point for [Parameter Binding](#Parameter Binding Interface) and [Line Protocol Interface](#Line Protocol Interface). Please refer to the specific API descriptions for usage.
|
||||||
|
|
||||||
Similar to the C interface, Rust provides the bind interface's wrapping. First, create a bind object [Stmt] for a SQL command with the [Taos] object.
|
### Bind Interface
|
||||||
|
|
||||||
|
Similar to the C interface, Rust provides a wrapper for the bind interface. First, the [Taos][struct.taos] object creates a parameter binding object [Stmt] for an SQL statement.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let mut stmt = Stmt::init(&taos).await?;
|
let mut stmt = Stmt::init(&taos).await?;
|
||||||
|
@ -387,17 +389,17 @@ stmt.set_tbname("d0")?;
|
||||||
|
|
||||||
#### `.set_tags(&[tag])`
|
#### `.set_tags(&[tag])`
|
||||||
|
|
||||||
Bind tag values when the SQL statement uses a super table.
|
Bind tag values when the SQL statement uses a super table (the subtable name is set with `.set_tbname`).
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let mut stmt = taos.stmt("insert into ? using stb0 tags(?) values(? ,?)")?;
|
let mut stmt = taos.stmt("insert into ? using stb0 tags(?) values(? ,?)")?;
|
||||||
stmt.set_tbname("d0")?;
|
stmt.set_tbname("d0")?;
|
||||||
stmt.set_tags(&[Value::VarChar("涛思".to_string())])?;
|
stmt.set_tags(&[Value::VarChar("taos".to_string())])?;
|
||||||
```
|
```
|
||||||
|
|
||||||
#### `.bind(&[column])`
|
#### `.bind(&[column])`
|
||||||
|
|
||||||
Bind value types. Use the [ColumnView] structure to construct the desired type and bind.
|
Bind value types. Use the [ColumnView] structure to create and bind the required types.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let params = vec![
|
let params = vec![
|
||||||
|
@ -421,42 +423,42 @@ let rows = stmt.bind(¶ms)?.add_batch()?.execute()?;
|
||||||
|
|
||||||
#### `.execute()`
|
#### `.execute()`
|
||||||
|
|
||||||
Execute to insert all bind records. [Stmt] objects can be reused, re-bind, and executed after execution. Remember to call `add_batch` before `execute`.
|
Execute the SQL statement. [Stmt] objects can be reused, re-bound, and executed again after execution. Before executing, ensure that all data has been added to the queue with `.add_batch`.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
stmt.add_batch()?.execute()?;
|
stmt.execute()?;
|
||||||
|
|
||||||
// next bind cycle.
|
// next bind cycle.
|
||||||
//stmt.set_tbname()?;
|
//stmt.set_tbname()?;
|
||||||
//stmt.bind()?;
|
//stmt.bind()?;
|
||||||
//stmt.add_batch().execute()? ;
|
//stmt.execute()?;
|
||||||
```
|
```
|
||||||
|
|
||||||
A runnable example for bind can be found [here](https://github.com/taosdata/taos-connector-rust/blob/main/examples/bind.rs).
|
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/examples/bind.rs).
|
||||||
|
|
||||||
### Subscription API
|
### Subscriptions
|
||||||
|
|
||||||
Users can subscribe a [TOPIC](../../../taos-sql/tmq/) with TMQ(the TDengine Message Queue) API.
|
TDengine subscriptions are implemented through [TMQ](../../../taos-sql/tmq/).
|
||||||
|
|
||||||
Start from a TMQ builder:
|
You create a TMQ connector by using a DSN.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let tmq = TmqBuilder::from_dsn("taos://localhost:6030/?group.id=test")?;
|
let tmq = TmqBuilder::from_dsn("taos://localhost:6030/?group.id=test")?;
|
||||||
```
|
```
|
||||||
|
|
||||||
Build a consumer:
|
Create a consumer:
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
let mut consumer = tmq.build()?;
|
let mut consumer = tmq.build()?;
|
||||||
```
|
```
|
||||||
|
|
||||||
Subscribe a topic:
|
A single consumer can subscribe to one or more topics.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
consumer.subscribe(["tmq_meters"]).await?;
|
consumer.subscribe(["tmq_meters"]).await?;
|
||||||
```
|
```
|
||||||
|
|
||||||
Consume messages, and commit the offset for each message.
|
The TMQ message queue is a [futures::Stream](https://docs.rs/futures/latest/futures/stream/index.html). You can use the corresponding API to consume each message in the queue and then use `.commit` to mark it as consumed.
|
||||||
|
|
||||||
```rust
|
```rust
|
||||||
{
|
{
|
||||||
|
@ -495,22 +497,21 @@ Unsubscribe:
|
||||||
consumer.unsubscribe().await;
|
consumer.unsubscribe().await;
|
||||||
```
|
```
|
||||||
|
|
||||||
In TMQ DSN, you must choose to subscribe with a group id. Also, there's several options could be set:
|
The following parameters can be configured for the TMQ DSN. Only `group.id` is mandatory. An example DSN follows the list below.
|
||||||
|
|
||||||
- `group.id`: **Required**, a group id is any visible string you set.
|
- `group.id`: Within a consumer group, load balancing is implemented by consuming messages on an at-least-once basis.
|
||||||
- `client.id`: a optional client description string.
|
- `client.id`: Subscriber client ID.
|
||||||
- `auto.offset.reset`: choose to subscribe from *earliest* or *latest*, default is *none* which means 'earliest'.
|
- `auto.offset.reset`: Initial point of subscription. *earliest* subscribes from the beginning, and *latest* subscribes from the newest message. The default is earliest. Note: This parameter is set per consumer group.
|
||||||
- `enable.auto.commit`: automatically commit with specified time interval. By default - in the recommended way _ you must use `commit` to ensure that you've consumed the messages correctly, otherwise, consumers will received repeated messages when re-subscribe.
|
- `enable.auto.commit`: Automatically commits. This can be enabled when data consistency is not essential.
|
||||||
- `auto.commit.interval.ms`: the auto commit interval in milliseconds.
|
- `auto.commit.interval.ms`: Interval for automatic commits.
|
||||||
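For example, a TMQ DSN that sets several of these options explicitly might look as follows (all values are illustrative; only `group.id` is required):

```rust
// Illustrative values; adjust host, port, and options for your environment.
let dsn = "taos://localhost:6030/?group.id=example_group&client.id=example_client&auto.offset.reset=earliest&enable.auto.commit=true&auto.commit.interval.ms=5000";
let tmq = TmqBuilder::from_dsn(dsn)?;
```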
|
|
||||||
Check the whole subscription example at [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/examples/subscribe.rs).
|
For more information, see [GitHub sample file](https://github.com/taosdata/taos-connector-rust/blob/main/examples/subscribe.rs).
|
||||||
|
|
||||||
Please move to the Rust documentation hosting page for other related structure API usage instructions: <https://docs.rs/taos>.
|
For information about other structure APIs, see the [Rust documentation](https://docs.rs/taos).
|
||||||
|
|
||||||
[TDengine]: https://github.com/taosdata/TDengine
|
[taos]: https://github.com/taosdata/rust-connector-taos
|
||||||
[r2d2]: https://crates.io/crates/r2d2
|
[r2d2]: https://crates.io/crates/r2d2
|
||||||
[Taos]: https://docs.rs/taos/latest/taos/struct.Taos.html
|
[TaosBuilder]: https://docs.rs/taos/latest/taos/struct.TaosBuilder.html
|
||||||
[ResultSet]: https://docs.rs/taos/latest/taos/struct.ResultSet.html
|
[TaosCfg]: https://docs.rs/taos/latest/taos/struct.TaosCfg.html
|
||||||
[Value]: https://docs.rs/taos/latest/taos/enum.Value.html
|
[struct.Taos]: https://docs.rs/taos/latest/taos/struct.Taos.html
|
||||||
[Stmt]: https://docs.rs/taos/latest/taos/stmt/struct.Stmt.html
|
[Stmt]: https://docs.rs/taos/latest/taos/struct.Stmt.html
|
||||||
[taos]: https://crates.io/crates/taos
|
|