Merge branch '3.0' into fix/TD-17791
This commit is contained in:
commit 7119aa2e21

@@ -0,0 +1,58 @@
# Contributing Guide

We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and submit your code on GitHub. However, we would like developers to follow our guidelines so as to contribute more effectively.

## Reporting bugs

- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. Please give a **detailed description** of the problem you encountered, ideally with detailed steps to reproduce it.

- Attaching the log files generated by the bug is welcome.

## Important code submission rules

- Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept the agreement, please stop submitting.

- Please solve an issue or add a feature registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).

- If no corresponding issue or feature is found in the issue tracker, please **create a new issue**.

- When submitting code to our repository, please create a **PR that includes the issue number**.

## Contribution guidelines

1. Please write in a friendly tone.

2. The **active voice** is generally preferable to the passive voice. Sentences in the active voice highlight who performs the action, rather than the recipient of the action highlighted by the passive voice.

3. Documentation writing advice

   - Spell the product name "TDengine" correctly: "TD" in capital letters, with no space between "TD" and "engine" (**correct spelling: TDengine**).

   - Leave only one space after periods and other punctuation marks.

4. **Use simple sentences** rather than complex sentences whenever possible.

## Gifts for contributors

As long as you are a developer who contributes to TDengine, whether through code contributions, bug fixes, feature requests, or documentation changes, you will receive a **special contributor souvenir gift**!

<p align="left">
  <img src="docs/assets/contributing-cup.jpg" alt="" width="200" />
  <img src="docs/assets/contributing-notebook.jpg" alt="" width="200" />
  <img src="docs/assets/contributing-shirt.jpg" alt="" width="200" />
</p>

The TDengine community is committed to helping more developers understand and use TDengine.

Please fill out the **Contributor Submission Form** to choose the gift you would like to receive.

- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)

## Contact us

If you have any problems to solve or questions to answer, you can add our WeChat account: TDengineECO.
@@ -1,15 +1,64 @@
# Contributing

We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and even submit your code on GitHub. However, we would like developers to follow the guidelines in this document to ensure effective cooperation.

## Reporting a bug

- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. We would appreciate a **detailed description** of the problem you encountered, including steps to reproduce it.

- Attaching the log files caused by the bug is greatly appreciated.

## Guidelines for committing code

- You must agree to the **Contributor License Agreement (CLA)** before submitting your code patch. Follow the **[TAOSData CLA](https://cla-assistant.io/taosdata/TDengine)** link to read through and sign the agreement. If you do not accept the agreement, your contributions cannot be accepted.

- Please solve an issue or add a feature registered in the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**.

- If no corresponding issue or feature is found in the issue tracker, please **create one**.

- When submitting your code to our repository, please create a pull request with the **issue number** included.
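One common way to follow the last rule, sketched below with a throwaway repository and a made-up issue number (TD-12345 is purely illustrative): carry the issue number in the branch name so the resulting pull request references it.

```shell
# Sketch: name the working branch after the issue it fixes (TD-12345 is made up).
# A throwaway repository is created here only so the sketch is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git checkout -qb fix/TD-12345
git branch --show-current
```

When opening the pull request, mention the same issue number in the title or description as well.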
## Guidelines for communicating

1. Please be **nice and polite** in the description.

2. **Active voice is better than passive voice in general.** Sentences in the active voice highlight who is performing the action, rather than the recipient of the action highlighted by the passive voice.

3. Documentation writing advice

   - Spell the product name "TDengine" correctly. "TD" is written in capital letters, and there is no space between "TD" and "engine" (**correct spelling: TDengine**).

   - Please **capitalize the first letter** of every sentence.

   - Leave **only one space** after periods or other punctuation marks.

   - Use **American spelling**.

   - When possible, **use the second person** rather than the first person (e.g. "You are recommended to use a reverse proxy such as Nginx." rather than "We recommend using a reverse proxy such as Nginx.").

4. Use **simple sentences** rather than complex sentences.
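Some of the documentation rules above can be partly mechanized. A rough sketch (the sample file stands in for a real docs page, and the patterns are intentionally loose):

```shell
# Flag likely style violations in a docs file:
#  - more than one space after a period
#  - misspellings of the product name, e.g. "TD engine"
doc=$(mktemp)
printf 'Bad.  Two spaces after the period.\nUse TD engine here.\nTDengine is correct.\n' > "$doc"

# Lines with two or more spaces after a period:
grep -nE '\.  +' "$doc"
# Lines with a misspelled product name (correct "TDengine" lines are filtered out):
grep -niE 'td ?engine' "$doc" | grep -v 'TDengine'
```

With the sample file above, the first grep reports line 1 and the second reports line 2.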
## Gifts for the contributors

As long as you contribute to TDengine, whether it's code contributions to fix bugs or add feature requests, or documentation changes, **you are eligible for a very special Contributor Souvenir Gift!**

**You can choose one of the following gifts:**

<p align="left">
  <img src="docs/assets/contributing-cup.jpg" alt="" width="200" />
  <img src="docs/assets/contributing-notebook.jpg" alt="" width="200" />
  <img src="docs/assets/contributing-shirt.jpg" alt="" width="200" />
</p>

The TDengine community is committed to making TDengine accepted and used by more developers.

Just fill out the **Contributor Submission Form** to choose your desired gift.

- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)

## Contact us

If you have any problems or questions that need help from us, please feel free to add our WeChat account: TDengineECO.
Jenkinsfile2 (60 changed lines)

@@ -1,6 +1,7 @@
import hudson.model.Result
import hudson.model.*;
import jenkins.model.CauseOfInterruption

docs_only=0
node {
}
@@ -29,6 +30,48 @@ def abort_previous(){

    if (buildNumber > 1) milestone(buildNumber - 1)
    milestone(buildNumber)
}

def check_docs() {
    if (env.CHANGE_URL =~ /\/TDengine\//) {
        sh '''
        hostname
        date
        env
        '''
        sh '''
        cd ${WKC}
        git reset --hard
        git clean -fxd
        rm -rf examples/rust/
        git remote prune origin
        git fetch
        '''
        script {
            sh '''
            cd ${WKC}
            git checkout ''' + env.CHANGE_TARGET + '''
            '''
        }
        sh '''
        cd ${WKC}
        git pull >/dev/null
        git fetch origin +refs/pull/${CHANGE_ID}/merge
        git checkout -qf FETCH_HEAD
        '''
        def file_changed = sh (
            script: '''
            cd ${WKC}
            git --no-pager diff --name-only FETCH_HEAD `git merge-base FETCH_HEAD ${CHANGE_TARGET}` | grep -v "^docs/en/" | grep -v "^docs/zh/"
            ''',
            returnStdout: true
        ).trim()
        if (file_changed == '') {
            echo "docs PR"
            docs_only = 1
        } else {
            echo file_changed
        }
    }
}
def pre_test(){
    sh '''
    hostname
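The docs-only test in `check_docs()` above can be sketched outside Jenkins as a plain shell filter: a pull request counts as docs-only when no changed file survives the two `grep -v` passes. (The file list below is made up for illustration.)

```shell
# Decide whether a change set touches anything outside docs/en/ and docs/zh/.
changed_files='docs/en/intro.md
docs/zh/intro.md'

# grep exits non-zero when everything is filtered out, hence the "|| true".
non_docs=$(printf '%s\n' "$changed_files" | grep -v '^docs/en/' | grep -v '^docs/zh/' || true)

if [ -z "$non_docs" ]; then
    echo "docs PR"      # corresponds to docs_only=1 in the pipeline
else
    echo "$non_docs"
fi
```

With the sample list above this prints `docs PR`; adding a file such as `source/tsdb.c` to the list would print that file instead.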
@@ -307,10 +350,27 @@ pipeline {

        WKPY = '/var/lib/jenkins/workspace/taos-connector-python'
    }
    stages {
        stage('check') {
            when {
                allOf {
                    not { expression { env.CHANGE_BRANCH =~ /docs\// }}
                    not { expression { env.CHANGE_URL =~ /\/TDinternal\// }}
                }
            }
            parallel {
                stage('check docs') {
                    agent{label " worker03 || slave215 || slave217 || slave219 || Mac_catalina "}
                    steps {
                        check_docs()
                    }
                }
            }
        }
        stage('run test') {
            when {
                allOf {
                    not { expression { env.CHANGE_BRANCH =~ /docs\// }}
                    expression { docs_only == 0 }
                }
            }
            parallel {
README.md (19 changed lines)

@@ -15,7 +15,6 @@
[](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[](https://bestpractices.coreinfrastructure.org/projects/4201)

English | [简体中文](README-CN.md) | We are hiring, check [here](https://tdengine.com/careers)

# What is TDengine?
@@ -42,7 +41,7 @@ For user manual, system design and architecture, please refer to [TDengine Documentation]

At the moment, the TDengine server supports running on Linux and Windows systems. Applications on any OS can also use the RESTful interface of taosAdapter to connect to the taosd service. TDengine supports the X64/ARM64 CPU architectures, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future.

You can choose to install from source code according to your needs, or install via a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide only applies to installing from source.

TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were part of TDengine. By default, compiling TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to have them compiled with TDengine.
@@ -58,7 +57,6 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev

#### Install build dependencies for taosTools

To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
@@ -82,7 +80,6 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel

#### Install build dependencies for taosTools on CentOS

#### CentOS 7.9
@@ -100,14 +97,14 @@ sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgco

Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy is not found. But snappy still works well.

If the PowerTools installation fails, you can try to use:

```
sudo yum config-manager --set-enabled powertools
```

### Setup golang environment

TDengine includes a few components, such as taosAdapter, that are developed in the Go language. Please refer to the official documentation at golang.org for Go environment setup.

Please use Go version 1.14 or later. For users in China, we recommend using a proxy to accelerate package downloading.
@@ -125,7 +122,7 @@ cmake .. -DBUILD_HTTP=false

### Setup rust environment

TDengine includes a few components developed in the Rust language. Please refer to the official documentation at rust-lang.org for Rust environment setup.

## Get the source codes
@@ -136,7 +133,6 @@

```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```

You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You will need to upload your ssh public key to GitHub first. Please refer to the GitHub official documentation for details.
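One commonly used form of that ~/.gitconfig change, shown here only as an illustration (adjust it to your own setup), maps GitHub HTTPS URLs onto SSH:

```
[url "ssh://git@github.com/"]
	insteadOf = https://github.com/
```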
@@ -146,14 +142,12 @@ You can modify the file ~/.gitconfig to use ssh protocol instead of https for be

## Special Note

The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.

## Build TDengine

### On Linux platform

You can run the bash script `build.sh` to build both TDengine and taosTools, including taosBenchmark and taosdump, as below:
@@ -169,7 +163,6 @@

```bash
cmake .. -DBUILD_TOOLS=true
make
```

You can use Jemalloc as the memory allocator instead of glibc:
@@ -309,7 +302,7 @@ Query OK, 2 row(s) in set (0.001700s)

## Official Connectors

TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.

- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
|
Binary file not shown.
After Width: | Height: | Size: 344 KiB |
Binary file not shown.
After Width: | Height: | Size: 266 KiB |
Binary file not shown.
After Width: | Height: | Size: 258 KiB |
|
@@ -223,7 +223,7 @@ phpize && ./configure && make -j && make install

**Specify TDengine Location:**

```shell
phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
```

> `--with-tdengine-dir=` is followed by the TDengine installation location.
@@ -43,7 +43,7 @@ Query OK, 2 row(s) in set (0.001100s)

To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (The difference between the maximum and the minimum), and `last_row` (the last row).

For detailed query syntax, see [Select](../../taos-sql/select).

## Aggregation among Tables
@@ -74,7 +74,7 @@

```
taos> SELECT count(*), max(current) FROM meters where groupId = 2;
Query OK, 1 row(s) in set (0.002136s)
```

In [Select](../../taos-sql/select), all query operations are marked as to whether they support STables or not.

## Down Sampling and Interpolation
@@ -122,7 +122,7 @@ In many use cases, it's hard to align the timestamp of the data collected by each

Interpolation can be performed in TDengine if there is no data in a time range.

For more information, see [Aggregate by Window](../../taos-sql/distinguished).

## Examples
@@ -102,7 +102,7 @@ Replace `aggfn` with the name of your function.

## Interface Functions

There are strict naming conventions for interface functions. The names of the start, finish, init, and destroy interfaces must be <udf-name\>_start, <udf-name\>_finish, <udf-name\>_init, and <udf-name\>_destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function.

Interface functions return a value that indicates whether the operation was successful. If an operation fails, the interface function returns an error code. Otherwise, it returns TSDB_CODE_SUCCESS. The error codes are defined in `taoserror.h` and in the common API error codes in `taos.h`. For example, TSDB_CODE_UDF_INVALID_INPUT indicates invalid input. TSDB_CODE_OUT_OF_MEMORY indicates insufficient memory.
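The naming rule above is mechanical, so it can be sketched as a simple shell expansion (the UDF name `myfn` below is made up for illustration):

```shell
# Derive the four required interface symbol names from a UDF name.
udf=myfn
for suffix in start finish init destroy; do
    echo "${udf}_${suffix}"
done
```

For `myfn` the loop prints `myfn_start`, `myfn_finish`, `myfn_init`, and `myfn_destroy`, which are the symbol names the convention requires.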
@@ -15,11 +15,6 @@ title: Escape Characters

| `\%` | % see below for details |
| `\_` | \_ see below for details |

## Restrictions

1. If there are escape characters in identifiers (database name, table name, column name)
@@ -3,7 +3,7 @@ title: TDengine SQL

description: "The syntax supported by TDengine SQL"
---

This section explains the syntax of SQL to perform operations on databases, tables, and STables, insert data, select data, and use functions. We also provide some tips that can be used in TDengine SQL. If you have previous experience with SQL, this section will be fairly easy to understand. If you do not have previous experience with SQL, you'll come to appreciate the simplicity and power of SQL. TDengine SQL has been enhanced in version 3.0, and the query engine has been rearchitected. For information about how TDengine SQL has changed, see [Changes in TDengine 3.0](../taos-sql/changes).

TDengine SQL is the major interface for users to write data into or query from TDengine. It uses standard SQL syntax and includes extensions and optimizations for time-series data and services. The maximum length of a TDengine SQL statement is 1 MB. Note that keyword abbreviations are not supported. For example, DELETE cannot be entered as DEL.
@@ -13,16 +13,16 @@ TDengine community version provides deb and rpm packages for users to choose from

<Tabs>
<TabItem label="Install Deb" value="debinst">

1. Download the deb package from the official website, for example TDengine-server-3.0.0.0-Linux-x64.deb
2. In the directory where the package is located, execute the command below

```bash
$ sudo dpkg -i TDengine-server-3.0.0.0-Linux-x64.deb
(Reading database ... 137504 files and directories currently installed.)
Preparing to unpack TDengine-server-3.0.0.0-Linux-x64.deb ...
TDengine is removed successfully!
Unpacking tdengine (3.0.0.0) over (3.0.0.0) ...
Setting up tdengine (3.0.0.0) ...
Start to install TDengine...

System hostname is: ubuntu-1804
```
@@ -45,14 +45,14 @@ TDengine is installed successfully!

<TabItem label="Install RPM" value="rpminst">

1. Download the rpm package from the official website, for example TDengine-server-3.0.0.0-Linux-x64.rpm;
2. In the directory where the package is located, execute the command below

```
$ sudo rpm -ivh TDengine-server-3.0.0.0-Linux-x64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:tdengine-3.0.0.0-3 ################################# [100%]
Start to install TDengine...

System hostname is: centos7
```
@@ -76,27 +76,27 @@ TDengine is installed successfully!

<TabItem label="Install tar.gz" value="tarinst">

1. Download the tar.gz package, for example TDengine-server-3.0.0.0-Linux-x64.tar.gz;
2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated by decompressing, i.e. "TDengine-enterprise-server-3.0.0.0/" in this example, and execute the `install.sh` script.

```bash
$ tar xvzf TDengine-enterprise-server-3.0.0.0-Linux-x64.tar.gz
TDengine-enterprise-server-3.0.0.0/
TDengine-enterprise-server-3.0.0.0/driver/
TDengine-enterprise-server-3.0.0.0/driver/vercomp.txt
TDengine-enterprise-server-3.0.0.0/driver/libtaos.so.3.0.0.0
TDengine-enterprise-server-3.0.0.0/install.sh
TDengine-enterprise-server-3.0.0.0/examples/
...

$ ll
total 43816
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-3.0.0.0/
-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-3.0.0.0-Linux-x64.tar.gz

$ cd TDengine-enterprise-server-3.0.0.0/

$ ll
total 40784
```
@@ -146,7 +146,7 @@ Deb package of TDengine can be uninstalled as below:

```bash
$ sudo dpkg -r tdengine
(Reading database ... 137504 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
TDengine is removed successfully!
```
@@ -245,7 +245,7 @@ For example, if using `systemctl`, the commands to start, stop, restart and check TDengine are below:
 
 - Check server status:`systemctl status taosd`
 
-From version 2.4.0.0, a new independent component named as `taosAdapter` has been included in TDengine. `taosAdapter` should be started and stopped using `systemctl`.
+Another component, `taosAdapter`, provides the HTTP service for TDengine; it should also be started and stopped using `systemctl`.
 
 If the server process is OK, the output of `systemctl status` is like below:
 
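The service-management commands above all share one shape, `systemctl <action> <unit>`, applied to the `taosd` and `taosadapter` units. A small illustrative sketch (the helper function is my own, not part of TDengine):

```python
# Illustrative only: compose the systemctl command line for managing the
# taosd (or taosadapter) service, as described in the section above.
def systemctl_cmd(action, unit="taosd"):
    allowed = {"start", "stop", "restart", "status"}
    if action not in allowed:
        raise ValueError("unsupported action: " + action)
    return ["systemctl", action, unit]

print(" ".join(systemctl_cmd("status")))                  # systemctl status taosd
print(" ".join(systemctl_cmd("start", "taosadapter")))    # systemctl start taosadapter
```

In practice these commands require root privileges, so they are normally run via `sudo`.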
@@ -61,7 +61,7 @@ phpize && ./configure && make -j && make install
 **Specify TDengine location:**
 
 ```shell
-phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/2.4.0.0 && make -j && make install
+phpize && ./configure --with-tdengine-dir=/usr/local/Cellar/tdengine/3.0.0.0 && make -j && make install
 ```
 
 > `--with-tdengine-dir=` is followed by TDengine location.
@@ -30,7 +30,7 @@ taosAdapter provides the following features.
 
 ### Install taosAdapter
 
-taosAdapter has been part of TDengine server software since TDengine v2.4.0.0. If you use the TDengine server, you don't need additional steps to install taosAdapter. You can download taosAdapter from [TDengine official website](https://tdengine.com/all-downloads/) to download the TDengine server installation package (taosAdapter is included in v2.4.0.0 and later version). If you need to deploy taosAdapter separately on another server other than the TDengine server, you should install the full TDengine server package on that server to install taosAdapter. If you need to build taosAdapter from source code, you can refer to the [Building taosAdapter]( https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) documentation.
+If you use the TDengine server, you don't need additional steps to install taosAdapter. You can go to [TDengine 3.0 released versions](../../releases) to download the TDengine server installation package. To deploy taosAdapter separately on a server other than the TDengine server, install the full TDengine server package on that server. To build taosAdapter from source code, refer to the [Building taosAdapter](https://github.com/taosdata/taosadapter/blob/3.0/BUILD.md) documentation.
 
 ### Start/Stop taosAdapter
 
@@ -75,7 +75,6 @@ taos --dump-config
 | Applicable | Server Only |
 | Meaning | The port for external access after `taosd` is started |
 | Default Value | 6030 |
-| Note | REST service is provided by `taosd` before 2.4.0.0 but by `taosAdapter` after 2.4.0.0, the default port of REST service is 6041 |
 
 :::note
 TDengine uses 13 continuous ports, both TCP and UDP, starting with the port specified by `serverPort`. You should ensure, in your firewall rules, that these ports are kept open. Below table describes the ports used by TDengine in details.
@@ -87,11 +86,11 @@ TDengine uses 13 continuous ports, both TCP and UDP, starting with the port specified by `serverPort`.
 | TCP | 6030 | Communication between client and server | serverPort |
 | TCP | 6035 | Communication among server nodes in cluster | serverPort+5 |
 | TCP | 6040 | Data syncup among server nodes in cluster | serverPort+10 |
-| TCP | 6041 | REST connection between client and server | Prior to 2.4.0.0: serverPort+11; After 2.4.0.0 refer to [taosAdapter](/reference/taosadapter/) |
+| TCP | 6041 | REST connection between client and server | Please refer to [taosAdapter](../taosadapter/) |
 | TCP | 6042 | Service Port of Arbitrator | The parameter of Arbitrator |
 | TCP | 6043 | Service Port of TaosKeeper | The parameter of TaosKeeper |
-| TCP | 6044 | Data access port for StatsD | refer to [taosAdapter](/reference/taosadapter/) |
-| UDP | 6045 | Data access for statsd | refer to [taosAdapter](/reference/taosadapter/) |
+| TCP | 6044 | Data access port for StatsD | refer to [taosAdapter](../taosadapter/) |
+| UDP | 6045 | Data access for statsd | refer to [taosAdapter](../taosadapter/) |
 | TCP | 6060 | Port of Monitoring Service in Enterprise version | |
 | UDP | 6030-6034 | Communication between client and server | serverPort |
 | UDP | 6035-6039 | Communication among server nodes in cluster | serverPort |
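The port table above is all simple offset arithmetic from `serverPort`. A quick Python sketch of that layout (the dictionary keys are my own naming, not TDengine's; the REST port 6041 is served by taosAdapter in 3.0 and is included only for convenience):

```python
# Sketch of the port layout described in the table above.
# TDengine uses 13 consecutive ports starting at serverPort (default 6030).
def tdengine_ports(server_port=6030):
    return {
        "client_tcp": server_port,                                      # TCP 6030
        "cluster_tcp": server_port + 5,                                 # TCP 6035
        "sync_tcp": server_port + 10,                                   # TCP 6040
        "rest_tcp": 6041,                                               # TCP 6041, taosAdapter
        "client_udp": list(range(server_port, server_port + 5)),        # UDP 6030-6034
        "cluster_udp": list(range(server_port + 5, server_port + 10)),  # UDP 6035-6039
        "all_13": list(range(server_port, server_port + 13)),           # the 13 consecutive ports
    }

ports = tdengine_ports()
print(ports["cluster_tcp"])  # 6035
```

If `serverPort` is changed from the default, every firewall rule has to shift by the same offset, which is exactly what this function captures.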
@@ -777,12 +776,6 @@ To prevent system resource from being exhausted by multiple concurrent streams,
 
 ## HTTP Parameters
 
-:::note
-HTTP service was provided by `taosd` prior to version 2.4.0.0 and is provided by `taosAdapter` after version 2.4.0.0.
-The parameters described in this section are only application in versions prior to 2.4.0.0. If you are using any version from 2.4.0.0, please refer to [taosAdapter](/reference/taosadapter/).
-
-:::
-
 ### http
 
 | Attribute | Description |
|
||||||
| Applicable | Server and Client |
|
| Applicable | Server and Client |
|
||||||
| Meaning | Log level of common module |
|
| Meaning | Log level of common module |
|
||||||
| Value Range | Same as debugFlag |
|
| Value Range | Same as debugFlag |
|
||||||
| Default Value | |
|
| Default Value | | |
|
||||||
|
|
||||||
### httpDebugFlag
|
|
||||||
|
|
||||||
| Attribute | Description |
|
|
||||||
| ------------- | ------------------------------------------- |
|
|
||||||
| Applicable | Server Only |
|
|
||||||
| Meaning | Log level of http module (prior to 2.4.0.0) |
|
|
||||||
| Value Range | Same as debugFlag |
|
|
||||||
| Default Value | |
|
|
||||||
|
|
||||||
### mqttDebugFlag
|
### mqttDebugFlag
|
||||||
|
|
||||||
|
|
|
@@ -29,11 +29,6 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by default.
 - _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
 - _taosd-dump-cfg.gdb_: script to facilitate debugging of taosd's gdb execution.
 
-:::note
-taosdump after version 2.4.0.0 require taosTools as a standalone installation. A new version of taosBenchmark is include in taosTools too.
-
-:::
-
 :::tip
 You can configure different data directories and log directories by modifying the system configuration file `taos.cfg`.
 
@@ -1,285 +0,0 @@
----
-sidebar_label: TDengine in Docker
-title: Deploy TDengine in Docker
----
-
-We do not recommend deploying TDengine using Docker in a production system. However, Docker is still very useful in a development environment, especially when your host is not Linux. From version 2.0.14.0, the official image of TDengine can support X86-64, X86, arm64, and rm32 .
-
-In this chapter we introduce a simple step by step guide to use TDengine in Docker.
-
-## Install Docker
-
-To install Docker please refer to [Get Docker](https://docs.docker.com/get-docker/).
-
-After Docker is installed, you can check whether Docker is installed properly by displaying Docker version.
-
-```bash
-$ docker -v
-Docker version 20.10.3, build 48d30b5
-```
-
-## Launch TDengine in Docker
-
-### Launch TDengine Server
-
-```bash
-$ docker run -d -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd
-```
-
-In the above command, a docker container is started to run TDengine server, the port range 6030-6049 of the container is mapped to host port range 6030-6049. If port range 6030-6049 has been occupied on the host, please change to an available host port range. For port requirements on the host, please refer to [Port Configuration](/reference/config/#serverport).
-
-- **docker run**: Launch a docker container
-- **-d**: the container will run in background mode
-- **-p**: port mapping
-- **tdengine/tdengine**: The image from which to launch the container
-- **526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd**: the container ID if successfully launched.
-
-Furthermore, `--name` can be used with `docker run` to specify name for the container, `--hostname` can be used to specify hostname for the container, `-v` can be used to mount local volumes to the container so that the data generated inside the container can be persisted to disk on the host.
-
-```bash
-docker run -d --name tdengine --hostname="tdengine-server" -v ~/work/taos/log:/var/log/taos -v ~/work/taos/data:/var/lib/taos -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine
-```
-
-- **--name tdengine**: specify the name of the container, the name can be used to specify the container later
-- **--hostname=tdengine-server**: specify the hostname inside the container, the hostname can be used inside the container without worrying the container IP may vary
-- **-v**: volume mapping between host and container
-
-### Check the container
-
-```bash
-docker ps
-```
-
-The output is like below:
-
-```
-CONTAINER ID   IMAGE               COMMAND   CREATED          STATUS          ···
-c452519b0f9b   tdengine/tdengine   "taosd"   14 minutes ago   Up 14 minutes   ···
-```
-
-- **docker ps**: List all the containers
-- **CONTAINER ID**: Container ID
-- **IMAGE**: The image used for the container
-- **COMMAND**: The command used when launching the container
-- **CREATED**: When the container was created
-- **STATUS**: Status of the container
-
-### Access TDengine inside container
-
-```bash
-$ docker exec -it tdengine /bin/bash
-root@tdengine-server:~/TDengine-server-2.4.0.4#
-```
-
-- **docker exec**: Attach to the container
-- **-i**: Interactive mode
-- **-t**: Use terminal
-- **tdengine**: Container name, up to the output of `docker ps`
-- **/bin/bash**: The command to execute once the container is attached
-
-Inside the container, start TDengine CLI `taos`
-
-```bash
-root@tdengine-server:~/TDengine-server-2.4.0.4# taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-The above example is for a successful connection. If `taos` fails to connect to the server side, error information would be shown.
-
-In TDengine CLI, SQL commands can be executed to create/drop databases, tables, STables, and insert or query data. For details please refer to [TAOS SQL](/taos-sql/).
-
-### Access TDengine from host
-
-If option `-p` used to map ports properly between host and container, it's also able to access TDengine in container from the host as long as `firstEp` is configured correctly for the client on host.
-
-```
-$ taos
-
-Welcome to the TDengine shell from Linux, Client Version:2.4.0.4
-Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.
-
-taos>
-```
-
-It's also able to access the REST interface provided by TDengine in container from the host.
-
-```
-curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
-```
-
-Output is like below:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["test","2021-08-18 06:01:11.021",10000,4,1,1,10,"3650,3650,3650",16,6,100,4096,1,3000,2,0,"ms",0,"ready"],["log","2021-08-18 05:51:51.065",4,1,1,1,10,"30,30,30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":2}
-```
-
-For details of REST API please refer to [REST API](/reference/rest-api/).
-
-### Run TDengine server and taosAdapter inside container
-
-From version 2.4.0.0, in the TDengine Docker image, `taosAdapter` is enabled by default, but can be disabled using environment variable `TAOS_DISABLE_ADAPTER=true` . `taosAdapter` can also be run alone without `taosd` when launching a container.
-
-For the port mapping of `taosAdapter`, please refer to [taosAdapter](/reference/taosadapter/).
-
-- Run both `taosd` and `taosAdapter` (by default) in docker container:
-
-```bash
-docker run -d --name tdengine-all -p 6030-6049:6030-6049 -p 6030-6049:6030-6049/udp tdengine/tdengine:2.4.0.4
-```
-
-- Run `taosAdapter` only in docker container, `TAOS_FIRST_EP` environment variable needs to be used to specify the container name in which `taosd` is running:
-
-```bash
-docker run -d --name tdengine-taosa -p 6041-6049:6041-6049 -p 6041-6049:6041-6049/udp -e TAOS_FIRST_EP=tdengine-all tdengine/tdengine:2.4.0.4 taosadapter
-```
-
-- Run `taosd` only in docker container:
-
-```bash
-docker run -d --name tdengine-taosd -p 6030-6042:6030-6042 -p 6030-6042:6030-6042/udp -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:2.4.0.4
-```
-
-- Verify the REST interface:
-
-```bash
-curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" 127.0.0.1:6041/rest/sql
-```
-
-Below is an example output:
-
-```
-{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[["log","2021-12-28 09:18:55.765",10,1,1,1,10,"30",1,3,100,4096,1,3000,2,0,"us",0,"ready"]],"rows":1}
-```
-
-### Use taosBenchmark on host to access TDengine server in container
-
-1. Run `taosBenchmark`, named as `taosdemo` previously, on the host:
-
-```bash
-$ taosBenchmark
-
-taosBenchmark is simulating data generated by power equipments monitoring...
-
-host: 127.0.0.1:6030
-user: root
-password: taosdata
-configDir:
-resultFile: ./output.txt
-thread num of insert data: 10
-thread num of create table: 10
-top insert interval: 0
-number of records per req: 30000
-max sql length: 1048576
-database count: 1
-database[0]:
-database[0] name: test
-drop: yes
-replica: 1
-precision: ms
-super table count: 1
-super table[0]:
-stbName: meters
-autoCreateTable: no
-childTblExists: no
-childTblCount: 10000
-childTblPrefix: d
-dataSource: rand
-iface: taosc
-insertRows: 10000
-interlaceRows: 0
-disorderRange: 1000
-disorderRatio: 0
-maxSqlLen: 1048576
-timeStampStep: 1
-startTimestamp: 2017-07-14 10:40:00.000
-sampleFormat:
-sampleFile:
-tagsFile:
-columnCount: 3
-column[0]:FLOAT column[1]:INT column[2]:FLOAT
-tagCount: 2
-tag[0]:INT tag[1]:BINARY(16)
-
-Press enter key to continue or Ctrl-C to stop
-```
-
-Once the execution is finished, a database `test` is created, a STable `meters` is created in database `test`, 10,000 sub tables are created using `meters` as template, named as "d0" to "d9999", while 10,000 rows are inserted into each table, so totally 100,000,000 rows are inserted.
-
-2. Check the data
-
-- **Check database**
-
-```bash
-$ taos> show databases;
-  name        |      created_time       |   ntables   |   vgroups   |  ···
-  test        | 2021-08-18 06:01:11.021 |       10000 |           6 |  ···
-  log         | 2021-08-18 05:51:51.065 |           4 |           1 |  ···
-
-```
-
-- **Check STable**
-
-```bash
-$ taos> use test;
-Database changed.
-
-$ taos> show stables;
-              name              |      created_time       | columns |  tags  |   tables    |
-============================================================================================
- meters                         | 2021-08-18 06:01:11.116 |       4 |      2 |       10000 |
-Query OK, 1 row(s) in set (0.003259s)
-
-```
-
-- **Check Tables**
-
-```bash
-$ taos> select * from test.t0 limit 10;
-
-DB error: Table does not exist (0.002857s)
-taos> select * from test.d0 limit 10;
-           ts            |       current        |   voltage   |        phase         |
-======================================================================================
- 2017-07-14 10:40:00.000 |             10.12072 |         223 |              0.34167 |
- 2017-07-14 10:40:00.001 |             10.16103 |         224 |              0.34445 |
- 2017-07-14 10:40:00.002 |             10.00204 |         220 |              0.33334 |
- 2017-07-14 10:40:00.003 |             10.00030 |         220 |              0.33333 |
- 2017-07-14 10:40:00.004 |              9.84029 |         216 |              0.32222 |
- 2017-07-14 10:40:00.005 |              9.88028 |         217 |              0.32500 |
- 2017-07-14 10:40:00.006 |              9.88110 |         217 |              0.32500 |
- 2017-07-14 10:40:00.007 |             10.08137 |         222 |              0.33889 |
- 2017-07-14 10:40:00.008 |             10.12063 |         223 |              0.34167 |
- 2017-07-14 10:40:00.009 |             10.16086 |         224 |              0.34445 |
-Query OK, 10 row(s) in set (0.016791s)
-
-```
-
-- **Check tag values of table d0**
-
-```bash
-$ taos> select groupid, location from test.d0;
-   groupid   |     location     |
-=================================
-           0 | California.SanDiego |
-Query OK, 1 row(s) in set (0.003490s)
-```
-
-### Access TDengine from 3rd party tools
-
-A lot of 3rd party tools can be used to write data into TDengine through `taosAdapter`, for details please refer to [3rd party tools](/third-party/).
-
-There is nothing different from the 3rd party side to access TDengine server inside a container, as long as the end point is specified correctly, the end point should be the FQDN and the mapped port of the host.
-
-## Stop TDengine inside container
-
-```bash
-docker stop tdengine
-```
-
-- **docker stop**: stop a container
-- **tdengine**: container name
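The two `curl` examples in the deleted Docker guide above authenticate in equivalent ways: `-u root:taosdata` and the header `Authorization: Basic cm9vdDp0YW9zZGF0YQ==`. The header value is just the base64 encoding of the `user:password` pair, which a quick Python sketch can show:

```python
import base64

# Derive the HTTP Basic auth header value used in the REST examples above.
def basic_auth_header(user, password):
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    return "Basic " + token

print(basic_auth_header("root", "taosdata"))  # Basic cm9vdDp0YW9zZGF0YQ==
```

So the hard-coded token in the example is not a secret of its own; anyone changing the default password also has to regenerate this header.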
@@ -0,0 +1,9 @@
+---
+sidebar_label: Releases
+title: Released Versions
+---
+
+import Release from "/components/ReleaseV3";
+
+
+<Release versionPrefix="3.0" />
@@ -187,7 +187,7 @@ Time zones of timestamps in TDengine are always handled by the client, regardless of the server
 
 ### 17. Why is the RESTful interface unresponsive, why can't Grafana add TDengine as a data source, and why can't TDengineGUI connect even with port 6041 selected?
 
-taosAdapter became part of the TDengine server software as of TDengine 2.4.0.0, acting as the bridge and adapter between TDengine clusters and applications. Before that, the RESTful interface and related features were provided by the HTTP service built into taosd; now, to enable them, run the `systemctl start taosadapter` command to start the taosAdapter service.
+This is usually because taosAdapter has not been started properly; run the `systemctl start taosadapter` command to start the taosAdapter service.
 
 Note that the log path of taosAdapter needs to be configured separately via `path`; the default path is /var/log/taos. `logLevel` has 8 levels, with a default of info; setting it to panic disables log output. Pay attention to the free space of the operating system's / directory. The configuration can be changed via command-line parameters, environment variables, or the configuration file; the default configuration file is /etc/taos/taosadapter.toml.
 
@@ -17,7 +17,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.34</version>
+            <version>3.0.0</version>
         </dependency>
     </dependencies>
 
@@ -47,7 +47,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.18</version>
+            <version>3.0.0</version>
         </dependency>
 
     </dependencies>
@@ -10,7 +10,7 @@
 ```xml
 <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
     <property name="driverClassName" value="com.taosdata.jdbc.TSDBDriver"></property>
-    <property name="url" value="jdbc:TAOS://127.0.0.1:6030/log"></property>
+    <property name="url" value="jdbc:TAOS://127.0.0.1:6030/test"></property>
     <property name="username" value="root"></property>
     <property name="password" value="taosdata"></property>
 </bean>
@@ -28,5 +28,5 @@ mvn clean package
 ```
 After packaging succeeds, run the test with the following command:
 ```shell
-java -jar SpringJdbcTemplate-1.0-SNAPSHOT-jar-with-dependencies.jar
+java -jar target/SpringJdbcTemplate-1.0-SNAPSHOT-jar-with-dependencies.jar
 ```
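The datasource URLs in these connector examples all follow the shape `jdbc:TAOS://<host>:<port>/<database>`. A tiny illustrative parser for that shape (it deliberately ignores the optional parameter string such as `?charset=...` seen in some configs, and is my own sketch, not part of any TDengine connector):

```python
import re

# Parse a TDengine JDBC URL of the basic form jdbc:TAOS://host:port/database.
def parse_taos_jdbc_url(url):
    m = re.fullmatch(r"jdbc:TAOS://([^:/]+):(\d+)/(\w+)", url)
    if not m:
        raise ValueError("not a recognized TDengine JDBC URL: " + url)
    host, port, db = m.groups()
    return {"host": host, "port": int(port), "database": db}

conf = parse_taos_jdbc_url("jdbc:TAOS://127.0.0.1:6030/test")
print(conf)  # {'host': '127.0.0.1', 'port': 6030, 'database': 'test'}
```

Note that port 6030 here is the native connection port of `taosd`; the REST interface uses 6041 via taosAdapter instead.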
@@ -28,7 +28,7 @@ public class App {
         //use database
         executor.doExecute("use test");
         // create table
-        executor.doExecute("create table if not exists test.weather (ts timestamp, temperature int, humidity float)");
+        executor.doExecute("create table if not exists test.weather (ts timestamp, temperature float, humidity int)");
 
         WeatherDao weatherDao = ctx.getBean(WeatherDao.class);
         Weather weather = new Weather(new Timestamp(new Date().getTime()), random.nextFloat() * 50.0f, random.nextInt(100));
@@ -41,7 +41,7 @@ public class BatcherInsertTest {
         //use database
         executor.doExecute("use test");
         // create table
-        executor.doExecute("create table if not exists test.weather (ts timestamp, temperature int, humidity float)");
+        executor.doExecute("create table if not exists test.weather (ts timestamp, temperature float, humidity int)");
     }
 
     @Test
@@ -13,13 +13,13 @@ ConnectionPoolDemo program logic:
 ### How to run this example:
 
 ```shell script
-mvn clean package assembly:single
-java -jar target/connectionPools-1.0-SNAPSHOT-jar-with-dependencies.jar -host 127.0.0.1
+mvn clean package
+java -jar target/ConnectionPoolDemo-jar-with-dependencies.jar -host 127.0.0.1
 ```
 Run the main method of ConnectionPoolDemo with mvn; the following parameters can be specified:
 ```shell script
 Usage:
-java -jar target/connectionPools-1.0-SNAPSHOT-jar-with-dependencies.jar
+java -jar target/ConnectionPoolDemo-jar-with-dependencies.jar
 -host : hostname
 -poolType <c3p0| dbcp| druid| hikari>
 -poolSize <poolSize>
@@ -18,7 +18,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.18</version>
+            <version>3.0.0</version>
         </dependency>
         <!-- druid -->
         <dependency>
@@ -47,7 +47,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.18</version>
+            <version>3.0.0</version>
         </dependency>
 
         <dependency>
@@ -0,0 +1,14 @@
+# Usage
+
+## Create and use the database
+```shell
+$ taos
+
+> create database mp_test
+```
+
+## Run the test cases
+
+```shell
+$ mvn clean test
+```
@@ -2,7 +2,17 @@ package com.taosdata.example.mybatisplusdemo.mapper;
 
 import com.baomidou.mybatisplus.core.mapper.BaseMapper;
 import com.taosdata.example.mybatisplusdemo.domain.Weather;
+import org.apache.ibatis.annotations.Insert;
+import org.apache.ibatis.annotations.Update;
 
 public interface WeatherMapper extends BaseMapper<Weather> {
 
+    @Update("CREATE TABLE if not exists weather(ts timestamp, temperature float, humidity int, location nchar(100))")
+    int createTable();
+
+    @Insert("insert into weather (ts, temperature, humidity, location) values(#{ts}, #{temperature}, #{humidity}, #{location})")
+    int insertOne(Weather one);
+
+    @Update("drop table if exists weather")
+    void dropTable();
 }
 
@@ -2,7 +2,7 @@ spring:
   datasource:
     driver-class-name: com.taosdata.jdbc.TSDBDriver
    url: jdbc:TAOS://localhost:6030/mp_test?charset=UTF-8&locale=en_US.UTF-8&timezone=UTC-8
-    user: root
+    username: root
     password: taosdata
 
     druid:
@@ -82,27 +82,15 @@ public class TemperatureMapperTest {
         Assert.assertEquals(1, affectRows);
     }
 
-    /***
-     * test SelectOne
-     * **/
-    @Test
-    public void testSelectOne() {
-        QueryWrapper<Temperature> wrapper = new QueryWrapper<>();
-        wrapper.eq("location", "beijing");
-        Temperature one = mapper.selectOne(wrapper);
-        System.out.println(one);
-        Assert.assertNotNull(one);
-    }
-
     /***
      * test select By map
      * ***/
     @Test
     public void testSelectByMap() {
         Map<String, Object> map = new HashMap<>();
-        map.put("location", "beijing");
+        map.put("location", "北京");
         List<Temperature> temperatures = mapper.selectByMap(map);
-        Assert.assertEquals(1, temperatures.size());
+        Assert.assertTrue(temperatures.size() > 1);
     }
 
     /***
@@ -120,7 +108,7 @@ public class TemperatureMapperTest {
     @Test
     public void testSelectCount() {
         int count = mapper.selectCount(null);
-        Assert.assertEquals(5, count);
+        Assert.assertEquals(10, count);
     }
 
     /****
@@ -6,6 +6,7 @@ import com.baomidou.mybatisplus.extension.plugins.pagination.Page;
 import com.taosdata.example.mybatisplusdemo.domain.Weather;
 import org.junit.Assert;
 import org.junit.Test;
+import org.junit.Before;
 import org.junit.runner.RunWith;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.boot.test.context.SpringBootTest;
@@ -26,6 +27,18 @@ public class WeatherMapperTest {
     @Autowired
     private WeatherMapper mapper;

+    @Before
+    public void createTable(){
+        mapper.dropTable();
+        mapper.createTable();
+        Weather one = new Weather();
+        one.setTs(new Timestamp(1605024000000l));
+        one.setTemperature(12.22f);
+        one.setLocation("望京");
+        one.setHumidity(100);
+        mapper.insertOne(one);
+    }
+
     @Test
     public void testSelectList() {
         List<Weather> weathers = mapper.selectList(null);
@@ -46,20 +59,20 @@ public class WeatherMapperTest {
     @Test
     public void testSelectOne() {
         QueryWrapper<Weather> wrapper = new QueryWrapper<>();
-        wrapper.eq("location", "beijing");
+        wrapper.eq("location", "望京");
         Weather one = mapper.selectOne(wrapper);
         System.out.println(one);
         Assert.assertEquals(12.22f, one.getTemperature(), 0.00f);
-        Assert.assertEquals("beijing", one.getLocation());
+        Assert.assertEquals("望京", one.getLocation());
     }

-    @Test
-    public void testSelectByMap() {
-        Map<String, Object> map = new HashMap<>();
-        map.put("location", "beijing");
-        List<Weather> weathers = mapper.selectByMap(map);
-        Assert.assertEquals(1, weathers.size());
-    }
+    // @Test
+    // public void testSelectByMap() {
+    //     Map<String, Object> map = new HashMap<>();
+    //     map.put("location", "beijing");
+    //     List<Weather> weathers = mapper.selectByMap(map);
+    //     Assert.assertEquals(1, weathers.size());
+    // }

     @Test
     public void testSelectObjs() {
@@ -10,4 +10,4 @@
 | 6 | taosdemo | This is an internal tool for testing Our JDBC-JNI, JDBC-RESTful, RESTful interfaces |


-more detail: https://www.taosdata.com/cn//documentation20/connector-java/
+more detail: https://docs.taosdata.com/reference/connector/java/
@@ -68,7 +68,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.34</version>
+            <version>3.0.0</version>
         </dependency>

         <dependency>
@@ -1,10 +1,11 @@
 ## TDengine SpringBoot + Mybatis Demo

+## 需要提前创建 test 数据库
 ### 配置 application.properties
 ```properties
 # datasource config
 spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
-spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/log
+spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test
 spring.datasource.username=root
 spring.datasource.password=taosdata

@@ -6,7 +6,6 @@ import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.web.bind.annotation.*;

 import java.util.List;
-import java.util.Map;

 @RequestMapping("/weather")
 @RestController
@@ -10,8 +10,7 @@
     </resultMap>

     <select id="lastOne" resultType="java.util.Map">
-        select last_row(*), location, groupid
-        from test.weather
+        select last_row(ts) as ts, last_row(temperature) as temperature, last_row(humidity) as humidity, last_row(note) as note,last_row(location) as location , last_row(groupid) as groupid from test.weather;
     </select>

     <update id="dropDB">
@@ -5,7 +5,7 @@
 #spring.datasource.password=taosdata
 # datasource config - JDBC-RESTful
 spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
-spring.datasource.url=jdbc:TAOS-RS://localhsot:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
+spring.datasource.url=jdbc:TAOS-RS://localhost:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
 spring.datasource.username=root
 spring.datasource.password=taosdata
 spring.datasource.druid.initial-size=5
@@ -67,7 +67,7 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.20</version>
+            <version>3.0.0</version>
             <!-- <scope>system</scope>-->
             <!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
         </dependency>
@@ -2,9 +2,9 @@
 cd tests/examples/JDBC/taosdemo
 mvn clean package -Dmaven.test.skip=true
 # 先建表,再插入的
-java -jar target/taosdemo-2.0-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable true -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
+java -jar target/taosdemo-2.0.1-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable true -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
 # 不建表,直接插入的
-java -jar target/taosdemo-2.0-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable false -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
+java -jar target/taosdemo-2.0.1-jar-with-dependencies.jar -host [hostname] -database [database] -doCreateTable false -superTableSQL "create table weather(ts timestamp, f1 int) tags(t1 nchar(4))" -numOfTables 1000 -numOfRowsPerTable 100000000 -numOfThreadsForInsert 10 -numOfTablesPerSQL 10 -numOfValuesPerSQL 100
 ```

 需求:
@@ -32,8 +32,10 @@ public class TaosDemoApplication {
             System.exit(0);
         }
         // 初始化
-        final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user, config.password);
-        if (config.executeSql != null && !config.executeSql.isEmpty() && !config.executeSql.replaceAll("\\s", "").isEmpty()) {
+        final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user,
+                config.password);
+        if (config.executeSql != null && !config.executeSql.isEmpty()
+                && !config.executeSql.replaceAll("\\s", "").isEmpty()) {
             Thread task = new Thread(new SqlExecuteTask(dataSource, config.executeSql));
             task.start();
             try {
@@ -70,11 +72,13 @@ public class TaosDemoApplication {
             if (config.database != null && !config.database.isEmpty())
                 superTableMeta.setDatabase(config.database);
         } else if (config.numOfFields == 0) {
-            String sql = "create table " + config.database + "." + config.superTable + " (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
+            String sql = "create table " + config.database + "." + config.superTable
+                    + " (ts timestamp, temperature float, humidity int) tags(location nchar(64), groupId int)";
             superTableMeta = SuperTableMetaGenerator.generate(sql);
         } else {
             // create super table with specified field size and tag size
-            superTableMeta = SuperTableMetaGenerator.generate(config.database, config.superTable, config.numOfFields, config.prefixOfFields, config.numOfTags, config.prefixOfTags);
+            superTableMeta = SuperTableMetaGenerator.generate(config.database, config.superTable, config.numOfFields,
+                    config.prefixOfFields, config.numOfTags, config.prefixOfTags);
         }
         /**********************************************************************************/
         // 建表
@@ -84,7 +88,8 @@ public class TaosDemoApplication {
             superTableService.create(superTableMeta);
             if (!config.autoCreateTable) {
                 // 批量建子表
-                subTableService.createSubTable(superTableMeta, config.numOfTables, config.prefixOfTable, config.numOfThreadsForCreate);
+                subTableService.createSubTable(superTableMeta, config.numOfTables, config.prefixOfTable,
+                        config.numOfThreadsForCreate);
             }
         }
         end = System.currentTimeMillis();
@@ -93,7 +98,7 @@ public class TaosDemoApplication {
         // 插入
         long tableSize = config.numOfTables;
         int threadSize = config.numOfThreadsForInsert;
-        long startTime = getProperStartTime(config.startTime, config.keep);
+        long startTime = getProperStartTime(config.startTime, config.days);

         if (tableSize < threadSize)
             threadSize = (int) tableSize;
@@ -101,13 +106,13 @@ public class TaosDemoApplication {

         start = System.currentTimeMillis();
         // multi threads to insert
-        int affectedRows = subTableService.insertMultiThreads(superTableMeta, threadSize, tableSize, startTime, gap, config);
+        int affectedRows = subTableService.insertMultiThreads(superTableMeta, threadSize, tableSize, startTime, gap,
+                config);
         end = System.currentTimeMillis();
         logger.info("insert " + affectedRows + " rows, time cost: " + (end - start) + " ms");
         /**********************************************************************************/
         // 查询


         /**********************************************************************************/
         // 删除表
         if (config.dropTable) {
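The first hunk above only rewraps the over-long `executeSql` guard; the condition itself (non-null, non-empty, and not whitespace-only) is unchanged by the reformat. A minimal, self-contained illustration of that check, with a hypothetical helper name not present in the original:

```java
public class ExecuteSqlGuardSketch {
    // Same test as in TaosDemoApplication: run ad-hoc SQL only when the
    // string is non-null, non-empty, and not whitespace-only.
    public static boolean shouldExecute(String executeSql) {
        return executeSql != null && !executeSql.isEmpty()
                && !executeSql.replaceAll("\\s", "").isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(shouldExecute("select 1")); // true
        System.out.println(shouldExecute("   "));      // false: whitespace-only
        System.out.println(shouldExecute(null));       // false
    }
}
```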
@@ -1,7 +1,5 @@
 package com.taosdata.taosdemo.service;

-import com.taosdata.jdbc.utils.SqlSyntaxValidator;
-
 import javax.sql.DataSource;
 import java.sql.*;
 import java.util.ArrayList;
@@ -23,10 +21,6 @@ public class QueryService {
         Boolean[] ret = new Boolean[sqls.length];
         for (int i = 0; i < sqls.length; i++) {
             ret[i] = true;
-            if (!SqlSyntaxValidator.isValidForExecuteQuery(sqls[i])) {
-                ret[i] = false;
-                continue;
-            }
             try (Connection conn = dataSource.getConnection(); Statement stmt = conn.createStatement()) {
                 stmt.executeQuery(sqls[i]);
             } catch (SQLException e) {
@@ -15,9 +15,12 @@ public class SqlSpeller {
         StringBuilder sb = new StringBuilder();
         sb.append("create database if not exists ").append(map.get("database")).append(" ");
         if (map.containsKey("keep"))
-            sb.append("keep ").append(map.get("keep")).append(" ");
-        if (map.containsKey("days"))
-            sb.append("days ").append(map.get("days")).append(" ");
+            sb.append("keep ");
+        if (map.containsKey("days")) {
+            sb.append(map.get("days")).append("d ");
+        } else {
+            sb.append(" ");
+        }
         if (map.containsKey("replica"))
             sb.append("replica ").append(map.get("replica")).append(" ");
         if (map.containsKey("cache"))
@@ -29,7 +32,7 @@ public class SqlSpeller {
         if (map.containsKey("maxrows"))
             sb.append("maxrows ").append(map.get("maxrows")).append(" ");
         if (map.containsKey("precision"))
-            sb.append("precision ").append(map.get("precision")).append(" ");
+            sb.append("precision '").append(map.get("precision")).append("' ");
         if (map.containsKey("comp"))
             sb.append("comp ").append(map.get("comp")).append(" ");
         if (map.containsKey("walLevel"))
@@ -46,8 +49,10 @@ public class SqlSpeller {
     // create table if not exists xx.xx using xx.xx tags(x,x,x)
     public static String createTableUsingSuperTable(SubTableMeta subTableMeta) {
         StringBuilder sb = new StringBuilder();
-        sb.append("create table if not exists ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getName()).append(" ");
-        sb.append("using ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getSupertable()).append(" ");
+        sb.append("create table if not exists ").append(subTableMeta.getDatabase()).append(".")
+                .append(subTableMeta.getName()).append(" ");
+        sb.append("using ").append(subTableMeta.getDatabase()).append(".").append(subTableMeta.getSupertable())
+                .append(" ");
         // String tagStr = subTableMeta.getTags().stream().filter(Objects::nonNull)
         //         .map(tagValue -> tagValue.getName() + " '" + tagValue.getValue() + "' ")
         //         .collect(Collectors.joining(",", "(", ")"));
@@ -89,8 +94,10 @@ public class SqlSpeller {
     // insert into xx.xxx using xx.xx tags(x,x,x) values(x,x,x),(x,x,x)...
     public static String insertOneTableMultiValuesUsingSuperTable(SubTableValue subTableValue) {
         StringBuilder sb = new StringBuilder();
-        sb.append("insert into ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getName()).append(" ");
-        sb.append("using ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getSupertable()).append(" ");
+        sb.append("insert into ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getName())
+                .append(" ");
+        sb.append("using ").append(subTableValue.getDatabase()).append(".").append(subTableValue.getSupertable())
+                .append(" ");
         sb.append("tags ").append(tagValues(subTableValue.getTags()) + " ");
         sb.append("values ").append(rowValues(subTableValue.getValues()));
         return sb.toString();
@@ -126,7 +133,8 @@ public class SqlSpeller {
     // create table if not exists xx.xx (f1 xx,f2 xx...) tags(t1 xx, t2 xx...)
     public static String createSuperTable(SuperTableMeta tableMetadata) {
         StringBuilder sb = new StringBuilder();
-        sb.append("create table if not exists ").append(tableMetadata.getDatabase()).append(".").append(tableMetadata.getName());
+        sb.append("create table if not exists ").append(tableMetadata.getDatabase()).append(".")
+                .append(tableMetadata.getName());
         String fields = tableMetadata.getFields().stream()
                 .filter(Objects::nonNull).map(field -> field.getName() + " " + field.getType() + " ")
                 .collect(Collectors.joining(",", "(", ")"));
@@ -139,10 +147,10 @@ public class SqlSpeller {
         return sb.toString();
     }


     public static String createTable(TableMeta tableMeta) {
         StringBuilder sb = new StringBuilder();
-        sb.append("create table if not exists ").append(tableMeta.getDatabase()).append(".").append(tableMeta.getName()).append(" ");
+        sb.append("create table if not exists ").append(tableMeta.getDatabase()).append(".").append(tableMeta.getName())
+                .append(" ");
         String fields = tableMeta.getFields().stream()
                 .filter(Objects::nonNull).map(field -> field.getName() + " " + field.getType() + " ")
                 .collect(Collectors.joining(",", "(", ")"));
@@ -179,16 +187,17 @@ public class SqlSpeller {
     public static String insertMultiTableMultiValuesWithColumns(List<TableValue> tables) {
         StringBuilder sb = new StringBuilder();
         sb.append("insert into ").append(tables.stream().filter(Objects::nonNull)
-                .map(table -> table.getDatabase() + "." + table.getName() + " " + columnNames(table.getColumns()) + " values " + rowValues(table.getValues()))
+                .map(table -> table.getDatabase() + "." + table.getName() + " " + columnNames(table.getColumns())
+                        + " values " + rowValues(table.getValues()))
                 .collect(Collectors.joining(" ")));
         return sb.toString();
     }

     public static String insertMultiTableMultiValues(List<TableValue> tables) {
         StringBuilder sb = new StringBuilder();
-        sb.append("insert into ").append(tables.stream().filter(Objects::nonNull).map(table ->
-                table.getDatabase() + "." + table.getName() + " values " + rowValues(table.getValues())
-        ).collect(Collectors.joining(" ")));
+        sb.append("insert into ").append(tables.stream().filter(Objects::nonNull)
+                .map(table -> table.getDatabase() + "." + table.getName() + " values " + rowValues(table.getValues()))
+                .collect(Collectors.joining(" ")));
         return sb.toString();
     }
 }
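The first `SqlSpeller` hunk above switches the generated `create database` statement toward TDengine 3.0 syntax: the `days` option is emitted as a duration literal such as `10d`, and `precision` takes a quoted value. A minimal, hypothetical sketch of just those clauses (the real class builds many more options, and the class/method names here are illustrative only):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CreateDatabaseSpellerSketch {
    // Mirrors the post-change branch of SqlSpeller.createDatabase:
    // "days" becomes a duration literal ("10d") and "precision" is quoted.
    public static String createDatabase(Map<String, Object> map) {
        StringBuilder sb = new StringBuilder();
        sb.append("create database if not exists ").append(map.get("database")).append(" ");
        if (map.containsKey("keep"))
            sb.append("keep ");
        if (map.containsKey("days")) {
            sb.append(map.get("days")).append("d ");                             // duration literal
        } else {
            sb.append(" ");
        }
        if (map.containsKey("precision"))
            sb.append("precision '").append(map.get("precision")).append("' "); // quoted value
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> map = new LinkedHashMap<>();
        map.put("database", "test");
        map.put("keep", 36500);
        map.put("days", 10);
        map.put("precision", "ms");
        // prints: create database if not exists test keep 10d precision 'ms'
        System.out.println(createDatabase(map));
    }
}
```

Note that, as in the diff, the `keep` value itself is no longer spelled out; the duration from `days` follows the `keep` keyword directly.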
@@ -1,5 +1,5 @@
-jdbc.driver=com.taosdata.jdbc.rs.RestfulDriver
-#jdbc.driver=com.taosdata.jdbc.TSDBDriver
+# jdbc.driver=com.taosdata.jdbc.rs.RestfulDriver
+jdbc.driver=com.taosdata.jdbc.TSDBDriver
 hikari.maximum-pool-size=20
 hikari.minimum-idle=20
 hikari.max-lifetime=0
@@ -1,31 +0,0 @@
-package com.taosdata.taosdemo.service;
-
-import com.taosdata.taosdemo.domain.TableMeta;
-import org.junit.Before;
-import org.junit.Test;
-
-import java.util.ArrayList;
-import java.util.List;
-
-public class TableServiceTest {
-    private TableService tableService;
-
-    private List<TableMeta> tables;
-
-    @Before
-    public void before() {
-        tables = new ArrayList<>();
-        for (int i = 0; i < 1; i++) {
-            TableMeta tableMeta = new TableMeta();
-            tableMeta.setDatabase("test");
-            tableMeta.setName("weather" + (i + 1));
-            tables.add(tableMeta);
-        }
-    }
-
-    @Test
-    public void testCreate() {
-        tableService.create(tables);
-    }
-
-}
@@ -142,6 +142,7 @@ typedef struct SqlFunctionCtx {
   struct SSDataBlock *pDstBlock; // used by indifinite rows function to set selectivity
   int32_t             curBufPage;
   bool                increase;
+  bool                isStream;

   char                udfName[TSDB_FUNC_NAME_LEN];
 } SqlFunctionCtx;
@@ -238,6 +238,9 @@ int32_t parseSql(SRequestObj* pRequest, bool topicQuery, SQuery** pQuery, SStmtC
     TSWAP(pRequest->targetTableList, (*pQuery)->pTargetTableList);
   }

+  taosArrayDestroy(cxt.pTableMetaPos);
+  taosArrayDestroy(cxt.pTableVgroupPos);
+
   return code;
 }
@@ -674,6 +674,8 @@ static void destorySqlParseWrapper(SqlParseWrapper *pWrapper) {
   taosArrayDestroy(pWrapper->catalogReq.pIndex);
   taosArrayDestroy(pWrapper->catalogReq.pUser);
   taosArrayDestroy(pWrapper->catalogReq.pTableIndex);
+  taosArrayDestroy(pWrapper->pCtx->pTableMetaPos);
+  taosArrayDestroy(pWrapper->pCtx->pTableVgroupPos);
   taosMemoryFree(pWrapper->pCtx);
   taosMemoryFree(pWrapper);
 }
@@ -69,8 +69,10 @@ typedef struct SIOCostSummary {
   double  buildmemBlock;
   int64_t headFileLoad;
   double  headFileLoadTime;
-  int64_t smaData;
+  int64_t smaDataLoad;
   double  smaLoadTime;
+  int64_t lastBlockLoad;
+  double  lastBlockLoadTime;
 } SIOCostSummary;

 typedef struct SBlockLoadSuppInfo {
@@ -728,7 +730,7 @@ static int32_t doLoadFileBlock(STsdbReader* pReader, SArray* pIndexList, SArray*

   double el = (taosGetTimestampUs() - st) / 1000.0;
   tsdbDebug("load block of %d tables completed, blocks:%d in %d tables, lastBlock:%d, size:%.2f Kb, elapsed time:%.2f ms %s",
-            numOfTables, total, numOfQTable, pBlockNum->numOfLastBlocks, sizeInDisk
+            numOfTables, pBlockNum->numOfBlocks, numOfQTable, pBlockNum->numOfLastBlocks, sizeInDisk
             / 1000.0, el, pReader->idStr);

   pReader->cost.numOfBlocks += total;
@@ -1303,9 +1305,23 @@ static bool fileBlockShouldLoad(STsdbReader* pReader, SFileDataBlockInfo* pFBloc
     overlapWithlastBlock = !(pBlock->maxKey.ts < pBlockL->minKey || pBlock->minKey.ts > pBlockL->maxKey);
   }

-  return (overlapWithNeighbor || hasDup || dataBlockPartiallyRequired(&pReader->window, &pReader->verRange, pBlock) ||
-          keyOverlapFileBlock(key, pBlock, &pReader->verRange) || (pBlock->nRow > pReader->capacity) ||
-          overlapWithDel || overlapWithlastBlock);
+  bool moreThanOutputCapacity = pBlock->nRow > pReader->capacity;
+  bool partiallyRequired = dataBlockPartiallyRequired(&pReader->window, &pReader->verRange, pBlock);
+  bool overlapWithKey = keyOverlapFileBlock(key, pBlock, &pReader->verRange);
+
+  bool loadDataBlock = (overlapWithNeighbor || hasDup || partiallyRequired || overlapWithKey ||
+                        moreThanOutputCapacity || overlapWithDel || overlapWithlastBlock);
+
+  // log the reason why load the datablock for profile
+  if (loadDataBlock) {
+    tsdbDebug("%p uid:%" PRIu64
+              " need to load the datablock, overlapwithneighborblock:%d, hasDup:%d, partiallyRequired:%d, "
+              "overlapWithKey:%d, greaterThanBuf:%d, overlapWithDel:%d, overlapWithlastBlock:%d, %s",
+              pReader, pFBlock->uid, overlapWithNeighbor, hasDup, partiallyRequired, overlapWithKey,
+              moreThanOutputCapacity, overlapWithDel, overlapWithlastBlock, pReader->idStr);
+  }
+
+  return loadDataBlock;
 }

 static int32_t buildDataBlockFromBuf(STsdbReader* pReader, STableBlockScanInfo* pBlockScanInfo, int64_t endKey) {
|
||||||
if (pBlockData->nRow > 0) {
|
if (pBlockData->nRow > 0) {
|
||||||
TSDBROW fRow = tsdbRowFromBlockData(pBlockData, pDumpInfo->rowIndex);
|
TSDBROW fRow = tsdbRowFromBlockData(pBlockData, pDumpInfo->rowIndex);
|
||||||
|
|
||||||
// no last block
|
// no last block available, only data block exists
|
||||||
if (pLastBlockReader->lastBlockData.nRow == 0) {
|
if (pLastBlockReader->lastBlockData.nRow == 0 || (!hasDataInLastBlock(pLastBlockReader))) {
|
||||||
if (tryCopyDistinctRowFromFileBlock(pReader, pBlockData, key, pDumpInfo)) {
|
if (tryCopyDistinctRowFromFileBlock(pReader, pBlockData, key, pDumpInfo)) {
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
} else {
|
} else {
|
||||||
|
@ -2012,38 +2028,10 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI
|
||||||
|
|
||||||
// row in last file block
|
// row in last file block
|
||||||
int64_t ts = getCurrentKeyInLastBlock(pLastBlockReader);
|
int64_t ts = getCurrentKeyInLastBlock(pLastBlockReader);
|
||||||
if (ts < key) { // save rows in last block
|
ASSERT(ts >= key);
|
||||||
SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
|
|
||||||
|
|
||||||
STSRow* pTSRow = NULL;
|
if (ASCENDING_TRAVERSE(pReader->order)) {
|
||||||
SRowMerger merge = {0};
|
if (key < ts) {
|
||||||
|
|
||||||
TSDBROW fRow1 = tsdbRowFromBlockData(pLastBlockData, *pLastBlockReader->rowIndex);
|
|
||||||
|
|
||||||
tRowMergerInit(&merge, &fRow1, pReader->pSchema);
|
|
||||||
doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
|
|
||||||
tRowMergerGetRow(&merge, &pTSRow);
|
|
||||||
|
|
||||||
doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
|
|
||||||
|
|
||||||
taosMemoryFree(pTSRow);
|
|
||||||
tRowMergerClear(&merge);
|
|
||||||
return TSDB_CODE_SUCCESS;
|
|
||||||
} else if (ts == key) {
|
|
||||||
STSRow* pTSRow = NULL;
|
|
||||||
SRowMerger merge = {0};
|
|
||||||
|
|
||||||
tRowMergerInit(&merge, &fRow, pReader->pSchema);
|
|
||||||
doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
|
|
||||||
doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
|
|
||||||
|
|
||||||
tRowMergerGetRow(&merge, &pTSRow);
|
|
||||||
doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
|
|
||||||
|
|
||||||
taosMemoryFree(pTSRow);
|
|
||||||
tRowMergerClear(&merge);
|
|
||||||
return TSDB_CODE_SUCCESS;
|
|
||||||
} else { // ts > key, asc; todo handle desc
|
|
||||||
// imem & mem are all empty, only file exist
|
// imem & mem are all empty, only file exist
|
||||||
if (tryCopyDistinctRowFromFileBlock(pReader, pBlockData, key, pDumpInfo)) {
|
if (tryCopyDistinctRowFromFileBlock(pReader, pBlockData, key, pDumpInfo)) {
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
|
@ -2060,6 +2048,43 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI
|
||||||
tRowMergerClear(&merge);
|
tRowMergerClear(&merge);
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
|
} else if (key == ts) {
|
||||||
|
STSRow* pTSRow = NULL;
|
||||||
|
SRowMerger merge = {0};
|
||||||
|
|
||||||
|
tRowMergerInit(&merge, &fRow, pReader->pSchema);
|
||||||
|
doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
|
||||||
|
doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
|
||||||
|
|
||||||
|
tRowMergerGetRow(&merge, &pTSRow);
|
||||||
|
doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
|
||||||
|
|
||||||
|
taosMemoryFree(pTSRow);
|
||||||
|
tRowMergerClear(&merge);
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
} else {
|
||||||
|
ASSERT(0);
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
|
}
|
||||||
|
} else { // desc order
|
||||||
|
SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
|
||||||
|
TSDBROW fRow1 = tsdbRowFromBlockData(pLastBlockData, *pLastBlockReader->rowIndex);
|
||||||
|
|
||||||
|
STSRow* pTSRow = NULL;
|
||||||
|
SRowMerger merge = {0};
|
||||||
|
tRowMergerInit(&merge, &fRow1, pReader->pSchema);
|
||||||
|
doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, &merge);
|
||||||
|
|
||||||
|
if (ts == key) {
|
||||||
|
doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader, &merge);
|
||||||
|
}
|
||||||
|
|
||||||
|
tRowMergerGetRow(&merge, &pTSRow);
|
||||||
|
doAppendRowFromTSRow(pReader->pResBlock, pReader, pTSRow, pBlockScanInfo->uid);
|
||||||
|
|
||||||
|
taosMemoryFree(pTSRow);
|
||||||
|
tRowMergerClear(&merge);
|
||||||
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
} else { // only last block exists
|
} else { // only last block exists
|
||||||
SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
|
SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
|
||||||
|
@@ -2383,7 +2408,6 @@ static int32_t moveToNextFile(STsdbReader* pReader, SBlockNumber* pBlockNum) {
   return TSDB_CODE_SUCCESS;
 }
 
-// todo add elapsed time results
 static int32_t doLoadRelatedLastBlock(SLastBlockReader* pLastBlockReader, STableBlockScanInfo *pBlockScanInfo, STsdbReader* pReader) {
   SArray*  pBlocks = pLastBlockReader->pBlockL;
   SBlockL* pBlock = NULL;
@@ -2415,6 +2439,7 @@ static int32_t doLoadRelatedLastBlock(SLastBlockReader* pLastBlockReader, STable
     return TSDB_CODE_SUCCESS;
   }
 
+  int64_t st = taosGetTimestampUs();
   int32_t code = tBlockDataInit(&pLastBlockReader->lastBlockData, pReader->suid, pReader->suid ? 0 : uid, pReader->pSchema);
   if (code != TSDB_CODE_SUCCESS) {
     tsdbError("%p init block data failed, code:%s %s", pReader, tstrerror(code), pReader->idStr);
@@ -2422,17 +2447,23 @@ static int32_t doLoadRelatedLastBlock(SLastBlockReader* pLastBlockReader, STable
   }
 
   code = tsdbReadLastBlock(pReader->pFileReader, pBlock, &pLastBlockReader->lastBlockData);
 
+  double el = (taosGetTimestampUs() - st) / 1000.0;
   if (code != TSDB_CODE_SUCCESS) {
     tsdbError("%p error occurs in loading last block into buffer, last block index:%d, total:%d code:%s %s", pReader,
               pLastBlockReader->currentBlockIndex, totalLastBlocks, tstrerror(code), pReader->idStr);
   } else {
     tsdbDebug("%p load last block completed, uid:%" PRIu64
-              " last block index:%d, total:%d rows:%d, minVer:%d, maxVer:%d, brange:%" PRId64 " - %" PRId64 " %s",
-              pReader, uid, pLastBlockReader->currentBlockIndex, totalLastBlocks, pBlock->nRow, pBlock->minVer,
-              pBlock->maxVer, pBlock->minKey, pBlock->maxKey, pReader->idStr);
+              " last block index:%d, total:%d rows:%d, minVer:%d, maxVer:%d, brange:%" PRId64 "-%" PRId64
+              " elapsed time:%.2f ms, %s",
+              pReader, uid, index, totalLastBlocks, pBlock->nRow, pBlock->minVer, pBlock->maxVer, pBlock->minKey,
+              pBlock->maxKey, el, pReader->idStr);
   }
 
   pLastBlockReader->currentBlockIndex = index;
+  pReader->cost.lastBlockLoad += 1;
+  pReader->cost.lastBlockLoadTime += el;
 
   return TSDB_CODE_SUCCESS;
 }
 
@@ -2495,13 +2526,12 @@ static int32_t doLoadLastBlockSequentially(STsdbReader* pReader) {
 }
 
 static int32_t doBuildDataBlock(STsdbReader* pReader) {
+  TSDBKEY key = {0};
   int32_t code = TSDB_CODE_SUCCESS;
+  SBlock* pBlock = NULL;
 
   SReaderStatus*  pStatus = &pReader->status;
   SDataBlockIter* pBlockIter = &pStatus->blockIter;
 
-  TSDBKEY key = {0};
-  SBlock* pBlock = NULL;
   STableBlockScanInfo* pScanInfo = NULL;
   SFileDataBlockInfo*  pBlockInfo = getCurrentBlockInfo(pBlockIter);
   SLastBlockReader*    pLastBlockReader = pReader->status.fileIter.pLastBlockReader;
@@ -2554,6 +2584,14 @@ static int32_t doBuildDataBlock(STsdbReader* pReader) {
       // todo rows in buffer should be less than the file block in asc, greater than file block in desc
       int64_t endKey = (ASCENDING_TRAVERSE(pReader->order)) ? pBlock->minKey.ts : pBlock->maxKey.ts;
       code = buildDataBlockFromBuf(pReader, pScanInfo, endKey);
+    } else {
+      if (hasDataInLastBlock(pLastBlockReader) && !ASCENDING_TRAVERSE(pReader->order)) {
+        // only return the rows in last block
+        int64_t tsLast = getCurrentKeyInLastBlock(pLastBlockReader);
+        ASSERT(tsLast >= pBlock->maxKey.ts);
+        tBlockDataReset(&pReader->status.fileBlockData);
+
+        code = buildComposedDataBlock(pReader);
       } else {  // whole block is required, return it directly
         SDataBlockInfo* pInfo = &pReader->pResBlock->info;
         pInfo->rows = pBlock->nRow;
@@ -2562,6 +2600,7 @@ static int32_t doBuildDataBlock(STsdbReader* pReader) {
       setComposedBlockFlag(pReader, false);
       setBlockAllDumped(&pStatus->fBlockDumpInfo, pBlock->maxKey.ts, pReader->order);
     }
+  }
 
   return code;
 }
@@ -2628,7 +2667,7 @@ static int32_t initForFirstBlockInFile(STsdbReader* pReader, SDataBlockIter* pBl
   // initialize the block iterator for a new fileset
   if (num.numOfBlocks > 0) {
     code = initBlockIterator(pReader, pBlockIter, num.numOfBlocks);
-  } else {
+  } else {  // no block data, only last block exists
     tBlockDataReset(&pReader->status.fileBlockData);
     resetDataBlockIterator(pBlockIter, pReader->order, pReader->status.pTableMap);
   }
@@ -2701,7 +2740,6 @@ static int32_t buildBlockFromFiles(STsdbReader* pReader) {
     if (hasNext) {  // check for the next block in the block accessed order list
       initBlockDumpInfo(pReader, pBlockIter);
     } else if (taosArrayGetSize(pReader->status.fileIter.pLastBlockReader->pBlockL) > 0) {  // data blocks in current file are exhausted, let's try the next file now
-      // todo dump all data in last block if exists.
       tBlockDataReset(&pReader->status.fileBlockData);
       resetDataBlockIterator(pBlockIter, pReader->order, pReader->status.pTableMap);
       goto _begin;
@@ -3498,10 +3536,11 @@ void tsdbReaderClose(STsdbReader* pReader) {
   tsdbDebug("%p :io-cost summary: head-file:%" PRIu64 ", head-file time:%.2f ms, SMA:%" PRId64
             " SMA-time:%.2f ms, fileBlocks:%" PRId64
             ", fileBlocks-time:%.2f ms, "
-            "build in-memory-block-time:%.2f ms, STableBlockScanInfo size:%.2f Kb %s",
-            pReader, pCost->headFileLoad, pCost->headFileLoadTime, pCost->smaData, pCost->smaLoadTime,
-            pCost->numOfBlocks, pCost->blockLoadTime, pCost->buildmemBlock,
-            numOfTables * sizeof(STableBlockScanInfo) / 1000.0, pReader->idStr);
+            "build in-memory-block-time:%.2f ms, lastBlocks:%" PRId64
+            ", lastBlocks-time:%.2f ms, STableBlockScanInfo size:%.2f Kb %s",
+            pReader, pCost->headFileLoad, pCost->headFileLoadTime, pCost->smaDataLoad, pCost->smaLoadTime,
+            pCost->numOfBlocks, pCost->blockLoadTime, pCost->buildmemBlock, pCost->lastBlockLoad,
+            pCost->lastBlockLoadTime, numOfTables * sizeof(STableBlockScanInfo) / 1000.0, pReader->idStr);
 
   taosMemoryFree(pReader->idStr);
   taosMemoryFree(pReader->pSchema);
@@ -3663,7 +3702,7 @@ int32_t tsdbRetrieveDatablockSMA(STsdbReader* pReader, SColumnDataAgg*** pBlockS
 
   double elapsed = (taosGetTimestampUs() - stime) / 1000.0;
   pReader->cost.smaLoadTime += elapsed;
-  pReader->cost.smaData += 1;
+  pReader->cost.smaDataLoad += 1;
 
   *pBlockStatis = pSup->plist;
 
@@ -893,7 +893,7 @@ int32_t catalogChkTbMetaVersion(SCatalog* pCtg, SRequestConnInfo *pConn, SArray*
     CTG_API_LEAVE(TSDB_CODE_CTG_INVALID_INPUT);
   }
 
-  SName   name;
+  SName   name = {0};
   int32_t sver = 0;
   int32_t tver = 0;
   int32_t tbNum = taosArrayGetSize(pTables);
@@ -860,8 +860,8 @@ int32_t handleLimitOffset(SOperatorInfo *pOperator, SLimitInfo* pLimitInfo, SSDa
 bool hasLimitOffsetInfo(SLimitInfo* pLimitInfo);
 void initLimitInfo(const SNode* pLimit, const SNode* pSLimit, SLimitInfo* pLimitInfo);
 
-void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow* pWin, SColumnInfoData* pTimeWindowData, int32_t offset,
-                      int32_t forwardStep, TSKEY* tsCol, int32_t numOfTotal, int32_t numOfOutput, int32_t order);
+void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, SColumnInfoData* pTimeWindowData, int32_t offset,
+                      int32_t forwardStep, int32_t numOfTotal, int32_t numOfOutput);
 
 int32_t extractDataBlockFromFetchRsp(SSDataBlock* pRes, char* pData, int32_t numOfOutput, SArray* pColList, char** pNextStart);
 void updateLoadRemoteInfo(SLoadRemoteDataInfo *pInfo, int32_t numOfRows, int32_t dataLen, int64_t startTs,
@@ -987,6 +987,7 @@ SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput,
     pCtx->end.key = INT64_MIN;
     pCtx->numOfParams = pExpr->base.numOfParams;
     pCtx->increase = false;
+    pCtx->isStream = false;
 
     pCtx->param = pFunct->pParam;
   }
@@ -378,15 +378,30 @@ void initExecTimeWindowInfo(SColumnInfoData* pColData, STimeWindow* pQueryWindow
 
 void cleanupExecTimeWindowInfo(SColumnInfoData* pColData) { colDataDestroy(pColData); }
 
-void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow* pWin,
-                      SColumnInfoData* pTimeWindowData, int32_t offset, int32_t forwardStep, TSKEY* tsCol,
-                      int32_t numOfTotal, int32_t numOfOutput, int32_t order) {
+typedef struct {
+  bool    hasAgg;
+  int32_t numOfRows;
+  int32_t startOffset;
+} SFunctionCtxStatus;
+
+static void functionCtxSave(SqlFunctionCtx* pCtx, SFunctionCtxStatus* pStatus) {
+  pStatus->hasAgg = pCtx->input.colDataAggIsSet;
+  pStatus->numOfRows = pCtx->input.numOfRows;
+  pStatus->startOffset = pCtx->input.startRowIndex;
+}
+
+static void functionCtxRestore(SqlFunctionCtx* pCtx, SFunctionCtxStatus* pStatus) {
+  pCtx->input.colDataAggIsSet = pStatus->hasAgg;
+  pCtx->input.numOfRows = pStatus->numOfRows;
+  pCtx->input.startRowIndex = pStatus->startOffset;
+}
+
+void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, SColumnInfoData* pTimeWindowData, int32_t offset,
+                      int32_t forwardStep, int32_t numOfTotal, int32_t numOfOutput) {
   for (int32_t k = 0; k < numOfOutput; ++k) {
     // keep it temporarily
-    // todo no need this??
-    bool    hasAgg = pCtx[k].input.colDataAggIsSet;
-    int32_t numOfRows = pCtx[k].input.numOfRows;
-    int32_t startOffset = pCtx[k].input.startRowIndex;
+    SFunctionCtxStatus status = {0};
+    functionCtxSave(&pCtx[k], &status);
 
     pCtx[k].input.startRowIndex = offset;
     pCtx[k].input.numOfRows = forwardStep;
@@ -424,9 +439,7 @@ void doApplyFunctions(SExecTaskInfo* taskInfo, SqlFunctionCtx* pCtx, STimeWindow
       }
 
       // restore it
-      pCtx[k].input.colDataAggIsSet = hasAgg;
-      pCtx[k].input.startRowIndex = startOffset;
-      pCtx[k].input.numOfRows = numOfRows;
+      functionCtxRestore(&pCtx[k], &status);
     }
   }
 }
@@ -277,7 +277,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
       }
 
       int32_t rowIndex = j - num;
-      doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->exprSupp.numOfExprs, TSDB_ORDER_ASC);
+      doApplyFunctions(pTaskInfo, pCtx, NULL, rowIndex, num, pBlock->info.rows, pOperator->exprSupp.numOfExprs);
 
       // assign the group keys or user input constant values if required
       doAssignGroupKeys(pCtx, pOperator->exprSupp.numOfExprs, pBlock->info.rows, rowIndex);
@@ -295,7 +295,7 @@ static void doHashGroupbyAgg(SOperatorInfo* pOperator, SSDataBlock* pBlock) {
     }
 
     int32_t rowIndex = pBlock->info.rows - num;
-    doApplyFunctions(pTaskInfo, pCtx, &w, NULL, rowIndex, num, NULL, pBlock->info.rows, pOperator->exprSupp.numOfExprs, TSDB_ORDER_ASC);
+    doApplyFunctions(pTaskInfo, pCtx, NULL, rowIndex, num, pBlock->info.rows, pOperator->exprSupp.numOfExprs);
     doAssignGroupKeys(pCtx, pOperator->exprSupp.numOfExprs, pBlock->info.rows, rowIndex);
   }
 }
@@ -641,8 +641,7 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num
     setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
     setNotInterpoWindowKey(pSup->pCtx, numOfExprs, RESULT_ROW_START_INTERP);
 
-    doApplyFunctions(pTaskInfo, pSup->pCtx, &w, &pInfo->twAggSup.timeWindowData, startPos, 0, tsCols, pBlock->info.rows,
-                     numOfExprs, pInfo->inputOrder);
+    doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, 0, pBlock->info.rows, numOfExprs);
 
     if (isResultRowInterpolated(pResult, RESULT_ROW_END_INTERP)) {
       closeResultRow(pr);
@@ -986,8 +985,8 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
   if ((!pInfo->ignoreExpiredData || !isCloseWindow(&win, &pInfo->twAggSup)) &&
       inSlidingWindow(&pInfo->interval, &win, &pBlock->info)) {
     updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &win, true);
-    doApplyFunctions(pTaskInfo, pSup->pCtx, &win, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
-                     pBlock->info.rows, numOfOutput, pInfo->inputOrder);
+    doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows,
+                     pBlock->info.rows, numOfOutput);
   }
 
   doCloseWindow(pResultRowInfo, pInfo, pResult);
@@ -1026,8 +1025,8 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
     doWindowBorderInterpolation(pInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pSup);
 
     updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
-    doApplyFunctions(pTaskInfo, pSup->pCtx, &nextWin, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
-                     pBlock->info.rows, numOfOutput, pInfo->inputOrder);
+    doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows,
+                     pBlock->info.rows, numOfOutput);
     doCloseWindow(pResultRowInfo, pInfo, pResult);
   }
 
@@ -1190,8 +1189,8 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
       }
 
       updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &window, false);
-      doApplyFunctions(pTaskInfo, pSup->pCtx, &window, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
-                       pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+      doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+                       pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
 
       // here we start a new session window
       doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
@@ -1215,8 +1214,8 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
   }
 
   updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pRowSup->win, false);
-  doApplyFunctions(pTaskInfo, pSup->pCtx, &pRowSup->win, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
-                   pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+  doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+                   pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
 }
 
 static SSDataBlock* doStateWindowAgg(SOperatorInfo* pOperator) {
@@ -1794,6 +1793,12 @@ void initIntervalDownStream(SOperatorInfo* downstream, uint16_t type, SAggSuppor
   pScanInfo->sessionSup.pIntervalAggSup = pSup;
 }
 
+void initStreamFunciton(SqlFunctionCtx* pCtx, int32_t numOfExpr) {
+  for (int32_t i = 0; i < numOfExpr; i++) {
+    pCtx[i].isStream = true;
+  }
+}
+
 SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
                                           SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
                                           STimeWindowAggSupp* pTwAggSupp, SIntervalPhysiNode* pPhyNode,
@@ -1836,6 +1841,7 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo*
   if (isStream) {
     ASSERT(numOfCols > 0);
     increaseTs(pSup->pCtx);
+    initStreamFunciton(pSup->pCtx, pSup->numOfExprs);
   }
 
   initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pInfo->win);
@@ -1934,8 +1940,8 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
 
       // pInfo->numOfRows data belong to the current session window
       updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &window, false);
-      doApplyFunctions(pTaskInfo, pSup->pCtx, &window, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
-                       pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+      doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+                       pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
 
       // here we start a new session window
       doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
@@ -1952,8 +1958,8 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
   }
 
   updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pRowSup->win, false);
-  doApplyFunctions(pTaskInfo, pSup->pCtx, &pRowSup->win, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
-                   pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+  doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, pRowSup->startRowIndex,
+                   pRowSup->numOfRows, pBlock->info.rows, numOfOutput);
 }
 
 static SSDataBlock* doSessionWindowAgg(SOperatorInfo* pOperator) {
@@ -2952,8 +2958,8 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc
       setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
     }
     updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
-    doApplyFunctions(pTaskInfo, pSup->pCtx, &nextWin, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
-                     pSDataBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
+    doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows,
+                     pSDataBlock->info.rows, numOfOutput);
     int32_t prevEndPos = (forwardRows - 1) * step + startPos;
     ASSERT(pSDataBlock->info.window.skey > 0 && pSDataBlock->info.window.ekey > 0);
     startPos = getNextQualifiedWindow(&pInfo->interval, &nextWin, &pSDataBlock->info, tsCols, prevEndPos, pInfo->order);
@@ -3330,6 +3336,7 @@ SOperatorInfo* createStreamFinalIntervalOperatorInfo(SOperatorInfo* downstream,
   SSDataBlock* pResBlock = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
 
   int32_t code = initAggInfo(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
+  initStreamFunciton(pOperator->exprSupp.pCtx, pOperator->exprSupp.numOfExprs);
   initBasicInfo(&pInfo->binfo, pResBlock);
 
   ASSERT(numOfCols > 0);
@@ -3471,6 +3478,7 @@ int32_t initBasicInfoEx(SOptrBasicInfo* pBasicInfo, SExprSupp* pSup, SExprInfo*
   if (code != TSDB_CODE_SUCCESS) {
     return code;
   }
+  initStreamFunciton(pSup->pCtx, pSup->numOfExprs);
 
   initBasicInfo(pBasicInfo, pResultBlock);
 
@@ -3776,8 +3784,7 @@ static int32_t doOneWindowAggImpl(int32_t tsColId, SOptrBasicInfo* pBinfo, SStre
     return TSDB_CODE_QRY_OUT_OF_MEMORY;
   }
   updateTimeWindowInfo(pTimeWindowData, &pCurWin->win, false);
-  doApplyFunctions(pTaskInfo, pSup->pCtx, &pCurWin->win, pTimeWindowData, startIndex, winRows, tsCols,
-                   pSDataBlock->info.rows, numOutput, TSDB_ORDER_ASC);
+  doApplyFunctions(pTaskInfo, pSup->pCtx, pTimeWindowData, startIndex, winRows, pSDataBlock->info.rows, numOutput);
   SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, pCurWin->pos.pageId);
   setBufPageDirty(bufPage, true);
   releaseBufPage(pAggSup->pResultBuf, bufPage);
@@ -4571,8 +4578,8 @@ SStateWindowInfo* getStateWindow(SStreamAggSupporter* pAggSup, TSKEY ts, uint64_
   return insertNewStateWindow(pWinInfos, ts, pKeyData, index + 1, pCol);
 }
 
-int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, SColumnInfoData* pKeyCol, int32_t rows,
-                              int32_t start, bool* allEqual, SHashObj* pSeDelete) {
+int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, uint64_t groupId,
+                              SColumnInfoData* pKeyCol, int32_t rows, int32_t start, bool* allEqual, SHashObj* pSeDeleted) {
   *allEqual = true;
   SStateWindowInfo* pWinInfo = taosArrayGet(pWinInfos, winIndex);
   for (int32_t i = start; i < rows; ++i) {
@@ -4592,9 +4599,10 @@ int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, S
       }
     }
     if (pWinInfo->winInfo.win.skey > pTs[i]) {
-      if (pSeDelete && pWinInfo->winInfo.isOutput) {
-        taosHashPut(pSeDelete, &pWinInfo->winInfo.pos, sizeof(SResultRowPosition), &pWinInfo->winInfo.win.skey,
-                    sizeof(TSKEY));
+      if (pSeDeleted && pWinInfo->winInfo.isOutput) {
+        SWinRes res = {.ts = pWinInfo->winInfo.win.skey, .groupId = groupId};
+        taosHashPut(pSeDeleted, &pWinInfo->winInfo.pos, sizeof(SResultRowPosition), &res,
+                    sizeof(SWinRes));
         pWinInfo->winInfo.isOutput = false;
       }
       pWinInfo->winInfo.win.skey = pTs[i];
@@ -4607,22 +4615,23 @@ int32_t updateStateWindowInfo(SArray* pWinInfos, int32_t winIndex, TSKEY* pTs, S
   return rows - start;
 }
 
-static void doClearStateWindows(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock, int32_t tsIndex, SColumn* pCol,
-                                int32_t keyIndex, SHashObj* pSeUpdated, SHashObj* pSeDeleted) {
+static void doClearStateWindows(SStreamAggSupporter* pAggSup, SSDataBlock* pBlock,
+                                int32_t tsIndex, SColumn* pCol, int32_t keyIndex, SHashObj* pSeUpdated, SHashObj* pSeDeleted) {
   SColumnInfoData* pTsColInfo = taosArrayGet(pBlock->pDataBlock, tsIndex);
   SColumnInfoData* pKeyColInfo = taosArrayGet(pBlock->pDataBlock, keyIndex);
   TSKEY*           tsCol = (TSKEY*)pTsColInfo->pData;
   bool             allEqual = false;
   int32_t          step = 1;
+  uint64_t         groupId = pBlock->info.groupId;
   for (int32_t i = 0; i < pBlock->info.rows; i += step) {
     char*   pKeyData = colDataGetData(pKeyColInfo, i);
     int32_t winIndex = 0;
-    SStateWindowInfo* pCurWin = getStateWindowByTs(pAggSup, tsCol[i], pBlock->info.groupId, &winIndex);
+    SStateWindowInfo* pCurWin = getStateWindowByTs(pAggSup, tsCol[i], groupId, &winIndex);
     if (!pCurWin) {
       continue;
     }
-    step = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCol, pKeyColInfo, pBlock->info.rows, i, &allEqual,
-                                 pSeDeleted);
+    step = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCol, groupId, pKeyColInfo,
+                                 pBlock->info.rows, i, &allEqual, pSeDeleted);
     ASSERT(isTsInWindow(pCurWin, tsCol[i]) || isEqualStateKey(pCurWin, pKeyData));
     taosHashRemove(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition));
     deleteWindow(pAggSup->pCurWins, winIndex, destroyStateWinInfo);
@ -4661,12 +4670,12 @@ static void doStreamStateAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBl
|
||||||
int32_t winIndex = 0;
|
int32_t winIndex = 0;
|
||||||
bool allEqual = true;
|
bool allEqual = true;
|
||||||
SStateWindowInfo* pCurWin =
|
SStateWindowInfo* pCurWin =
|
||||||
getStateWindow(pAggSup, tsCols[i], pSDataBlock->info.groupId, pKeyData, &pInfo->stateCol, &winIndex);
|
getStateWindow(pAggSup, tsCols[i], groupId, pKeyData, &pInfo->stateCol, &winIndex);
|
||||||
winRows = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCols, pKeyColInfo, pSDataBlock->info.rows, i,
|
winRows = updateStateWindowInfo(pAggSup->pCurWins, winIndex, tsCols, groupId, pKeyColInfo,
|
||||||
&allEqual, pInfo->pSeDeleted);
|
pSDataBlock->info.rows, i, &allEqual, pStDeleted);
|
||||||
if (!allEqual) {
|
if (!allEqual) {
|
||||||
appendOneRow(pAggSup->pScanBlock, &pCurWin->winInfo.win.skey, &pCurWin->winInfo.win.ekey,
|
appendOneRow(pAggSup->pScanBlock, &pCurWin->winInfo.win.skey, &pCurWin->winInfo.win.ekey,
|
||||||
&pSDataBlock->info.groupId);
|
&groupId);
|
||||||
taosHashRemove(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition));
|
taosHashRemove(pSeUpdated, &pCurWin->winInfo.pos, sizeof(SResultRowPosition));
|
||||||
deleteWindow(pAggSup->pCurWins, winIndex, destroyStateWinInfo);
|
deleteWindow(pAggSup->pCurWins, winIndex, destroyStateWinInfo);
|
||||||
continue;
|
continue;
|
||||||
|
@ -4830,9 +4839,7 @@ SOperatorInfo* createStreamStateAggOperatorInfo(SOperatorInfo* downstream, SPhys
|
||||||
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
|
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
|
||||||
pInfo->pSeDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
|
pInfo->pSeDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
|
||||||
pInfo->pDelIterator = NULL;
|
pInfo->pDelIterator = NULL;
|
||||||
// pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
|
pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
|
||||||
pInfo->pDelRes = createOneDataBlock(pInfo->binfo.pRes, false); // todo(liuyao) for delete
|
|
||||||
pInfo->pDelRes->info.type = STREAM_DELETE_RESULT; // todo(liuyao) for delete
|
|
||||||
pInfo->pChildren = NULL;
|
pInfo->pChildren = NULL;
|
||||||
pInfo->ignoreExpiredData = pStateNode->window.igExpired;
|
pInfo->ignoreExpiredData = pStateNode->window.igExpired;
|
||||||
|
|
||||||
|
@ -4938,8 +4945,8 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR
|
||||||
}
|
}
|
||||||
|
|
||||||
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &currWin, true);
|
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &currWin, true);
|
||||||
doApplyFunctions(pTaskInfo, pSup->pCtx, &currWin, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
|
doApplyFunctions(pTaskInfo, pSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
|
||||||
tsCols, pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
|
pBlock->info.rows, numOfOutput);
|
||||||
|
|
||||||
outputMergeAlignedIntervalResult(pOperatorInfo, tableGroupId, pResultBlock, miaInfo->curTs);
|
outputMergeAlignedIntervalResult(pOperatorInfo, tableGroupId, pResultBlock, miaInfo->curTs);
|
||||||
miaInfo->curTs = tsCols[currPos];
|
miaInfo->curTs = tsCols[currPos];
|
||||||
|
@ -4960,8 +4967,8 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR
|
||||||
}
|
}
|
||||||
|
|
||||||
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &currWin, true);
|
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &currWin, true);
|
||||||
doApplyFunctions(pTaskInfo, pSup->pCtx, &currWin, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
|
doApplyFunctions(pTaskInfo, pSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, currPos - startPos,
|
||||||
tsCols, pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
|
pBlock->info.rows, numOfOutput);
|
||||||
}
|
}
|
||||||
|
|
||||||
static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
|
static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
|
@ -5253,8 +5260,8 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo*
|
||||||
}
|
}
|
||||||
|
|
||||||
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &win, true);
|
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &win, true);
|
||||||
doApplyFunctions(pTaskInfo, pExprSup->pCtx, &win, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows, tsCols,
|
doApplyFunctions(pTaskInfo, pExprSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows,
|
||||||
pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
|
pBlock->info.rows, numOfOutput);
|
||||||
doCloseWindow(pResultRowInfo, iaInfo, pResult);
|
doCloseWindow(pResultRowInfo, iaInfo, pResult);
|
||||||
|
|
||||||
// output previous interval results after this interval (&win) is closed
|
// output previous interval results after this interval (&win) is closed
|
||||||
|
@ -5285,8 +5292,8 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo*
|
||||||
doWindowBorderInterpolation(iaInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pExprSup);
|
doWindowBorderInterpolation(iaInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pExprSup);
|
||||||
|
|
||||||
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &nextWin, true);
|
updateTimeWindowInfo(&iaInfo->twAggSup.timeWindowData, &nextWin, true);
|
||||||
doApplyFunctions(pTaskInfo, pExprSup->pCtx, &nextWin, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows,
|
doApplyFunctions(pTaskInfo, pExprSup->pCtx, &iaInfo->twAggSup.timeWindowData, startPos, forwardRows,
|
||||||
tsCols, pBlock->info.rows, numOfOutput, iaInfo->inputOrder);
|
pBlock->info.rows, numOfOutput);
|
||||||
doCloseWindow(pResultRowInfo, iaInfo, pResult);
|
doCloseWindow(pResultRowInfo, iaInfo, pResult);
|
||||||
|
|
||||||
// output previous interval results after this interval (&nextWin) is closed
|
// output previous interval results after this interval (&nextWin) is closed
|
||||||
|
|
|
@@ -817,6 +817,7 @@ void nodesDestroyNode(SNode* pNode) {
       destroyLogicNode((SLogicNode*)pLogicNode);
       nodesDestroyNode(pLogicNode->pWStartTs);
       nodesDestroyNode(pLogicNode->pValues);
+      nodesDestroyList(pLogicNode->pFillExprs);
       break;
     }
     case QUERY_NODE_LOGIC_PLAN_SORT: {
@@ -1159,6 +1159,16 @@ void destoryParseMetaCache(SParseMetaCache* pMetaCache, bool request) {
     taosHashCleanup(pMetaCache->pTableMeta);
     taosHashCleanup(pMetaCache->pTableVgroup);
   }
+  SInsertTablesMetaReq* p = taosHashIterate(pMetaCache->pInsertTables, NULL);
+  while (NULL != p) {
+    taosArrayDestroy(p->pTableMetaPos);
+    taosArrayDestroy(p->pTableMetaReq);
+    taosArrayDestroy(p->pTableVgroupPos);
+    taosArrayDestroy(p->pTableVgroupReq);
+
+    p = taosHashIterate(pMetaCache->pInsertTables, p);
+  }
+  taosHashCleanup(pMetaCache->pInsertTables);
   taosHashCleanup(pMetaCache->pDbVgroup);
   taosHashCleanup(pMetaCache->pDbCfg);
   taosHashCleanup(pMetaCache->pDbInfo);
@@ -149,13 +149,10 @@ int32_t qwExecTask(QW_FPARAMS_DEF, SQWTaskCtx *ctx, bool *queryStop) {
     }
   }
 
+_return:
+
   taosArrayDestroy(pResList);
   QW_RET(code);
-
-_return:
-
-  taosArrayDestroy(pResList);
-
-  return code;
 }
 
 int32_t qwGenerateSchHbRsp(SQWorker *mgmt, SQWSchStatus *sch, SQWHbInfo *hbInfo) {
@@ -171,7 +171,7 @@ SRaftCfg *raftCfgOpen(const char *path) {
 
   taosLSeekFile(pCfg->pFile, 0, SEEK_SET);
 
-  char buf[1024] = {0};
+  char buf[CONFIG_FILE_LEN] = {0};
   int  len = taosReadFile(pCfg->pFile, buf, sizeof(buf));
   ASSERT(len > 0);
 
@@ -5,15 +5,15 @@ sleep 50
 sql connect
 
 print =============== create database
-sql create database test vgroups 1
-sql select * from information_schema.ins_databases
+sql create database test vgroups 1;
+sql select * from information_schema.ins_databases;
 if $rows != 3 then
   return -1
 endi
 
 print $data00 $data01 $data02
 
-sql use test
+sql use test;
 
 sql create table t1(ts timestamp, a int, b int , c int, d double, id int);
 sql create stream streams1 trigger at_once into streamt1 as select _wstart, count(*) c1, count(d) c2 , sum(a) c3 , max(a) c4, min(c) c5, max(id) c from t1 state_window(a);
@@ -114,8 +114,8 @@ sql select tbcol5 - tbcol3 from stb
 sql select spread( tbcol2 )/44, spread(tbcol2), 0.204545455 * 44 from stb;
 sql select min(tbcol) * max(tbcol) /4, sum(tbcol2) * apercentile(tbcol2, 20), apercentile(tbcol2, 33) + 52/9 from stb;
 sql select distinct(tbname), tgcol from stb;
-#sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
-#sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 1 soffset 1;
+sql select sum(tbcol) from stb partition by tbname interval(1s) slimit 2 soffset 4 limit 10 offset 1;
 
 print =============== step5: explain
 sql explain analyze select ts from stb where -2;
@@ -66,7 +66,7 @@ $null=
 
 system_content sh/checkValgrind.sh -n dnode1
 print cmd return result ----> [ $system_content ]
-if $system_content > 2 then
+if $system_content > 0 then
   return -1
 endi
 
@@ -143,7 +143,7 @@ $null=
 
 system_content sh/checkValgrind.sh -n dnode1
 print cmd return result ----> [ $system_content ]
-if $system_content > 2 then
+if $system_content > 0 then
   return -1
 endi
 