Merge branch '3.0' into test/jcy
This commit is contained in: commit ccca39f60b

@@ -7,25 +7,18 @@
- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. Please give a **detailed description** of the problem you encountered, ideally with the exact steps needed to reproduce it.

- Attachments containing the log files generated by the bug are welcome.

## Important code submission rules

## Code submission rules

- Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept the agreement, please stop the submission.
- Please fix an issue or add a feature that is registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).
- If no corresponding issue or feature is found in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.
- When submitting code to our repository, please create a **PR that includes the issue number**.

1. Before submitting code, you must **agree to the Contributor License Agreement (CLA)**. Click [TaosData CLA](https://cla-assistant.io/taosdata/TDengine) to read and sign the agreement. If you do not accept the agreement, please stop the submission.

2. Please fix an issue or add a feature that is registered in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues).

   If no corresponding issue or feature is found in the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues), please **create a new issue**.

   When submitting code to our repository, please create a **PR that includes the issue number**.

3. Fork the TDengine repository into your own account and create a branch.

   Note: the default branch `main` cannot accept PRs directly; please create your own branch from the development branch `3.0`.

   Note: branches that only change documentation should start with `docs/` so that unnecessary tests are skipped (see the branch-name sketch after this list).

4. Create a pull request to merge your branch into the development branch `3.0`; our development team will review it as soon as possible.
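
To make the branch rules above concrete, here is a minimal sketch; the account placeholder, branch names, and issue number are hypothetical:

```bash
# Hypothetical branch names following the rules above
git clone https://github.com/<your-account>/TDengine.git && cd TDengine
git checkout -b docs/fix-contributing-typos origin/3.0   # documentation-only change, skips tests
git checkout -b fix/issue-1234 origin/3.0                # code change referencing an issue number
```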

## Contribution guidelines

1. Please write in a friendly tone.

2. The **active voice** is generally preferable to the passive voice. A sentence in the active voice highlights who performs the action, whereas the passive voice highlights the recipient of the action.

3. Documentation writing advice

   - Spell the product name "TDengine" correctly: "TD" is capitalized, and there is no space between "TD" and "engine" **(correct spelling: TDengine)**.
   - Leave only one space after a period or other punctuation mark.

4. **Use simple sentences** rather than complex sentences whenever possible.

If you encounter any problems, please add our official WeChat account, TDengineECO, and our team will help you resolve them.

## Gifts for contributors

@@ -1,40 +1,36 @@
# Contributing

# Contributing to TDengine

We appreciate contributions from all developers. Feel free to follow us, fork the repository, report bugs, and even submit your code on GitHub. However, we would like developers to follow the guidelines in this document to ensure effective cooperation.

TDengine Community Edition is free, open-source software. Its development is led by the TDengine Team, but we welcome contributions from all community members and open-source developers. This document describes how you can contribute, no matter whether you're a user or a developer yourself.

## Reporting a bug

## Bug reports

- Any user can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. We would appreciate it if you could provide **a detailed description** of the problem you encountered, including steps to reproduce it.

All users can report bugs to us through the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**. To ensure that the development team can locate and resolve the issue that you experienced, please include the following in your bug report:

- Attaching log files generated by the bug is greatly appreciated.

- A detailed description of the issue, including the steps to reproduce it.
- Any log files that may be relevant to the issue.

## Guidelines for committing code

## Code contributions

- You must agree to the **Contributor License Agreement (CLA)** before submitting your code patch. Follow the **[TAOSData CLA](https://cla-assistant.io/taosdata/TDengine)** link to read through and sign the agreement. If you do not accept the agreement, your contributions cannot be accepted.

Developers are encouraged to submit patches to the project, and all contributions, from minor documentation changes to bug fixes, are appreciated by our team. To ensure that your code can be merged successfully and improve the experience for other community members, we ask that you go through the following procedure before submitting a pull request:

- Please solve an issue or add a feature registered in the **[GitHub issue tracker](https://github.com/taosdata/TDengine/issues)**.
- If no corresponding issue or feature is found in the issue tracker, please **create one**.
- When submitting your code to our repository, please create a pull request with the **issue number** included.

1. Read and accept the terms of the TAOS Data Contributor License Agreement (CLA) located at [https://cla-assistant.io/taosdata/TDengine](https://cla-assistant.io/taosdata/TDengine).

## Guidelines for communicating

2. For bug fixes, search the [GitHub issue tracker](https://github.com/taosdata/TDengine/issues) to check whether the bug has already been filed.

   - If the bug that you want to fix already exists in the issue tracker, review the previous discussion before submitting your patch.
   - If the bug that you want to fix does not exist in the issue tracker, click **New issue** and file a report.
   - Ensure that you note the issue number in your pull request when you submit your patch.

3. Fork our repository to your GitHub account and create a branch for your patch.

   **Important:** The `main` branch is for stable versions and cannot accept patches directly. For all code and documentation changes, create your own branch from the development branch `3.0` and not from `main`.

   Note: For a documentation change, ensure that the branch name starts with `docs/` so that the change can be merged without running tests.

4. Create a pull request to merge your changes into the development branch `3.0`, and our team members will review the request as soon as possible.

1. Please be **nice and polite** in the description.

2. **Active voice is better than passive voice in general.** A sentence in the active voice highlights who is performing the action, whereas the passive voice highlights the recipient of the action.

3. Documentation writing advice

If you encounter any difficulties or problems in contributing your code, you can join our [Discord server](https://discord.com/invite/VZdSuUg4pS) and receive assistance from members of the TDengine Team.

- Spell the product name "TDengine" correctly. "TD" is written in capital letters, and there is no space between "TD" and "engine" (**Correct spelling: TDengine**).
- Please **capitalize the first letter** of every sentence.
- Leave **only one space** after periods or other punctuation marks.
- Use **American spelling**.
- When possible, **use second person** rather than first person (e.g. "You are recommended to use a reverse proxy such as Nginx." rather than "We recommend to use a reverse proxy such as Nginx.").

## Expressing our thanks

5. Use **simple sentences**, rather than complex sentences.

## Gifts for the contributors

Developers, as long as you contribute to TDengine, whether it's code contributions to fix bugs or feature requests, or documentation changes, **you are eligible for a very special Contributor Souvenir Gift!**

**You can choose one of the following gifts:**

To thank community members for your support, we are offering a free gift to any developer who submits at least one contribution. You can choose one of the following items:

<p align="left">
  <img

@@ -53,12 +49,10 @@ Developers, as long as you contribute to TDengine, whether it's code contributio

    width="200"
  />

The TDengine community is committed to making TDengine accepted and used by more developers.

If you would like to claim your gift, send an email to [developer@tdengine.com](mailto:developer@tdengine.com?subject=Claiming%20my%20developer%20gift) including the following information:

Just fill out the **Contributor Submission Form** to choose your desired gift.

- Your GitHub account name
- Your name and mailing address
- Your preferred gift

- [Contributor Submission Form](https://page.ma.scrmtech.com/form/index?pf_uid=27715_2095&id=12100)

## Contact us

If you have any problems or questions that need help from us, please feel free to add our WeChat account: TDengineECO.

Note: Limit one per person.

@@ -303,7 +303,7 @@ def pre_test_build_win() {
set CL=/MP8
echo ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> cmake"
time /t
cmake .. -G "NMake Makefiles JOM" -DBUILD_TEST=true || exit 7
cmake .. -G "NMake Makefiles JOM" -DBUILD_TEST=true -DBUILD_TOOLS=true || exit 7
echo ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> jom -j 6"
time /t
jom -j 6 || exit 8
@@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
    GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
    GIT_TAG e00ebd9
    GIT_TAG efa2a5f
    SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
    BINARY_DIR ""
    #BUILD_IN_SOURCE TRUE
@@ -20,7 +20,18 @@ import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```

### Join TDengine Community

## Study TDengine Knowledge Map

The TDengine Knowledge Map covers the various knowledge points of TDengine, revealing the invocation relationships and data flow between various conceptual entities. Learning and understanding the TDengine Knowledge Map will help you quickly master the TDengine knowledge system.

<figure>
<center>
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
<figcaption>Diagram 1. TDengine Knowledge Map</figcaption>
</center>
</figure>

## Join TDengine Community

<table width="100%">
<tr align="center" style={{border:0}}>
@@ -301,7 +301,7 @@ SELECT TIMEZONE();
### Syntax

```txt
WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
WHERE (column|tbname) match/MATCH/nmatch/NMATCH _regex_
```
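
The change above drops the stray markdown emphasis from inside the code fence. As a quick, hypothetical illustration of the resulting syntax (the database, table, and pattern below are made up), the operator can be exercised from the TDengine CLI:

```bash
# Hypothetical example: count rows from child tables whose names start with "d100"
taos -s "SELECT COUNT(*) FROM test.meters WHERE tbname MATCH '^d100';"
```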

### Specification
@@ -26,14 +26,13 @@ Using REST connection can support a broader range of operating systems as it doe

TDengine version updates often add new features, and the connector versions in the list are the best-fit versions of the connector.

| **TDengine Versions** | **Java** | **Python**      | **Go**         | **C#**        | **Node.js**     | **Rust**        |
| --------------------- | -------- | --------------- | -------------- | ------------- | --------------- | --------------- |
| **3.0.0.0 and later** | 3.0.0    | current version | 3.0 branch     | 3.0.0         | 3.0.0           | current version |
| **2.4.0.14 and up**   | 2.0.38   | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.6 and up**    | 2.0.37   | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.4 - 2.4.0.5** | 2.0.37   | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.2.x.x**           | 2.0.36   | current version | master branch  | n/a           | 2.0.7 - 2.0.9   | current version |
| **2.0.x.x**           | 2.0.34   | current version | master branch  | n/a           | 2.0.1 - 2.0.6   | current version |

| **TDengine Versions**   | **Java** | **Python**     | **Go**         | **C#**        | **Node.js**     | **Rust**        |
| ----------------------- | -------- | -------------- | -------------- | ------------- | --------------- | --------------- |
| **3.0.0.0 and later**   | 3.0.2 +  | latest version | 3.0 branch     | 3.0.0         | 3.0.0           | current version |
| **2.4.0.14 and up**     | 2.0.38   | latest version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.4 - 2.4.0.13**  | 2.0.37   | latest version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.2.x.x**             | 2.0.36   | latest version | master branch  | n/a           | 2.0.7 - 2.0.9   | current version |
| **2.0.x.x**             | 2.0.34   | latest version | master branch  | n/a           | 2.0.1 - 2.0.6   | current version |
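
Since "latest version" for the Python connector means whatever is currently published, one way to check what is installed locally (assuming the connector was installed via pip under its package name, taospy) is:

```bash
# Check the locally installed Python connector version (package name: taospy)
pip3 show taospy | grep -i version
```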

## Functional Features

@@ -4,6 +4,7 @@ from taos.tmq import *
conn = taos.connect()

print("init")
conn.execute("drop topic if exists topic_ctb_column")
conn.execute("drop database if exists py_tmq")
conn.execute("create database if not exists py_tmq vgroups 2")
conn.select_db("py_tmq")

@@ -15,7 +16,6 @@ conn.execute("create table if not exists tb2 using stb1 tags(2)")
conn.execute("create table if not exists tb3 using stb1 tags(3)")

print("create topic")
conn.execute("drop topic if exists topic_ctb_column")
conn.execute(
    "create topic if not exists topic_ctb_column as select ts, c1, c2, c3 from stb1"
)
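
After this setup script runs (its filename is not shown in the diff; `tmq_setup.py` below is a stand-in), the moved `drop topic` call executes right before topic creation, and the new topic can be verified from the TDengine CLI:

```bash
# Run the setup script above, then confirm the topic was created
python3 tmq_setup.py          # hypothetical name for the script shown above
taos -s "SHOW TOPICS;"        # should list topic_ctb_column
```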

@@ -26,14 +26,13 @@ TDengine provides a rich set of application development interfaces; to help users quickly

TDengine version updates often add new features; the connector versions in the list are the best-fit versions for each connector.

| **TDengine version**   | **Java** | **Python**      | **Go**         | **C#**        | **Node.js**     | **Rust**        |
| ---------------------- | -------- | --------------- | -------------- | ------------- | --------------- | --------------- |
| **3.0.0.0 and later**  | 3.0.0    | current version | 3.0 branch     | 3.0.0         | 3.0.0           | current version |
| **2.4.0.14 and later** | 2.0.38   | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.6 and later**  | 2.0.37   | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.4 - 2.4.0.5**  | 2.0.37   | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.2.x.x**            | 2.0.36   | current version | master branch  | n/a           | 2.0.7 - 2.0.9   | current version |
| **2.0.x.x**            | 2.0.34   | current version | master branch  | n/a           | 2.0.1 - 2.0.6   | current version |

| **TDengine version**    | **Java**        | **Python**      | **Go**         | **C#**        | **Node.js**     | **Rust**        |
| ----------------------- | --------------- | --------------- | -------------- | ------------- | --------------- | --------------- |
| **3.0.0.0 and later**   | 3.0.2 and later | current version | 3.0 branch     | 3.0.0         | 3.0.0           | current version |
| **2.4.0.14 and later**  | 2.0.38          | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.4.0.4 - 2.4.0.13**  | 2.0.37          | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version |
| **2.2.x.x**             | 2.0.36          | current version | master branch  | n/a           | 2.0.7 - 2.0.9   | current version |
| **2.0.x.x**             | 2.0.34          | current version | master branch  | n/a           | 2.0.1 - 2.0.6   | current version |

## Features
@@ -302,7 +302,7 @@ SELECT TIMEZONE();
### Syntax

```txt
WHERE (column|tbname) **match/MATCH/nmatch/NMATCH** _regex_
WHERE (column|tbname) match/MATCH/nmatch/NMATCH _regex_
```

### Regular expression specification
@@ -36,7 +36,7 @@ extern "C" {
#define SYNC_DEL_WAL_MS (1000 * 60)
#define SYNC_ADD_QUORUM_COUNT 3
#define SYNC_MNODE_LOG_RETENTION 10000
#define SYNC_VNODE_LOG_RETENTION 100
#define SYNC_VNODE_LOG_RETENTION 20
#define SNAPSHOT_MAX_CLOCK_SKEW_MS 1000 * 10
#define SNAPSHOT_WAIT_MS 1000 * 30
@@ -40,27 +40,32 @@ pipeline {
choice(
    name: 'sourcePath',
    choices: ['nas','web'],
    description: 'choice which way to download the installation pacakge;web is Office Web and nas means taos nas server '
    description: 'Choice which way to download the installation pacakge;web is Office Web and nas means taos nas server '
)
choice(
    name: 'verMode',
    choices: ['all','community','enterprise'],
    description: 'Choice which types of package you want do check '
)
string (
    name:'version',
    defaultValue:'3.0.1.6',
    description: 'release version number,eg: 3.0.0.1 or 3.0.0.'
    defaultValue:'3.0.1.7',
    description: 'Release version number,eg: 3.0.0.1 or 3.0.0.'
)
string (
    name:'baseVersion',
    defaultValue:'3.0.1.6',
    description: 'This number of baseVerison is generally not modified.Now it is 3.0.0.1'
    defaultValue:'3.0.1.7',
    description: 'The number of baseVerison is generally not modified.Now it is 3.0.0.1'
)
string (
    name:'toolsVersion',
    defaultValue:'2.2.7',
    description: 'This number of baseVerison is generally not modified.Now it is 3.0.0.1'
    description: 'Release version number,eg:2.2.0'
)
string (
    name:'toolsBaseVersion',
    defaultValue:'2.1.2',
    description: 'This number of baseVerison is generally not modified.Now it is 3.0.0.1'
    description: 'The number of baseVerison is generally not modified.Now it is 2.1.2'
)
}
environment{
@@ -68,10 +73,10 @@ pipeline {
TDINTERNAL_ROOT_DIR = '/var/lib/jenkins/workspace/TDinternal'
TDENGINE_ROOT_DIR = '/var/lib/jenkins/workspace/TDinternal/community'
BRANCH_NAME = '3.0'

TD_SERVER_TAR = "TDengine-server-${version}-Linux-x64.tar.gz"

TD_SERVER_TAR = "${preServerPackag}-${version}-Linux-x64.tar.gz"
BASE_TD_SERVER_TAR = "TDengine-server-${baseVersion}-Linux-x64.tar.gz"

TD_SERVER_ARM_TAR = "TDengine-server-${version}-Linux-arm64.tar.gz"
BASE_TD_SERVER_ARM_TAR = "TDengine-server-${baseVersion}-Linux-arm64.tar.gz"
@@ -108,19 +113,28 @@ pipeline {
timeout(time: 30, unit: 'MINUTES'){
    sync_source("${BRANCH_NAME}")
    sh '''
    if [ "${verMode}" = "all" ];then
        verMode="community enterprise"
    fi
    verModeList=${verMode}
    for verModeSin in ${verModeList}
    do
        cd ${TDENGINE_ROOT_DIR}/packaging
        bash testpackage.sh -f server -m ${verModeSin} -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
        python3 checkPackageRuning.py
    done
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    '''
    sh '''
    bash testpackage.sh -f server -m community -f server -l true -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
    python3 checkPackageRuning.py
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    '''
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    bash testpackage.sh -f server -m community -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t deb
    python3 checkPackageRuning.py
    '''
}
}
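
The pattern repeated in these stages expands the `all` choice into both package modes before looping; a self-contained sketch of just that expansion:

```bash
#!/bin/sh
# Minimal reproduction of the verMode expansion used in the stages above
verMode="all"
if [ "${verMode}" = "all" ]; then
    verMode="community enterprise"
fi
for verModeSin in ${verMode}; do
    echo "would test package mode: ${verModeSin}"
done
# prints:
#   would test package mode: community
#   would test package mode: enterprise
```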
@@ -131,22 +145,30 @@ pipeline {
timeout(time: 30, unit: 'MINUTES'){
    sync_source("${BRANCH_NAME}")
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    '''
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    '''
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    dpkg -r tdengine
    if [ "${verMode}" = "all" ];then
        verMode="community enterprise"
    fi
    verModeList=${verMode}
    for verModeSin in ${verModeList}
    do
        cd ${TDENGINE_ROOT_DIR}/packaging
        bash testpackage.sh -f server -m ${verModeSin} -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
        python3 checkPackageRuning.py
    done
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh -f server -m community -f server -l true -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
    python3 checkPackageRuning.py
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh -f server -m community -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t deb
    python3 checkPackageRuning.py
    dpkg -r tdengine
    '''
}
}
}
@@ -156,19 +178,28 @@ pipeline {
timeout(time: 30, unit: 'MINUTES'){
    sync_source("${BRANCH_NAME}")
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    if [ "${verMode}" = "all" ];then
        verMode="community enterprise"
    fi
    verModeList=${verMode}
    for verModeSin in ${verModeList}
    do
        cd ${TDENGINE_ROOT_DIR}/packaging
        bash testpackage.sh -f server -m community -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
        python3 checkPackageRuning.py
    done
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh -f server -m community -f server -l true -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
    python3 checkPackageRuning.py
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    '''
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    bash testpackage.sh -f server -m community -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t rpm
    python3 checkPackageRuning.py
    '''
}
}
@@ -179,21 +210,30 @@ pipeline {
timeout(time: 30, unit: 'MINUTES'){
    sync_source("${BRANCH_NAME}")
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    if [ "${verMode}" = "all" ];then
        verMode="community enterprise"
    fi
    verModeList=${verMode}
    for verModeSin in ${verModeList}
    do
        cd ${TDENGINE_ROOT_DIR}/packaging
        bash testpackage.sh -f server -m ${verModeSin} -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
        python3 checkPackageRuning.py
    done
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    bash testpackage.sh -f server -m community -f server -l true -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
    python3 checkPackageRuning.py
    '''

    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    bash testpackage.sh -f server -m community -f server -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t rpm
    python3 checkPackageRuning.py
    sudo rpm -e tdengine
    '''
    '''
}
}
}
@@ -203,9 +243,16 @@ pipeline {
timeout(time: 30, unit: 'MINUTES'){
    sync_source("${BRANCH_NAME}")
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_SERVER_ARM_TAR} ${version} ${BASE_TD_SERVER_ARM_TAR} ${baseVersion} server ${sourcePath}
    python3 checkPackageRuning.py
    if [ "${verMode}" = "all" ];then
        verMode="community enterprise"
    fi
    verModeList=${verMode}
    for verModeSin in ${verModeList}
    do
        cd ${TDENGINE_ROOT_DIR}/packaging
        bash testpackage.sh -f server -m ${verModeSin} -f server -l false -c arm64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
        python3 checkPackageRuning.py
    done
    '''
}
}
@@ -219,8 +266,16 @@ pipeline {
steps {
timeout(time: 30, unit: 'MINUTES'){
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_CLIENT_TAR} ${version} ${BASE_TD_CLIENT_TAR} ${baseVersion} client ${sourcePath}
    if [ "${verMode}" = "all" ];then
        verMode="community enterprise"
    fi
    verModeList=${verMode}
    for verModeSin in ${verModeList}
    do
        cd ${TDENGINE_ROOT_DIR}/packaging
        bash testpackage.sh -f server -m ${verModeSin} -f client -l false -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
        python3 checkPackageRuning.py
    done
    python3 checkPackageRuning.py 192.168.0.21
    '''
}
@@ -231,8 +286,10 @@ pipeline {
steps {
timeout(time: 30, unit: 'MINUTES'){
    sh '''
    verModeList=community
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_CLIENT_LITE_TAR} ${version} ${BASE_TD_CLIENT_LITE_TAR} ${baseVersion} client ${sourcePath}
    bash testpackage.sh -f server -m ${verModeSin} -f client -l true -c x64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
    python3 checkPackageRuning.py
    python3 checkPackageRuning.py 192.168.0.24
    '''
}
@@ -245,8 +302,16 @@ pipeline {
steps {
timeout(time: 30, unit: 'MINUTES'){
    sh '''
    cd ${TDENGINE_ROOT_DIR}/packaging
    bash testpackage.sh ${TD_CLIENT_ARM_TAR} ${version} ${BASE_TD_CLIENT_ARM_TAR} ${baseVersion} client ${sourcePath}
    if [ "${verMode}" = "all" ];then
        verMode="community enterprise"
    fi
    verModeList=${verMode}
    for verModeSin in ${verModeList}
    do
        cd ${TDENGINE_ROOT_DIR}/packaging
        bash testpackage.sh -f server -m ${verModeSin} -f client -l false -c arm64 -v ${version} -o ${baseVersion} -s ${sourcePath} -t tar
        python3 checkPackageRuning.py
    done
    python3 checkPackageRuning.py 192.168.0.21
    '''
}
@@ -221,12 +221,12 @@ if [[ "$cpuType" == "x64" ]] || [[ "$cpuType" == "aarch64" ]] || [[ "$cpuType" =
    # community-version compile
    cmake ../ -DCPUTYPE=${cpuType} -DWEBSOCKET=true -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DPAGMODE=${pagMode} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro}
  elif [ "$verMode" == "cloud" ]; then
    cmake ../../ -DCPUTYPE=${cpuType} -DWEBSOCKET=true -DBUILD_CLOUD=true -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro}
    cmake ../../ -DCPUTYPE=${cpuType} -DWEBSOCKET=true -DBUILD_TAOSX=true -DBUILD_CLOUD=true -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro}
  elif [ "$verMode" == "cluster" ]; then
    if [[ "$dbName" != "taos" ]]; then
      replace_enterprise_$dbName
    fi
    cmake ../../ -DCPUTYPE=${cpuType} -DWEBSOCKET=true -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro}
    cmake ../../ -DCPUTYPE=${cpuType} -DWEBSOCKET=true -DBUILD_TAOSX=true -DOSTYPE=${osType} -DSOMODE=${soMode} -DDBNAME=${dbName} -DVERTYPE=${verType} -DVERDATE="${build_time}" -DGITINFO=${gitinfo} -DGITINFOI=${gitinfoOfInternal} -DVERNUMBER=${verNumber} -DVERCOMPATIBLE=${verNumberComp} -DBUILD_HTTP=${BUILD_HTTP} -DBUILD_TOOLS=${BUILD_TOOLS} ${allocator_macro}
  fi
else
  echo "input cpuType=${cpuType} error!!!"
@@ -1,15 +1,72 @@
#!/bin/sh

function usage() {
    echo "$0"
    echo -e "\t -f test file type,server/client/tools/"
    echo -e "\t -m pacakage version Type,community/enterprise"
    echo -e "\t -l package type,lite or not"
    echo -e "\t -c operation type,x64/arm64"
    echo -e "\t -v pacakage version,3.0.1.7"
    echo -e "\t -o pacakage version,3.0.1.7"
    echo -e "\t -s source Path,web/nas"
    echo -e "\t -t package Type,tar/rpm/deb"
    echo -e "\t -h help"
}

#parameter
scriptDir=$(dirname $(readlink -f $0))
packgeName=$1
version=$2
originPackageName=$3
originversion=$4
testFile=$5
# sourcePath:web/nas
sourcePath=$6
version="3.0.1.7"
originversion="3.0.1.7"
testFile="server"
verMode="communtity"
sourcePath="nas"
cpuType="x64"
lite="true"
packageType="tar"
subFile="taos.tar.gz"
while getopts "m:c:f:l:s:o:t:v:h" opt; do
    case $opt in
        m)
            verMode=$OPTARG
            ;;
        v)
            version=$OPTARG
            ;;
        f)
            testFile=$OPTARG
            ;;
        l)
            lite=$OPTARG
            ;;
        s)
            sourcePath=$OPTARG
            ;;
        o)
            originversion=$OPTARG
            ;;
        c)
            cpuType=$OPTARG
            ;;
        t)
            packageType=$OPTARG
            ;;
        h)
            usage
            exit 0
            ;;
        ?)
            echo "Invalid option: -$OPTARG"
            usage
            exit 0
            ;;
    esac
done

echo "testFile:${testFile},verMode:${verMode},lite:${lite},cpuType:${cpuType},packageType:${packageType},version-${version},originversion:${originversion},sourcePath:${sourcePath}"

# Color setting
RED='\033[41;30m'
GREEN='\033[1;32m'
@@ -21,20 +78,40 @@ BLUE_DARK='\033[0;34m'
GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m'

if [ ${testFile} = "server" ];then
    tdPath="TDengine-server-${version}"
    originTdpPath="TDengine-server-${originversion}"
if [[ ${verMode} = "enterprise" ]];then
    prePackag="TDengine-enterprise-${testFile}"
elif [ ${verMode} = "community" ];then
    prePackag="TDengine-${testFile}"
fi
if [ ${lite} = "true" ];then
    packageLite="-Lite"
elif [ ${lite} = "false" ];then
    packageLite=""
fi
if [[ "$packageType" = "tar" ]] ;then
    packageType="tar.gz"
fi

tdPath="${prePackag}-${version}"
originTdpPath="${prePackag}-${originversion}"

packgeName="${tdPath}-Linux-${cpuType}${packageLite}.${packageType}"
originPackageName="${originTdpPath}-Linux-${cpuType}${packageLite}.${packageType}"

if [ "$testFile" == "server" ] ;then
    installCmd="install.sh"
elif [ ${testFile} = "client" ];then
    tdPath="TDengine-client-${version}"
    originTdpPath="TDengine-client-${originversion}"
    installCmd="install_client.sh"
elif [ ${testFile} = "tools" ];then
    tdPath="taosTools-${version}"
    originTdpPath="taosTools-${originversion}"
    packgeName="${tdPath}-Linux-${cpuType}${packageLite}.${packageType}"
    originPackageName="${originTdpPath}-Linux-${cpuType}${packageLite}.${packageType}"
    installCmd="install-taostools.sh"
fi

echo "tdPath:${tdPath},originTdpPath:${originTdpPath},packgeName:${packgeName},originPackageName:${originPackageName}"
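
For orientation, with the values shown earlier (server, community, non-lite, x64, tar) the name assembly above resolves as follows; the enterprise/lite variant is included for contrast:

```bash
# verMode=community, lite=false, cpuType=x64, packageType=tar ->
#   packgeName=TDengine-server-3.0.1.7-Linux-x64.tar.gz
# verMode=enterprise, lite=true ->
#   packgeName=TDengine-enterprise-server-3.0.1.7-Linux-x64-Lite.tar.gz
```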

function cmdInstall {
    command=$1
    if command -v ${command} ;then
@@ -76,16 +153,16 @@ file=$1
versionPath=$2
sourceP=$3
nasServerIP="192.168.1.131"
packagePath="/nas/TDengine/v${versionPath}/community"
packagePath="/nas/TDengine/v${versionPath}/${verMode}"
if [ -f ${file} ];then
    echoColor YD "${file} already exists ,it will delete it and download it again "
    rm -rf ${file}
fi

if [ ${sourceP} = 'web' ];then
if [[ ${sourceP} = 'web' ]];then
    echoColor BD "====download====:wget https://www.taosdata.com/assets-download/3.0/${file}"
    wget https://www.taosdata.com/assets-download/3.0/${file}
elif [ ${sourceP} = 'nas' ];then
elif [[ ${sourceP} = 'nas' ]];then
    echoColor BD "====download====:scp root@${nasServerIP}:${packagePath}/${file} ."
    scp root@${nasServerIP}:${packagePath}/${file} .
fi
@@ -1106,6 +1106,8 @@ int taos_get_table_vgId(TAOS *taos, const char *db, const char *table, int *vgId
    return terrno;
  }

  pRequest->syncQuery = true;

  STscObj  *pTscObj = pRequest->pTscObj;
  SCatalog *pCtg = NULL;
  code = catalogGetHandle(pTscObj->pAppInfo->clusterId, &pCtg);
@@ -36,7 +36,7 @@ int32_t colDataGetFullLength(const SColumnInfoData* pColumnInfoData, int32_t num
  if (IS_VAR_DATA_TYPE(pColumnInfoData->info.type)) {
    return pColumnInfoData->varmeta.length + sizeof(int32_t) * numOfRows;
  } else {
    return pColumnInfoData->info.bytes * numOfRows + BitmapLen(numOfRows);
    return ((pColumnInfoData->info.type == TSDB_DATA_TYPE_NULL) ? 0 : pColumnInfoData->info.bytes * numOfRows) + BitmapLen(numOfRows);
  }
}
@@ -1084,8 +1084,6 @@ int32_t dataBlockCompar_rv(const void* p1, const void* p2, const void* param) {
  return 0;
}

int32_t varColSort(SColumnInfoData* pColumnInfoData, SBlockOrderInfo* pOrder) { return 0; }

int32_t blockDataSort_rv(SSDataBlock* pDataBlock, SArray* pOrderInfo, bool nullFirst) {
  // Allocate the additional buffer.
  int64_t p0 = taosGetTimestampUs();
@@ -1962,6 +1960,7 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf)
          memset(pBuf, 0, sizeof(pBuf));
          char*   pData = colDataGetVarData(pColInfoData, j);
          int32_t dataSize = TMIN(sizeof(pBuf), varDataLen(pData));
          dataSize = TMIN(dataSize, 50);
          memcpy(pBuf, varDataVal(pData), dataSize);
          len += snprintf(dumpBuf + len, size - len, " %15s |", pBuf);
          if (len >= size - 1) return dumpBuf;
@@ -277,7 +277,9 @@ static int32_t taosAddServerLogCfg(SConfig *pCfg) {
static int32_t taosAddClientCfg(SConfig *pCfg) {
  char    defaultFqdn[TSDB_FQDN_LEN] = {0};
  int32_t defaultServerPort = 6030;
  if (taosGetFqdn(defaultFqdn) != 0) return -1;
  if (taosGetFqdn(defaultFqdn) != 0) {
    strcpy(defaultFqdn, "localhost");
  }

  if (cfgAddString(pCfg, "firstEp", "", 1) != 0) return -1;
  if (cfgAddString(pCfg, "secondEp", "", 1) != 0) return -1;
@@ -148,8 +148,8 @@ static int32_t vmPutMsgToQueue(SVnodeMgmt *pMgmt, SRpcMsg *pMsg, EQueueType qtyp

  SVnodeObj *pVnode = vmAcquireVnode(pMgmt, pHead->vgId);
  if (pVnode == NULL) {
    dGError("vgId:%d, msg:%p failed to put into vnode queue since %s, type:%s qtype:%d", pHead->vgId, pMsg, terrstr(),
            TMSG_INFO(pMsg->msgType), qtype);
    dGError("vgId:%d, msg:%p failed to put into vnode queue since %s, type:%s qtype:%d contLen:%d", pHead->vgId, pMsg, terrstr(),
            TMSG_INFO(pMsg->msgType), qtype, pHead->contLen);
    return terrno != 0 ? terrno : -1;
  }
@@ -203,10 +203,13 @@ static int32_t tdProcessTSmaInsertImpl(SSma *pSma, int64_t indexUid, const char
    goto _err;
  }

  SBatchDeleteReq deleteReq;
  SBatchDeleteReq deleteReq = {0};
  SSubmitReq     *pSubmitReq =
      tqBlockToSubmit(pSma->pVnode, (const SArray *)msg, pTsmaStat->pTSchema, &pTsmaStat->pTSma->schemaTag, true,
                      pTsmaStat->pTSma->dstTbUid, pTsmaStat->pTSma->dstTbName, &deleteReq);
  // TODO deleteReq
  taosArrayDestroy(deleteReq.deleteReqs);

  if (!pSubmitReq) {
    smaError("vgId:%d, failed to gen submit blk while tsma insert for smaIndex %" PRIi64 " since %s", SMA_VID(pSma),
@@ -308,9 +308,8 @@ int tqPushMsg(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver)
  }

  if (vnodeIsRoleLeader(pTq->pVnode)) {
    if (taosHashGetSize(pTq->pStreamMeta->pTasks) == 0) return 0;
    if (msgType == TDMT_VND_SUBMIT) {
      if (taosHashGetSize(pTq->pStreamMeta->pTasks) == 0) return 0;

      void* data = taosMemoryMalloc(msgLen);
      if (data == NULL) {
        terrno = TSDB_CODE_OUT_OF_MEMORY;
@@ -236,6 +236,7 @@ SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchem
    SSDataBlock* pDataBlock = taosArrayGet(pBlocks, i);
    if (pDataBlock->info.type == STREAM_DELETE_RESULT) {
      pDeleteReq->suid = suid;
      pDeleteReq->deleteReqs = taosArrayInit(0, sizeof(SSingleDeleteReq));
      tqBuildDeleteReq(pVnode, stbFullName, pDataBlock, pDeleteReq);
      continue;
    }
@@ -287,12 +287,17 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32
          hasRes = true;
          p->ts = pColVal->ts;

          uint8_t* px = p->colVal.value.pData;
          p->colVal = pColVal->colVal;
          if (!IS_VAR_DATA_TYPE(pColVal->colVal.type)) {
            p->colVal = pColVal->colVal;
          } else {
            if (COL_VAL_IS_VALUE(&pColVal->colVal)) {
              memcpy(p->colVal.value.pData, pColVal->colVal.value.pData, pColVal->colVal.value.nData);
            }

          if (COL_VAL_IS_VALUE(&pColVal->colVal) && IS_VAR_DATA_TYPE(pColVal->colVal.type)) {
            p->colVal.value.pData = px;
            memcpy(px, pColVal->colVal.value.pData, pColVal->colVal.value.nData);
            p->colVal.value.nData = pColVal->colVal.value.nData;
            p->colVal.type = pColVal->colVal.type;
            p->colVal.flag = pColVal->colVal.flag;
            p->colVal.cid = pColVal->colVal.cid;
          }
        }
      }
@@ -305,7 +305,7 @@ static void* getPosInBlockInfoBuf(SBlockInfoBuf* pBuf, int32_t index) {
}

// NOTE: speedup the whole processing by preparing the buffer for STableBlockScanInfo in batch model
static SHashObj* createDataBlockScanInfo(STsdbReader* pTsdbReader, const STableKeyInfo* idList, int32_t numOfTables) {
static SHashObj* createDataBlockScanInfo(STsdbReader* pTsdbReader, SBlockInfoBuf* pBuf, const STableKeyInfo* idList, int32_t numOfTables) {
  // allocate buffer in order to load data blocks from file
  // todo use simple hash instead, optimize the memory consumption
  SHashObj* pTableMap =

@@ -315,10 +315,10 @@ static SHashObj* createDataBlockScanInfo(STsdbReader* pTsdbReader, const STableK
  }

  int64_t st = taosGetTimestampUs();
  initBlockScanInfoBuf(&pTsdbReader->blockInfoBuf, numOfTables);
  initBlockScanInfoBuf(pBuf, numOfTables);

  for (int32_t j = 0; j < numOfTables; ++j) {
    STableBlockScanInfo* pScanInfo = getPosInBlockInfoBuf(&pTsdbReader->blockInfoBuf, j);
    STableBlockScanInfo* pScanInfo = getPosInBlockInfoBuf(pBuf, j);
    pScanInfo->uid = idList[j].uid;
    if (ASCENDING_TRAVERSE(pTsdbReader->order)) {
      int64_t skey = pTsdbReader->window.skey;
@@ -329,20 +329,6 @@ static SHashObj* createDataBlockScanInfo(STsdbReader* pTsdbReader, const STableK
    }

    taosHashPut(pTableMap, &pScanInfo->uid, sizeof(uint64_t), &pScanInfo, POINTER_BYTES);

#if 0
//    STableBlockScanInfo info = {.lastKey = 0, .uid = idList[j].uid};
    if (ASCENDING_TRAVERSE(pTsdbReader->order)) {
      int64_t skey = pTsdbReader->window.skey;
      info.lastKey = (skey > INT64_MIN) ? (skey - 1) : skey;
    } else {
      int64_t ekey = pTsdbReader->window.ekey;
      info.lastKey = (ekey < INT64_MAX) ? (ekey + 1) : ekey;
    }

    taosHashPut(pTableMap, &info.uid, sizeof(uint64_t), &info, sizeof(info));
#endif

    tsdbTrace("%p check table uid:%" PRId64 " from lastKey:%" PRId64 " %s", pTsdbReader, pScanInfo->uid,
              pScanInfo->lastKey, pTsdbReader->idStr);
  }

@@ -361,11 +347,17 @@ static void resetAllDataBlockScanInfo(SHashObj* pTableMap, int64_t ts) {
    STableBlockScanInfo* pInfo = *(STableBlockScanInfo**)p;

    pInfo->iterInit = false;
    pInfo->iter.hasVal = false;
    pInfo->iiter.hasVal = false;

    if (pInfo->iter.iter != NULL) {
      pInfo->iter.iter = tsdbTbDataIterDestroy(pInfo->iter.iter);
    }

    if (pInfo->iiter.iter != NULL) {
      pInfo->iiter.iter = tsdbTbDataIterDestroy(pInfo->iiter.iter);
    }

    pInfo->delSkyline = taosArrayDestroy(pInfo->delSkyline);
    pInfo->lastKey = ts;
  }
@@ -373,6 +365,8 @@ static void resetAllDataBlockScanInfo(SHashObj* pTableMap, int64_t ts) {

static void clearBlockScanInfo(STableBlockScanInfo* p) {
  p->iterInit = false;

  p->iter.hasVal = false;
  p->iiter.hasVal = false;

  if (p->iter.iter != NULL) {

@@ -388,9 +382,9 @@ static void clearBlockScanInfo(STableBlockScanInfo* p) {
  tMapDataClear(&p->mapData);
}

static void destroyAllBlockScanInfo(SHashObj* pTableMap, bool clearEntry) {
static void destroyAllBlockScanInfo(SHashObj* pTableMap) {
  void* p = NULL;
  while (clearEntry && ((p = taosHashIterate(pTableMap, p)) != NULL)) {
  while ((p = taosHashIterate(pTableMap, p)) != NULL) {
    clearBlockScanInfo(*(STableBlockScanInfo**)p);
  }

@@ -2226,6 +2220,7 @@ static int32_t initMemDataIterator(STableBlockScanInfo* pBlockScanInfo, STsdbRea
  if (pReader->pReadSnap->pMem != NULL) {
    d = tsdbGetTbDataFromMemTable(pReader->pReadSnap->pMem, pReader->suid, pBlockScanInfo->uid);
    if (d != NULL) {
      ASSERT(pBlockScanInfo->iter.iter == NULL);
      code = tsdbTbDataIterCreate(d, &startKey, backward, &pBlockScanInfo->iter.iter);
      if (code == TSDB_CODE_SUCCESS) {
        pBlockScanInfo->iter.hasVal = (tsdbTbDataIterGet(pBlockScanInfo->iter.iter) != NULL);
@@ -3789,11 +3784,10 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, void* pTableL
    updateBlockSMAInfo(pReader->pSchema, &pReader->suppInfo);
  }

  STsdbReader* p = pReader->innerReader[0] != NULL ? pReader->innerReader[0] : pReader;

  pReader->status.pTableMap = createDataBlockScanInfo(p, pTableList, numOfTables);
  STsdbReader* p = (pReader->innerReader[0] != NULL)? pReader->innerReader[0]:pReader;
  pReader->status.pTableMap = createDataBlockScanInfo(p, &pReader->blockInfoBuf, pTableList, numOfTables);
  if (pReader->status.pTableMap == NULL) {
    tsdbReaderClose(pReader);
    tsdbReaderClose(p);
    *ppReader = NULL;

    code = TSDB_CODE_TDB_OUT_OF_MEMORY;

@@ -3849,7 +3843,7 @@ void tsdbReaderClose(STsdbReader* pReader) {
  }

  {
    if (pReader->innerReader[0] != NULL) {
    if (pReader->innerReader[0] != NULL || pReader->innerReader[1] != NULL) {
      STsdbReader* p = pReader->innerReader[0];

      p->status.pTableMap = NULL;

@@ -3887,9 +3881,12 @@ void tsdbReaderClose(STsdbReader* pReader) {
  cleanupDataBlockIterator(&pReader->status.blockIter);

  size_t numOfTables = taosHashGetSize(pReader->status.pTableMap);
  destroyAllBlockScanInfo(pReader->status.pTableMap, (pReader->innerReader[0] == NULL) ? true : false);
  if (pReader->status.pTableMap != NULL) {
    destroyAllBlockScanInfo(pReader->status.pTableMap);
    clearBlockScanInfoBuf(&pReader->blockInfoBuf);
  }

  blockDataDestroy(pReader->pResBlock);
  clearBlockScanInfoBuf(&pReader->blockInfoBuf);

  if (pReader->pFileReader != NULL) {
    tsdbDataFReaderClose(&pReader->pFileReader);

@@ -4118,9 +4115,13 @@ int32_t tsdbRetrieveDatablockSMA(STsdbReader* pReader, SColumnDataAgg*** pBlockS
    } else if (pAgg->colId < pSup->colIds[j]) {
      i += 1;
    } else if (pSup->colIds[j] < pAgg->colId) {
      if (pSup->colIds[j] == PRIMARYKEY_TIMESTAMP_COL_ID) {
        taosArrayPush(pNewAggList, &pSup->tsColAgg);
      } else {
        // all date in this block are null
        SColumnDataAgg nullColAgg = {.colId = pSup->colIds[j], .numOfNull = pBlock->nRow};
        taosArrayPush(pNewAggList, &nullColAgg);
        SColumnDataAgg nullColAgg = {.colId = pSup->colIds[j], .numOfNull = pBlock->nRow};
        taosArrayPush(pNewAggList, &nullColAgg);
      }
      j += 1;
    }
  }
@@ -609,6 +609,7 @@ static int32_t vnodeProcessCreateTbReq(SVnode *pVnode, int64_t version, void *pR
_exit:
  for (int32_t iReq = 0; iReq < req.nReqs; iReq++) {
    pCreateReq = req.pReqs + iReq;
    taosMemoryFree(pCreateReq->comment);
    taosArrayDestroy(pCreateReq->ctb.tagName);
  }
  taosArrayDestroyEx(rsp.pArray, tFreeSVCreateTbRsp);
@@ -2,10 +2,6 @@ aux_source_directory(src EXECUTOR_SRC)
#add_library(executor ${EXECUTOR_SRC})

add_library(executor STATIC ${EXECUTOR_SRC})
#set_target_properties(executor PROPERTIES
#    IMPORTED_LOCATION "${CMAKE_CURRENT_SOURCE_DIR}/libexecutor.a"
#    INTERFACE_INCLUDE_DIRECTORIES "${TD_SOURCE_DIR}/include/libs/executor"
#    )

target_link_libraries(executor
    PRIVATE os util common function parser planner qcom vnode scalar nodes index stream
@@ -235,16 +235,6 @@ typedef enum {
#define COL_MATCH_FROM_COL_ID 0x1
#define COL_MATCH_FROM_SLOT_ID 0x2

typedef struct SSourceDataInfo {
  int32_t            index;
  SRetrieveTableRsp* pRsp;
  uint64_t           totalRows;
  int64_t            startTime;
  int32_t            code;
  EX_SOURCE_STATUS   status;
  const char*        taskId;
} SSourceDataInfo;

typedef struct SLoadRemoteDataInfo {
  uint64_t totalSize;  // total load bytes from remote
  uint64_t totalRows;  // total number of rows

@@ -371,23 +361,8 @@ typedef struct STagScanInfo {
  SColMatchInfo   matchInfo;
  int32_t         curPos;
  SReadHandle     readHandle;
  STableListInfo* pTableList;
} STagScanInfo;

typedef struct SLastrowScanInfo {
  SSDataBlock*  pRes;
  SReadHandle   readHandle;
  void*         pLastrowReader;
  SColMatchInfo matchInfo;
  int32_t*      pSlotIds;
  SExprSupp     pseudoExprSup;
  int32_t       retrieveType;
  int32_t       currentGroupIndex;
  SSDataBlock*  pBufferredRes;
  SArray*       pUidList;
  int32_t       indexOfBufferedRes;
} SLastrowScanInfo;

typedef enum EStreamScanMode {
  STREAM_SCAN_FROM_READERHANDLE = 1,
  STREAM_SCAN_FROM_RES,
@@ -504,40 +479,6 @@ typedef struct {
  SSnapContext* sContext;
} SStreamRawScanInfo;

typedef struct SSysTableIndex {
  int8_t  init;
  SArray* uids;
  int32_t lastIdx;
} SSysTableIndex;

typedef struct SSysTableScanInfo {
  SRetrieveMetaTableRsp* pRsp;
  SRetrieveTableReq      req;
  SEpSet                 epSet;
  tsem_t                 ready;
  SReadHandle            readHandle;
  int32_t                accountId;
  const char*            pUser;
  bool                   sysInfo;
  bool                   showRewrite;
  SNode*                 pCondition;  // db_name filter condition, to discard data that are not in current database
  SMTbCursor*            pCur;        // cursor for iterate the local table meta store.
  SSysTableIndex*        pIdx;        // idx for local table meta
  SColMatchInfo          matchInfo;
  SName                  name;
  SSDataBlock*           pRes;
  int64_t                numOfBlocks;  // extract basic running information.
  SLoadRemoteDataInfo    loadInfo;
} SSysTableScanInfo;

typedef struct SBlockDistInfo {
  SSDataBlock* pResBlock;
  STsdbReader* pHandle;
  SReadHandle  readHandle;
  uint64_t     uid;  // table uid
} SBlockDistInfo;

// todo remove this
typedef struct SOptrBasicInfo {
  SResultRowInfo resultRowInfo;
  SSDataBlock*   pRes;
@@ -603,24 +544,6 @@ typedef struct SAggOperatorInfo {
  SExprSupp scalarExprSup;
} SAggOperatorInfo;

typedef struct SProjectOperatorInfo {
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;
  SArray*        pPseudoColInfo;
  SLimitInfo     limitInfo;
  bool           mergeDataBlocks;
  SSDataBlock*   pFinalRes;
} SProjectOperatorInfo;

typedef struct SIndefOperatorInfo {
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;
  SArray*        pPseudoColInfo;
  SExprSupp      scalarSup;
  uint64_t       groupId;
  SSDataBlock*   pNextGroupRes;
} SIndefOperatorInfo;

typedef struct SFillOperatorInfo {
  struct SFillInfo* pFillInfo;
  SSDataBlock*      pRes;

@@ -638,42 +561,12 @@ typedef struct SFillOperatorInfo {
  SExprSupp noFillExprSupp;
} SFillOperatorInfo;

typedef struct SGroupbyOperatorInfo {
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;
  SArray*        pGroupCols;     // group by columns, SArray<SColumn>
  SArray*        pGroupColVals;  // current group column values, SArray<SGroupKeys>
  bool           isInit;         // denote if current val is initialized or not
  char*          keyBuf;         // group by keys for hash
  int32_t        groupKeyLen;    // total group by column width
  SGroupResInfo  groupResInfo;
  SExprSupp      scalarSup;
} SGroupbyOperatorInfo;

typedef struct SDataGroupInfo {
  uint64_t groupId;
  int64_t  numOfRows;
  SArray*  pPageList;
} SDataGroupInfo;

// The sort in partition may be needed later.
typedef struct SPartitionOperatorInfo {
  SOptrBasicInfo binfo;
  SArray*        pGroupCols;
  SArray*        pGroupColVals;  // current group column values, SArray<SGroupKeys>
  char*          keyBuf;         // group by keys for hash
  int32_t        groupKeyLen;    // total group by column width
  SHashObj*      pGroupSet;      // quick locate the window object for each result

  SDiskbasedBuf* pBuf;               // query result buffer based on blocked-wised disk file
  int32_t        rowCapacity;        // maximum number of rows for each buffer page
  int32_t*       columnOffset;       // start position for each column data
  SArray*        sortedGroupArray;   // SDataGroupInfo sorted by group id
  int32_t        groupIndex;         // group index
  int32_t        pageIndex;          // page index of current group
  SExprSupp      scalarSup;
} SPartitionOperatorInfo;

typedef struct SWindowRowsSup {
  STimeWindow win;
  TSKEY       prevTs;
@@ -754,6 +647,23 @@ typedef struct SStreamPartitionOperatorInfo {
  SSDataBlock* pDelRes;
} SStreamPartitionOperatorInfo;

typedef struct SStreamFillSupporter {
  int32_t        type;  // fill type
  SInterval      interval;
  SResultRowData prev;
  SResultRowData cur;
  SResultRowData next;
  SResultRowData nextNext;
  SFillColInfo*  pAllColInfo;  // fill exprs and not fill exprs
  SExprSupp      notFillExprSup;
  int32_t        numOfAllCols;  // number of all exprs, including the tags columns
  int32_t        numOfFillCols;
  int32_t        numOfNotFillCols;
  int32_t        rowSize;
  SSHashObj*     pResMap;
  bool           hasDelete;
} SStreamFillSupporter;

typedef struct SStreamFillOperatorInfo {
  SStreamFillSupporter* pFillSup;
  SSDataBlock*          pRes;

@@ -800,33 +710,6 @@ typedef struct SStateWindowOperatorInfo {
  STimeWindowAggSupp twAggSup;
} SStateWindowOperatorInfo;

typedef struct SSortOperatorInfo {
  SOptrBasicInfo binfo;
  uint32_t       sortBufSize;  // max buffer size for in-memory sort
  SArray*        pSortInfo;
  SSortHandle*   pSortHandle;
  SColMatchInfo  matchInfo;
  int32_t        bufPageSize;
  int64_t        startTs;      // sort start time
  uint64_t       sortElapsed;  // sort elapsed time, time to flush to disk not included.
  SLimitInfo     limitInfo;
} SSortOperatorInfo;

typedef struct SJoinOperatorInfo {
  SSDataBlock* pRes;
  int32_t      joinType;
  int32_t      inputOrder;

  SSDataBlock* pLeft;
  int32_t      leftPos;
  SColumnInfo  leftCol;

  SSDataBlock* pRight;
  int32_t      rightPos;
  SColumnInfo  rightCol;
  SNode*       pCondAfterMerge;
} SJoinOperatorInfo;

#define OPTR_IS_OPENED(_optr) (((_optr)->status & OP_OPENED) == OP_OPENED)
#define OPTR_SET_OPENED(_optr) ((_optr)->status |= OP_OPENED)
@@ -850,7 +733,6 @@ void doBuildStreamResBlock(SOperatorInfo* pOperator, SOptrBasicInfo* pbInfo, SGr
void doBuildResultDatablock(SOperatorInfo* pOperator, SOptrBasicInfo* pbInfo, SGroupResInfo* pGroupResInfo,
                            SDiskbasedBuf* pBuf);

int32_t handleLimitOffset(SOperatorInfo* pOperator, SLimitInfo* pLimitInfo, SSDataBlock* pBlock, bool holdDataInBuf);
bool hasLimitOffsetInfo(SLimitInfo* pLimitInfo);
void initLimitInfo(const SNode* pLimit, const SNode* pSLimit, SLimitInfo* pLimitInfo);
void applyLimitOffset(SLimitInfo* pLimitInfo, SSDataBlock* pBlock, SExecTaskInfo* pTaskInfo, SOperatorInfo* pOperator);

@@ -880,9 +762,6 @@ void cleanupAggSup(SAggSupporter* pAggSup);
void appendOneRowToDataBlock(SSDataBlock* pBlock, STupleHandle* pTupleHandle);
void setTbNameColData(const SSDataBlock* pBlock, SColumnInfoData* pColInfoData, int32_t functionId, const char* name);

int32_t doPrepareScan(SOperatorInfo* pOperator, uint64_t uid, int64_t ts);
int32_t doGetScanStatus(SOperatorInfo* pOperator, uint64_t* uid, int64_t* ts);

SSDataBlock* loadNextDataBlock(void* param);

void setResultRowInitCtx(SResultRow* pResult, SqlFunctionCtx* pCtx, int32_t numOfOutput, int32_t* rowEntryInfoOffset);

@@ -965,9 +844,8 @@ void setInputDataBlock(SExprSupp* pExprSupp, SSDataBlock* pBlock, int32_t order,
bool isTaskKilled(SExecTaskInfo* pTaskInfo);
int32_t checkForQueryBuf(size_t numOfTables);

void setTaskKilled(SExecTaskInfo* pTaskInfo);
void queryCostStatis(SExecTaskInfo* pTaskInfo);

void setTaskKilled(SExecTaskInfo* pTaskInfo);
void queryCostStatis(SExecTaskInfo* pTaskInfo);
void doDestroyTask(SExecTaskInfo* pTaskInfo);
void destroyOperatorInfo(SOperatorInfo* pOperator);
int32_t getMaximumIdleDurationSec();

@@ -995,9 +873,6 @@ int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SRead
int32_t createDataSinkParam(SDataSinkNode* pNode, void** pParam, qTaskInfo_t* pTaskInfo, SReadHandle* readHandle);
int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SArray* pExecInfoList);

int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result);
int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* length);

STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts, SInterval* pInterval,
                                int32_t order);
int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, int32_t startPos, TSKEY ekey,
@@ -111,22 +111,6 @@ typedef struct SStreamFillInfo {
  int32_t delIndex;
} SStreamFillInfo;

typedef struct SStreamFillSupporter {
  int32_t        type;  // fill type
  SInterval      interval;
  SResultRowData prev;
  SResultRowData cur;
  SResultRowData next;
  SResultRowData nextNext;
  SFillColInfo*  pAllColInfo;   // fill exprs and not fill exprs
  int32_t        numOfAllCols;  // number of all exprs, including the tags columns
  int32_t        numOfFillCols;
  int32_t        numOfNotFillCols;
  int32_t        rowSize;
  SSHashObj*     pResMap;
  bool           hasDelete;
} SStreamFillSupporter;

int64_t getNumOfResultsAfterFillGap(SFillInfo* pFillInfo, int64_t ekey, int32_t maxNumOfRows);

void taosFillSetStartInfo(struct SFillInfo* pFillInfo, int32_t numOfRows, TSKEY endKey);
@@ -36,12 +36,13 @@ typedef struct SMultiMergeSource {

typedef struct SSortSource {
  SMultiMergeSource src;
  union {
    struct {
      SArray* pageIdList;
      int32_t pageIndex;
    };
    struct {
      SArray* pageIdList;
      int32_t pageIndex;
    };
    struct {
      void* param;
      bool  onlyRef;
    };

} SSortSource;

@@ -25,15 +25,29 @@
#include "thash.h"
#include "ttypes.h"

typedef struct SCacheRowsScanInfo {
  SSDataBlock*  pRes;
  SReadHandle   readHandle;
  void*         pLastrowReader;
  SColMatchInfo matchInfo;
  int32_t*      pSlotIds;
  SExprSupp     pseudoExprSup;
  int32_t       retrieveType;
  int32_t       currentGroupIndex;
  SSDataBlock*  pBufferredRes;
  SArray*       pUidList;
  int32_t       indexOfBufferedRes;
} SCacheRowsScanInfo;

static SSDataBlock* doScanCache(SOperatorInfo* pOperator);
static void         destroyLastrowScanOperator(void* param);
static void         destroyCacheScanOperator(void* param);
static int32_t      extractCacheScanSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds);
static int32_t      removeRedundantTsCol(SLastRowScanPhysiNode* pScanNode, SColMatchInfo* pColMatchInfo);

SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SReadHandle* readHandle,
                                           SExecTaskInfo* pTaskInfo) {
  int32_t code = TSDB_CODE_SUCCESS;
  SLastrowScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SLastrowScanInfo));
  SCacheRowsScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SCacheRowsScanInfo));
  SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
  if (pInfo == NULL || pOperator == NULL) {
    code = TSDB_CODE_OUT_OF_MEMORY;

@@ -97,14 +111,14 @@ SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SRe
  pOperator->exprSupp.numOfExprs = taosArrayGetSize(pInfo->pRes->pDataBlock);

  pOperator->fpSet =
      createOperatorFpSet(operatorDummyOpenFn, doScanCache, NULL, destroyLastrowScanOperator, NULL);
      createOperatorFpSet(operatorDummyOpenFn, doScanCache, NULL, destroyCacheScanOperator, NULL);

  pOperator->cost.openCost = 0;
  return pOperator;

_error:
  pTaskInfo->code = code;
  destroyLastrowScanOperator(pInfo);
  destroyCacheScanOperator(pInfo);
  taosMemoryFree(pOperator);
  return NULL;
}

@@ -114,7 +128,7 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) {
    return NULL;
  }

  SLastrowScanInfo* pInfo = pOperator->info;
  SCacheRowsScanInfo* pInfo = pOperator->info;
  SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
  STableListInfo* pTableList = pTaskInfo->pTableInfoList;

@@ -157,9 +171,12 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) {
        SColumnInfoData* pSrc = taosArrayGet(pInfo->pBufferredRes->pDataBlock, slotId);
        SColumnInfoData* pDst = taosArrayGet(pRes->pDataBlock, slotId);

        char* p = colDataGetData(pSrc, pInfo->indexOfBufferedRes);
        bool isNull = colDataIsNull_s(pSrc, pInfo->indexOfBufferedRes);
        colDataAppend(pDst, 0, p, isNull);
        if (colDataIsNull_s(pSrc, pInfo->indexOfBufferedRes)) {
          colDataAppendNULL(pDst, 0);
        } else {
          char* p = colDataGetData(pSrc, pInfo->indexOfBufferedRes);
          colDataAppend(pDst, 0, p, false);
        }
      }

      pRes->info.uid = *(tb_uid_t*)taosArrayGet(pInfo->pUidList, pInfo->indexOfBufferedRes);

@@ -226,6 +243,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) {

      pInfo->pLastrowReader = tsdbCacherowsReaderClose(pInfo->pLastrowReader);
      return pInfo->pRes;
    } else {
      pInfo->pLastrowReader = tsdbCacherowsReaderClose(pInfo->pLastrowReader);
    }
  }

@@ -234,8 +253,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) {
    }
  }

void destroyLastrowScanOperator(void* param) {
  SLastrowScanInfo* pInfo = (SLastrowScanInfo*)param;
void destroyCacheScanOperator(void* param) {
  SCacheRowsScanInfo* pInfo = (SCacheRowsScanInfo*)param;
  blockDataDestroy(pInfo->pRes);
  blockDataDestroy(pInfo->pBufferredRes);
  taosMemoryFree(pInfo->pSlotIds);

@@ -246,6 +265,7 @@ void destroyLastrowScanOperator(void* param) {
    pInfo->pLastrowReader = tsdbCacherowsReaderClose(pInfo->pLastrowReader);
  }

  cleanupExprSupp(&pInfo->pseudoExprSup);
  taosMemoryFreeClear(param);
}

@@ -250,6 +250,8 @@ static int32_t putDataBlock(SDataSinkHandle* pHandle, const SInputData* pInput,
    return code;
  }

  taosArrayClear(pInserter->pDataBlocks);

  code = sendSubmitRequest(pInserter, pMsg, pInserter->pParam->readHandle->pMsgCb->clientRpc, &pInserter->pNode->epSet);
  if (code) {
    return code;

@@ -41,6 +41,16 @@ typedef struct SFetchRspHandleWrapper {
  int32_t sourceIndex;
} SFetchRspHandleWrapper;

typedef struct SSourceDataInfo {
  int32_t            index;
  SRetrieveTableRsp* pRsp;
  uint64_t           totalRows;
  int64_t            startTime;
  int32_t            code;
  EX_SOURCE_STATUS   status;
  const char*        taskId;
} SSourceDataInfo;

static void destroyExchangeOperatorInfo(void* param);
static void freeBlock(void* pParam);
static void freeSourceDataInfo(void* param);

@@ -52,6 +62,7 @@ static int32_t getCompletedSources(const SArray* pArray);
static int32_t prepareConcurrentlyLoad(SOperatorInfo* pOperator);
static int32_t seqLoadRemoteData(SOperatorInfo* pOperator);
static int32_t prepareLoadRemoteData(SOperatorInfo* pOperator);
static int32_t handleLimitOffset(SOperatorInfo* pOperator, SLimitInfo* pLimitInfo, SSDataBlock* pBlock, bool holdDataInBuf);

static void concurrentlyLoadRemoteDataImpl(SOperatorInfo* pOperator, SExchangeInfo* pExchangeInfo,
                                           SExecTaskInfo* pTaskInfo) {

@ -647,3 +658,80 @@ int32_t prepareLoadRemoteData(SOperatorInfo* pOperator) {
|
|||
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t handleLimitOffset(SOperatorInfo* pOperator, SLimitInfo* pLimitInfo, SSDataBlock* pBlock, bool holdDataInBuf) {
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
if (pLimitInfo->currentGroupId == 0) { // it is the first group
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
blockDataCleanup(pBlock);
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
} else if (pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
// now it is the data from a new group
|
||||
pLimitInfo->remainGroupOffset -= 1;
|
||||
|
||||
// ignore data block in current group
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
blockDataCleanup(pBlock);
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
}
|
||||
}
|
||||
|
||||
// set current group id of the project operator
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
}
|
||||
|
||||
// here check for a new group data, we need to handle the data of the previous group.
|
||||
if (pLimitInfo->currentGroupId != 0 && pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
pLimitInfo->numOfOutputGroups += 1;
|
||||
if ((pLimitInfo->slimit.limit > 0) && (pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups)) {
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
blockDataCleanup(pBlock);
|
||||
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
// reset the value for a new group data
|
||||
pLimitInfo->numOfOutputRows = 0;
|
||||
pLimitInfo->remainOffset = pLimitInfo->limit.offset;
|
||||
|
||||
// existing rows that belongs to previous group.
|
||||
if (pBlock->info.rows > 0) {
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
}
|
||||
|
||||
// here we reach the start position, according to the limit/offset requirements.
|
||||
|
||||
// set current group id
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
|
||||
if (pLimitInfo->remainOffset >= pBlock->info.rows) {
|
||||
pLimitInfo->remainOffset -= pBlock->info.rows;
|
||||
blockDataCleanup(pBlock);
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
} else if (pLimitInfo->remainOffset < pBlock->info.rows && pLimitInfo->remainOffset > 0) {
|
||||
blockDataTrimFirstNRows(pBlock, pLimitInfo->remainOffset);
|
||||
pLimitInfo->remainOffset = 0;
|
||||
}
|
||||
|
||||
// check for the limitation in each group
|
||||
if (pLimitInfo->limit.limit >= 0 && pLimitInfo->numOfOutputRows + pBlock->info.rows >= pLimitInfo->limit.limit) {
|
||||
int32_t keepRows = (int32_t)(pLimitInfo->limit.limit - pLimitInfo->numOfOutputRows);
|
||||
blockDataKeepFirstNRows(pBlock, keepRows);
|
||||
if (pLimitInfo->slimit.limit > 0 && pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups) {
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
}
|
||||
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
// todo optimize performance
|
||||
// If there are slimit/soffset value exists, multi-round result can not be packed into one group, since the
|
||||
// they may not belong to the same group the limit/offset value is not valid in this case.
|
||||
if ((!holdDataInBuf) || (pBlock->info.rows >= pOperator->resultInfo.threshold) || pLimitInfo->slimit.offset != -1 ||
|
||||
pLimitInfo->slimit.limit != -1) {
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
} else { // not full enough, continue to accumulate the output data in the buffer.
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
}
|
||||
}
|
||||
|
|
|
@ -20,12 +20,10 @@
|
|||
#include "querynodes.h"
|
||||
#include "tfill.h"
|
||||
#include "tname.h"
|
||||
#include "tref.h"
|
||||
|
||||
#include "tdatablock.h"
|
||||
#include "tglobal.h"
|
||||
#include "tmsg.h"
|
||||
#include "tsort.h"
|
||||
#include "ttime.h"
|
||||
|
||||
#include "executorimpl.h"
|
||||
|
@ -134,45 +132,6 @@ static int32_t doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock,
|
|||
static void initCtxOutputBuffer(SqlFunctionCtx* pCtx, int32_t size);
|
||||
static void doSetTableGroupOutputBuf(SOperatorInfo* pOperator, int32_t numOfOutput, uint64_t groupId);
|
||||
|
||||
#if 0
|
||||
static bool chkResultRowFromKey(STaskRuntimeEnv* pRuntimeEnv, SResultRowInfo* pResultRowInfo, char* pData,
|
||||
int16_t bytes, bool masterscan, uint64_t uid) {
|
||||
bool existed = false;
|
||||
SET_RES_WINDOW_KEY(pRuntimeEnv->keyBuf, pData, bytes, uid);
|
||||
|
||||
SResultRow** p1 =
|
||||
(SResultRow**)taosHashGet(pRuntimeEnv->pResultRowHashTable, pRuntimeEnv->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
|
||||
|
||||
// in case of repeat scan/reverse scan, no new time window added.
|
||||
if (QUERY_IS_INTERVAL_QUERY(pRuntimeEnv->pQueryAttr)) {
|
||||
if (!masterscan) { // the *p1 may be NULL in case of sliding+offset exists.
|
||||
return p1 != NULL;
|
||||
}
|
||||
|
||||
if (p1 != NULL) {
|
||||
if (pResultRowInfo->size == 0) {
|
||||
existed = false;
|
||||
} else if (pResultRowInfo->size == 1) {
|
||||
// existed = (pResultRowInfo->pResult[0] == (*p1));
|
||||
} else { // check if current pResultRowInfo contains the existed pResultRow
|
||||
SET_RES_EXT_WINDOW_KEY(pRuntimeEnv->keyBuf, pData, bytes, uid, pResultRowInfo);
|
||||
int64_t* index =
|
||||
taosHashGet(pRuntimeEnv->pResultRowListSet, pRuntimeEnv->keyBuf, GET_RES_EXT_WINDOW_KEY_LEN(bytes));
|
||||
if (index != NULL) {
|
||||
existed = true;
|
||||
} else {
|
||||
existed = false;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return existed;
|
||||
}
|
||||
|
||||
return p1 != NULL;
|
||||
}
|
||||
#endif
|
||||
|
||||
SResultRow* getNewResultRow(SDiskbasedBuf* pResultBuf, int32_t* currentPageId, int32_t interBufSize) {
|
||||
SFilePage* pData = NULL;
|
||||
|
||||
|
@ -337,8 +296,6 @@ void initExecTimeWindowInfo(SColumnInfoData* pColData, STimeWindow* pQueryWindow
|
|||
colDataAppendInt64(pColData, 4, &pQueryWindow->ekey);
|
||||
}
|
||||
|
||||
void cleanupExecTimeWindowInfo(SColumnInfoData* pColData) { colDataDestroy(pColData); }
|
||||
|
||||
typedef struct {
|
||||
bool hasAgg;
|
||||
int32_t numOfRows;
|
||||
|
@ -1387,42 +1344,6 @@ void queryCostStatis(SExecTaskInfo* pTaskInfo) {
|
|||
}
|
||||
}
|
||||
|
||||
// static void updateOffsetVal(STaskRuntimeEnv *pRuntimeEnv, SDataBlockInfo *pBlockInfo) {
|
||||
// STaskAttr *pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
// STableQueryInfo* pTableQueryInfo = pRuntimeEnv->current;
|
||||
//
|
||||
// int32_t step = GET_FORWARD_DIRECTION_FACTOR(pQueryAttr->order.order);
|
||||
//
|
||||
// if (pQueryAttr->limit.offset == pBlockInfo->rows) { // current block will ignore completed
|
||||
// pTableQueryInfo->lastKey = QUERY_IS_ASC_QUERY(pQueryAttr) ? pBlockInfo->window.ekey + step :
|
||||
// pBlockInfo->window.skey + step; pQueryAttr->limit.offset = 0; return;
|
||||
// }
|
||||
//
|
||||
// if (QUERY_IS_ASC_QUERY(pQueryAttr)) {
|
||||
// pQueryAttr->pos = (int32_t)pQueryAttr->limit.offset;
|
||||
// } else {
|
||||
// pQueryAttr->pos = pBlockInfo->rows - (int32_t)pQueryAttr->limit.offset - 1;
|
||||
// }
|
||||
//
|
||||
// assert(pQueryAttr->pos >= 0 && pQueryAttr->pos <= pBlockInfo->rows - 1);
|
||||
//
|
||||
// SArray * pDataBlock = tsdbRetrieveDataBlock(pRuntimeEnv->pTsdbReadHandle, NULL);
|
||||
// SColumnInfoData *pColInfoData = taosArrayGet(pDataBlock, 0);
|
||||
//
|
||||
// // update the pQueryAttr->limit.offset value, and pQueryAttr->pos value
|
||||
// TSKEY *keys = (TSKEY *) pColInfoData->pData;
|
||||
//
|
||||
// // update the offset value
|
||||
// pTableQueryInfo->lastKey = keys[pQueryAttr->pos];
|
||||
// pQueryAttr->limit.offset = 0;
|
||||
//
|
||||
// int32_t numOfRes = tableApplyFunctionsOnBlock(pRuntimeEnv, pBlockInfo, NULL, binarySearchForKey, pDataBlock);
|
||||
//
|
||||
// //qDebug("QInfo:0x%"PRIx64" check data block, brange:%" PRId64 "-%" PRId64 ", numBlocksOfStep:%d, numOfRes:%d,
|
||||
// lastKey:%"PRId64, GET_TASKID(pRuntimeEnv),
|
||||
// pBlockInfo->window.skey, pBlockInfo->window.ekey, pBlockInfo->rows, numOfRes, pQuery->current->lastKey);
|
||||
// }
|
||||
|
||||
// void skipBlocks(STaskRuntimeEnv *pRuntimeEnv) {
|
||||
// STaskAttr *pQueryAttr = pRuntimeEnv->pQueryAttr;
|
||||
//
|
||||
|
@ -1763,159 +1684,6 @@ static SSDataBlock* getAggregateResult(SOperatorInfo* pOperator) {
|
|||
return (rows == 0) ? NULL : pInfo->pRes;
|
||||
}
|
||||
|
||||
int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* length) {
|
||||
if (result == NULL || length == NULL) {
|
||||
return TSDB_CODE_TSC_INVALID_INPUT;
|
||||
}
|
||||
SOptrBasicInfo* pInfo = (SOptrBasicInfo*)(pOperator->info);
|
||||
SAggSupporter* pSup = (SAggSupporter*)POINTER_SHIFT(pOperator->info, sizeof(SOptrBasicInfo));
|
||||
int32_t size = tSimpleHashGetSize(pSup->pResultRowHashTable);
|
||||
size_t keyLen = sizeof(uint64_t) * 2; // estimate the key length
|
||||
int32_t totalSize =
|
||||
sizeof(int32_t) + sizeof(int32_t) + size * (sizeof(int32_t) + keyLen + sizeof(int32_t) + pSup->resultRowSize);
|
||||
|
||||
// no result
|
||||
if (getTotalBufSize(pSup->pResultBuf) == 0) {
|
||||
*result = NULL;
|
||||
*length = 0;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
*result = (char*)taosMemoryCalloc(1, totalSize);
|
||||
if (*result == NULL) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
int32_t offset = sizeof(int32_t);
|
||||
*(int32_t*)(*result + offset) = size;
|
||||
offset += sizeof(int32_t);
|
||||
|
||||
// prepare memory
|
||||
SResultRowPosition* pos = &pInfo->resultRowInfo.cur;
|
||||
void* pPage = getBufPage(pSup->pResultBuf, pos->pageId);
|
||||
SResultRow* pRow = (SResultRow*)((char*)pPage + pos->offset);
|
||||
setBufPageDirty(pPage, true);
|
||||
releaseBufPage(pSup->pResultBuf, pPage);
|
||||
|
||||
int32_t iter = 0;
|
||||
void* pIter = NULL;
|
||||
while ((pIter = tSimpleHashIterate(pSup->pResultRowHashTable, pIter, &iter))) {
|
||||
void* key = tSimpleHashGetKey(pIter, &keyLen);
|
||||
SResultRowPosition* p1 = (SResultRowPosition*)pIter;
|
||||
|
||||
pPage = (SFilePage*)getBufPage(pSup->pResultBuf, p1->pageId);
|
||||
pRow = (SResultRow*)((char*)pPage + p1->offset);
|
||||
setBufPageDirty(pPage, true);
|
||||
releaseBufPage(pSup->pResultBuf, pPage);
|
||||
|
||||
// recalculate the result size
|
||||
int32_t realTotalSize = offset + sizeof(int32_t) + keyLen + sizeof(int32_t) + pSup->resultRowSize;
|
||||
if (realTotalSize > totalSize) {
|
||||
char* tmp = (char*)taosMemoryRealloc(*result, realTotalSize);
|
||||
if (tmp == NULL) {
|
||||
taosMemoryFree(*result);
|
||||
*result = NULL;
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
} else {
|
||||
*result = tmp;
|
||||
}
|
||||
}
|
||||
// save key
|
||||
*(int32_t*)(*result + offset) = keyLen;
|
||||
offset += sizeof(int32_t);
|
||||
memcpy(*result + offset, key, keyLen);
|
||||
offset += keyLen;
|
||||
|
||||
// save value
|
||||
*(int32_t*)(*result + offset) = pSup->resultRowSize;
|
||||
offset += sizeof(int32_t);
|
||||
memcpy(*result + offset, pRow, pSup->resultRowSize);
|
||||
offset += pSup->resultRowSize;
|
||||
}
|
||||
|
||||
*(int32_t*)(*result) = offset;
|
||||
*length = offset;
|
||||
|
||||
return TDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t handleLimitOffset(SOperatorInfo* pOperator, SLimitInfo* pLimitInfo, SSDataBlock* pBlock, bool holdDataInBuf) {
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
if (pLimitInfo->currentGroupId == 0) { // it is the first group
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
blockDataCleanup(pBlock);
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
} else if (pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
// now it is the data from a new group
|
||||
pLimitInfo->remainGroupOffset -= 1;
|
||||
|
||||
// ignore data block in current group
|
||||
if (pLimitInfo->remainGroupOffset > 0) {
|
||||
blockDataCleanup(pBlock);
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
}
|
||||
}
|
||||
|
||||
// set current group id of the project operator
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
}
|
||||
|
||||
// here check for a new group data, we need to handle the data of the previous group.
|
||||
if (pLimitInfo->currentGroupId != 0 && pLimitInfo->currentGroupId != pBlock->info.groupId) {
|
||||
pLimitInfo->numOfOutputGroups += 1;
|
||||
if ((pLimitInfo->slimit.limit > 0) && (pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups)) {
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
blockDataCleanup(pBlock);
|
||||
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
// reset the value for a new group data
|
||||
pLimitInfo->numOfOutputRows = 0;
|
||||
pLimitInfo->remainOffset = pLimitInfo->limit.offset;
|
||||
|
||||
// existing rows that belongs to previous group.
|
||||
if (pBlock->info.rows > 0) {
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
}
|
||||
|
||||
// here we reach the start position, according to the limit/offset requirements.
|
||||
|
||||
// set current group id
|
||||
pLimitInfo->currentGroupId = pBlock->info.groupId;
|
||||
|
||||
if (pLimitInfo->remainOffset >= pBlock->info.rows) {
|
||||
pLimitInfo->remainOffset -= pBlock->info.rows;
|
||||
blockDataCleanup(pBlock);
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
} else if (pLimitInfo->remainOffset < pBlock->info.rows && pLimitInfo->remainOffset > 0) {
|
||||
blockDataTrimFirstNRows(pBlock, pLimitInfo->remainOffset);
|
||||
pLimitInfo->remainOffset = 0;
|
||||
}
|
||||
|
||||
// check for the limitation in each group
|
||||
if (pLimitInfo->limit.limit >= 0 && pLimitInfo->numOfOutputRows + pBlock->info.rows >= pLimitInfo->limit.limit) {
|
||||
int32_t keepRows = (int32_t)(pLimitInfo->limit.limit - pLimitInfo->numOfOutputRows);
|
||||
blockDataKeepFirstNRows(pBlock, keepRows);
|
||||
if (pLimitInfo->slimit.limit > 0 && pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups) {
|
||||
pOperator->status = OP_EXEC_DONE;
|
||||
}
|
||||
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
}
|
||||
|
||||
// todo optimize performance
|
||||
// If there are slimit/soffset value exists, multi-round result can not be packed into one group, since the
|
||||
// they may not belong to the same group the limit/offset value is not valid in this case.
|
||||
if ((!holdDataInBuf) || (pBlock->info.rows >= pOperator->resultInfo.threshold) || pLimitInfo->slimit.offset != -1 ||
|
||||
pLimitInfo->slimit.limit != -1) {
|
||||
return PROJECT_RETRIEVE_DONE;
|
||||
} else { // not full enough, continue to accumulate the output data in the buffer.
|
||||
return PROJECT_RETRIEVE_CONTINUE;
|
||||
}
|
||||
}
|
||||
|
||||
static void doApplyScalarCalculation(SOperatorInfo* pOperator, SSDataBlock* pBlock, int32_t order, int32_t scanFlag);
|
||||
static void doHandleRemainBlockForNewGroupImpl(SOperatorInfo* pOperator, SFillOperatorInfo* pInfo,
|
||||
SResultInfo* pResultInfo, SExecTaskInfo* pTaskInfo) {
|
||||
|
@@ -2096,6 +1864,8 @@ void destroyExprInfo(SExprInfo* pExpr, int32_t numOfExprs) {
    for (int32_t j = 0; j < pExprInfo->base.numOfParams; ++j) {
      if (pExprInfo->base.pParam[j].type == FUNC_PARAM_TYPE_COLUMN) {
        taosMemoryFreeClear(pExprInfo->base.pParam[j].pCol);
      } else if (pExprInfo->base.pParam[j].type == FUNC_PARAM_TYPE_VALUE) {
        taosVariantDestroy(&pExprInfo->base.pParam[j].param);
      }
    }

@@ -27,6 +27,36 @@
#include "thash.h"
#include "ttypes.h"

typedef struct SGroupbyOperatorInfo {
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;
  SArray*        pGroupCols;     // group by columns, SArray<SColumn>
  SArray*        pGroupColVals;  // current group column values, SArray<SGroupKeys>
  bool           isInit;         // denote if current val is initialized or not
  char*          keyBuf;         // group by keys for hash
  int32_t        groupKeyLen;    // total group by column width
  SGroupResInfo  groupResInfo;
  SExprSupp      scalarSup;
} SGroupbyOperatorInfo;

// The sort in partition may be needed later.
typedef struct SPartitionOperatorInfo {
  SOptrBasicInfo binfo;
  SArray*        pGroupCols;
  SArray*        pGroupColVals;  // current group column values, SArray<SGroupKeys>
  char*          keyBuf;         // group by keys for hash
  int32_t        groupKeyLen;    // total group by column width
  SHashObj*      pGroupSet;      // quick locate the window object for each result

  SDiskbasedBuf* pBuf;               // query result buffer based on blocked-wised disk file
  int32_t        rowCapacity;        // maximum number of rows for each buffer page
  int32_t*       columnOffset;       // start position for each column data
  SArray*        sortedGroupArray;   // SDataGroupInfo sorted by group id
  int32_t        groupIndex;         // group index
  int32_t        pageIndex;          // page index of current group
  SExprSupp      scalarSup;
} SPartitionOperatorInfo;

static void*    getCurrentDataGroupInfo(const SPartitionOperatorInfo* pInfo, SDataGroupInfo** pGroupInfo, int32_t len);
static int32_t* setupColumnOffset(const SSDataBlock* pBlock, int32_t rowCapacity);
static int32_t  setGroupResultOutputBuf(SOperatorInfo* pOperator, SOptrBasicInfo* binfo, int32_t numOfCols, char* pData,

@@ -24,6 +24,21 @@
#include "tmsg.h"
#include "ttypes.h"

typedef struct SJoinOperatorInfo {
  SSDataBlock* pRes;
  int32_t      joinType;
  int32_t      inputOrder;

  SSDataBlock* pLeft;
  int32_t      leftPos;
  SColumnInfo  leftCol;

  SSDataBlock* pRight;
  int32_t      rightPos;
  SColumnInfo  rightCol;
  SNode*       pCondAfterMerge;
} SJoinOperatorInfo;

static void         setJoinColumnInfo(SColumnInfo* pColumn, const SColumnNode* pColumnNode);
static SSDataBlock* doMergeJoin(struct SOperatorInfo* pOperator);
static void         destroyMergeJoinOperator(void* param);

@@ -17,6 +17,24 @@
#include "executorimpl.h"
#include "functionMgt.h"

typedef struct SProjectOperatorInfo {
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;
  SArray*        pPseudoColInfo;
  SLimitInfo     limitInfo;
  bool           mergeDataBlocks;
  SSDataBlock*   pFinalRes;
} SProjectOperatorInfo;

typedef struct SIndefOperatorInfo {
  SOptrBasicInfo binfo;
  SAggSupporter  aggSup;
  SArray*        pPseudoColInfo;
  SExprSupp      scalarSup;
  uint64_t       groupId;
  SSDataBlock*   pNextGroupRes;
} SIndefOperatorInfo;

static SSDataBlock* doGenerateSourceData(SOperatorInfo* pOperator);
static SSDataBlock* doProjectOperation(SOperatorInfo* pOperator);
static SSDataBlock* doApplyIndefinitFunction(SOperatorInfo* pOperator);

File diff suppressed because it is too large
|
@ -17,6 +17,18 @@
|
|||
#include "executorimpl.h"
|
||||
#include "tdatablock.h"
|
||||
|
||||
typedef struct SSortOperatorInfo {
|
||||
SOptrBasicInfo binfo;
|
||||
uint32_t sortBufSize; // max buffer size for in-memory sort
|
||||
SArray* pSortInfo;
|
||||
SSortHandle* pSortHandle;
|
||||
SColMatchInfo matchInfo;
|
||||
int32_t bufPageSize;
|
||||
int64_t startTs; // sort start time
|
||||
uint64_t sortElapsed; // sort elapsed time, time to flush to disk not included.
|
||||
SLimitInfo limitInfo;
|
||||
} SSortOperatorInfo;
|
||||
|
||||
static SSDataBlock* doSort(SOperatorInfo* pOperator);
|
||||
static int32_t doOpenSortOperator(SOperatorInfo* pOperator);
|
||||
static int32_t getExplainExecInfo(SOperatorInfo* pOptr, void** pOptrExplain, uint32_t* len);
|
||||
|
@ -176,10 +188,10 @@ int32_t doOpenSortOperator(SOperatorInfo* pOperator) {
|
|||
|
||||
SSortSource* ps = taosMemoryCalloc(1, sizeof(SSortSource));
|
||||
ps->param = pOperator->pDownstream[0];
|
||||
ps->onlyRef = true;
|
||||
tsortAddSource(pInfo->pSortHandle, ps);
|
||||
|
||||
int32_t code = tsortOpen(pInfo->pSortHandle);
|
||||
taosMemoryFreeClear(ps);
|
||||
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
T_LONG_JMP(pTaskInfo->env, terrno);
|
||||
|
@ -377,10 +389,10 @@ int32_t beginSortGroup(SOperatorInfo* pOperator) {
|
|||
param->childOpInfo = pOperator->pDownstream[0];
|
||||
param->grpSortOpInfo = pInfo;
|
||||
ps->param = param;
|
||||
ps->onlyRef = false;
|
||||
tsortAddSource(pInfo->pCurrSortHandle, ps);
|
||||
|
||||
int32_t code = tsortOpen(pInfo->pCurrSortHandle);
|
||||
taosMemoryFreeClear(ps);
|
||||
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
T_LONG_JMP(pTaskInfo->env, terrno);
|
||||
|
@ -471,6 +483,9 @@ void destroyGroupSortOperatorInfo(void* param) {
|
|||
taosArrayDestroy(pInfo->pSortInfo);
|
||||
taosArrayDestroy(pInfo->matchInfo.pList);
|
||||
|
||||
tsortDestroySortHandle(pInfo->pCurrSortHandle);
|
||||
pInfo->pCurrSortHandle = NULL;
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
||||
|
@ -563,6 +578,7 @@ int32_t doOpenMultiwayMergeOperator(SOperatorInfo* pOperator) {
|
|||
for (int32_t i = 0; i < pOperator->numOfDownstream; ++i) {
|
||||
SSortSource* ps = taosMemoryCalloc(1, sizeof(SSortSource));
|
||||
ps->param = pOperator->pDownstream[i];
|
||||
ps->onlyRef = true;
|
||||
tsortAddSource(pInfo->pSortHandle, ps);
|
||||
}
|
||||
|
||||
|
|
File diff suppressed because it is too large
|
@ -709,6 +709,7 @@ void* destroyStreamFillSupporter(SStreamFillSupporter* pFillSup) {
|
|||
pFillSup->pResMap = NULL;
|
||||
releaseOutputBuf(NULL, NULL, (SResultRow*)pFillSup->cur.pRowVal);
|
||||
pFillSup->cur.pRowVal = NULL;
|
||||
cleanupExprSupp(&pFillSup->notFillExprSup);
|
||||
|
||||
taosMemoryFree(pFillSup);
|
||||
return NULL;
|
||||
|
@ -1417,25 +1418,13 @@ static void doApplyStreamScalarCalculation(SOperatorInfo* pOperator, SSDataBlock
|
|||
blockDataEnsureCapacity(pDstBlock, pSrcBlock->info.rows);
|
||||
setInputDataBlock(pSup, pSrcBlock, TSDB_ORDER_ASC, MAIN_SCAN, false);
|
||||
projectApplyFunctions(pSup->pExprInfo, pDstBlock, pSrcBlock, pSup->pCtx, pSup->numOfExprs, NULL);
|
||||
|
||||
pDstBlock->info.rows = 0;
|
||||
pSup = &pInfo->pFillSup->notFillExprSup;
|
||||
setInputDataBlock(pSup, pSrcBlock, TSDB_ORDER_ASC, MAIN_SCAN, false);
|
||||
projectApplyFunctions(pSup->pExprInfo, pDstBlock, pSrcBlock, pSup->pCtx, pSup->numOfExprs, NULL);
|
||||
pDstBlock->info.groupId = pSrcBlock->info.groupId;
|
||||
|
||||
SColumnInfoData* pDst = taosArrayGet(pDstBlock->pDataBlock, pInfo->primaryTsCol);
|
||||
SColumnInfoData* pSrc = taosArrayGet(pSrcBlock->pDataBlock, pInfo->primarySrcSlotId);
|
||||
colDataAssign(pDst, pSrc, pDstBlock->info.rows, &pDstBlock->info);
|
||||
|
||||
int32_t numOfNotFill = pInfo->pFillSup->numOfAllCols - pInfo->pFillSup->numOfFillCols;
|
||||
for (int32_t i = 0; i < numOfNotFill; ++i) {
|
||||
SFillColInfo* pCol = &pInfo->pFillSup->pAllColInfo[i + pInfo->pFillSup->numOfFillCols];
|
||||
ASSERT(pCol->notFillCol);
|
||||
|
||||
SExprInfo* pExpr = pCol->pExpr;
|
||||
int32_t srcSlotId = pExpr->base.pParam[0].pCol->slotId;
|
||||
int32_t dstSlotId = pExpr->base.resSchema.slotId;
|
||||
|
||||
SColumnInfoData* pDst1 = taosArrayGet(pDstBlock->pDataBlock, dstSlotId);
|
||||
SColumnInfoData* pSrc1 = taosArrayGet(pSrcBlock->pDataBlock, srcSlotId);
|
||||
colDataAssign(pDst1, pSrc1, pDstBlock->info.rows, &pDstBlock->info);
|
||||
}
|
||||
blockDataUpdateTsWindow(pDstBlock, pInfo->primaryTsCol);
|
||||
}
|
||||
|
||||
|
@ -1577,6 +1566,14 @@ static SStreamFillSupporter* initStreamFillSup(SStreamFillPhysiNode* pPhyFillNod
|
|||
destroyStreamFillSupporter(pFillSup);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
SExprInfo* noFillExpr = createExprInfo(pPhyFillNode->pNotFillExprs, NULL, &numOfNotFillCols);
|
||||
code = initExprSupp(&pFillSup->notFillExprSup, noFillExpr, numOfNotFillCols);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
destroyStreamFillSupporter(pFillSup);
|
||||
return NULL;
|
||||
}
|
||||
|
||||
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
|
||||
pFillSup->pResMap = tSimpleHashInit(16, hashFn);
|
||||
pFillSup->hasDelete = false;
|
||||
|
|
|
@ -1618,6 +1618,7 @@ void destroyStreamFinalIntervalOperatorInfo(void* param) {
|
|||
nodesDestroyNode((SNode*)pInfo->pPhyNode);
|
||||
colDataDestroy(&pInfo->twAggSup.timeWindowData);
|
||||
cleanupGroupResInfo(&pInfo->groupResInfo);
|
||||
cleanupExprSupp(&pInfo->scalarSupp);
|
||||
|
||||
taosMemoryFreeClear(param);
|
||||
}
|
||||
|
@ -2703,6 +2704,7 @@ static void rebuildIntervalWindow(SOperatorInfo* pOperator, SArray* pWinArray, S
|
|||
pChildSup->rowEntryInfoOffset, &pChInfo->aggSup);
|
||||
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &parentWin, true);
|
||||
compactFunctions(pSup->pCtx, pChildSup->pCtx, numOfOutput, pTaskInfo, &pInfo->twAggSup.timeWindowData);
|
||||
releaseOutputBuf(pChInfo->pState, pWinRes, pChResult);
|
||||
}
|
||||
if (num > 0 && pUpdatedMap) {
|
||||
saveWinResultInfo(pCurResult->win.skey, pWinRes->groupId, pUpdatedMap);
|
||||
|
@ -5362,15 +5364,6 @@ SOperatorInfo* createStreamIntervalOperatorInfo(SOperatorInfo* downstream, SPhys
|
|||
pInfo->primaryTsIndex = ((SColumnNode*)pIntervalPhyNode->window.pTspk)->slotId;
|
||||
initResultSizeInfo(&pOperator->resultInfo, 4096);
|
||||
|
||||
if (pIntervalPhyNode->window.pExprs != NULL) {
|
||||
int32_t numOfScalar = 0;
|
||||
SExprInfo* pScalarExprInfo = createExprInfo(pIntervalPhyNode->window.pExprs, NULL, &numOfScalar);
|
||||
code = initExprSupp(&pInfo->scalarSupp, pScalarExprInfo, numOfScalar);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
goto _error;
|
||||
}
|
||||
}
|
||||
|
||||
size_t keyBufSize = sizeof(int64_t) + sizeof(int64_t) + POINTER_BYTES;
|
||||
code = initAggInfo(pSup, &pInfo->aggSup, pExprInfo, numOfCols, keyBufSize, pTaskInfo->id.str);
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
|
|
|
@ -110,6 +110,22 @@ static int32_t sortComparCleanup(SMsortComparParam* cmpParam) {
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
void tsortClearOrderdSource(SArray *pOrderedSource) {
|
||||
for (size_t i = 0; i < taosArrayGetSize(pOrderedSource); i++) {
|
||||
SSortSource** pSource = taosArrayGet(pOrderedSource, i);
|
||||
if (NULL == *pSource) {
|
||||
continue;
|
||||
}
|
||||
|
||||
if ((*pSource)->param && !(*pSource)->onlyRef) {
|
||||
taosMemoryFree((*pSource)->param);
|
||||
}
|
||||
taosMemoryFreeClear(*pSource);
|
||||
}
|
||||
|
||||
taosArrayClear(pOrderedSource);
|
||||
}
|
||||
|
||||
void tsortDestroySortHandle(SSortHandle* pSortHandle) {
|
||||
if (pSortHandle == NULL) {
|
||||
return;
|
||||
|
@ -123,10 +139,8 @@ void tsortDestroySortHandle(SSortHandle* pSortHandle) {
|
|||
destroyDiskbasedBuf(pSortHandle->pBuf);
|
||||
taosMemoryFreeClear(pSortHandle->idStr);
|
||||
blockDataDestroy(pSortHandle->pDataBlock);
|
||||
for (size_t i = 0; i < taosArrayGetSize(pSortHandle->pOrderedSource); i++) {
|
||||
SSortSource** pSource = taosArrayGet(pSortHandle->pOrderedSource, i);
|
||||
taosMemoryFreeClear(*pSource);
|
||||
}
|
||||
|
||||
tsortClearOrderdSource(pSortHandle->pOrderedSource);
|
||||
taosArrayDestroy(pSortHandle->pOrderedSource);
|
||||
taosMemoryFreeClear(pSortHandle);
|
||||
}
|
||||
|
@ -561,7 +575,7 @@ static int32_t doInternalMergeSort(SSortHandle* pHandle) {
|
|||
}
|
||||
}
|
||||
|
||||
taosArrayClear(pHandle->pOrderedSource);
|
||||
tsortClearOrderdSource(pHandle->pOrderedSource);
|
||||
taosArrayAddAll(pHandle->pOrderedSource, pResList);
|
||||
taosArrayDestroy(pResList);
|
||||
|
||||
|
@ -598,8 +612,11 @@ static int32_t createInitialSources(SSortHandle* pHandle) {
|
|||
size_t sortBufSize = pHandle->numOfPages * pHandle->pageSize;
|
||||
|
||||
if (pHandle->type == SORT_SINGLESOURCE_SORT) {
|
||||
SSortSource* source = taosArrayGetP(pHandle->pOrderedSource, 0);
|
||||
taosArrayClear(pHandle->pOrderedSource);
|
||||
SSortSource** pSource = taosArrayGet(pHandle->pOrderedSource, 0);
|
||||
SSortSource* source = *pSource;
|
||||
*pSource = NULL;
|
||||
|
||||
tsortClearOrderdSource(pHandle->pOrderedSource);
|
||||
|
||||
while (1) {
|
||||
SSDataBlock* pBlock = pHandle->fetchfp(source->param);
|
||||
|
@ -623,6 +640,10 @@ static int32_t createInitialSources(SSortHandle* pHandle) {
|
|||
|
||||
int32_t code = blockDataMerge(pHandle->pDataBlock, pBlock);
|
||||
if (code != 0) {
|
||||
if (source->param && !source->onlyRef) {
|
||||
taosMemoryFree(source->param);
|
||||
}
|
||||
taosMemoryFree(source);
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@ -632,6 +653,10 @@ static int32_t createInitialSources(SSortHandle* pHandle) {
|
|||
int64_t p = taosGetTimestampUs();
|
||||
code = blockDataSort(pHandle->pDataBlock, pHandle->pSortInfo);
|
||||
if (code != 0) {
|
||||
if (source->param && !source->onlyRef) {
|
||||
taosMemoryFree(source->param);
|
||||
}
|
||||
taosMemoryFree(source);
|
||||
return code;
|
||||
}
|
||||
|
||||
|
@ -642,6 +667,11 @@ static int32_t createInitialSources(SSortHandle* pHandle) {
|
|||
}
|
||||
}
|
||||
|
||||
if (source->param && !source->onlyRef) {
|
||||
taosMemoryFree(source->param);
|
||||
}
|
||||
taosMemoryFree(source);
|
||||
|
||||
if (pHandle->pDataBlock != NULL && pHandle->pDataBlock->info.rows > 0) {
|
||||
size_t size = blockDataGetSize(pHandle->pDataBlock);
|
||||
|
||||
|
|
|
@@ -6568,7 +6568,9 @@ int32_t cachedLastRowFunction(SqlFunctionCtx* pCtx) {
  for (int32_t i = pInput->numOfRows + pInput->startRowIndex - 1; i >= pInput->startRowIndex; --i) {
    numOfElems++;

    char* data = colDataGetData(pInputCol, i);
    bool  isNull = colDataIsNull(pInputCol, pInput->numOfRows, i, NULL);
    char* data = isNull ? NULL : colDataGetData(pInputCol, i);

    TSKEY cts = getRowPTs(pInput->pPTS, i);
    if (pResInfo->numOfRes == 0 || pInfo->ts < cts) {
      doSaveLastrow(pCtx, data, i, cts, pInfo);

@ -172,8 +172,8 @@ static int32_t parseDuplicateUsingClause(SInsertParseContext* pCxt, SVnodeModifO
|
|||
}
|
||||
|
||||
// pStmt->pSql -> field1_name, ...)
|
||||
static int32_t parseBoundColumns(SInsertParseContext* pCxt, const char** pSql, SParsedDataColInfo* pColList,
|
||||
SSchema* pSchema) {
|
||||
static int32_t parseBoundColumns(SInsertParseContext* pCxt, const char** pSql, bool isTags,
|
||||
SParsedDataColInfo* pColList, SSchema* pSchema) {
|
||||
col_id_t nCols = pColList->numOfCols;
|
||||
|
||||
pColList->numOfBound = 0;
|
||||
|
@ -227,6 +227,10 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, const char** pSql, S
|
|||
}
|
||||
}
|
||||
|
||||
if (!isTags && pColList->cols[0].valStat == VAL_STAT_NONE) {
|
||||
return buildInvalidOperationMsg(&pCxt->msg, "primary timestamp column can not be null");
|
||||
}
|
||||
|
||||
pColList->orderStatus = isOrdered ? ORDER_STATUS_ORDERED : ORDER_STATUS_DISORDERED;
|
||||
|
||||
if (!isOrdered) {
|
||||
|
@ -525,7 +529,7 @@ static int32_t parseBoundTagsClause(SInsertParseContext* pCxt, SVnodeModifOpStmt
|
|||
}
|
||||
|
||||
pStmt->pSql += index;
|
||||
return parseBoundColumns(pCxt, &pStmt->pSql, &pCxt->tags, pTagsSchema);
|
||||
return parseBoundColumns(pCxt, &pStmt->pSql, true, &pCxt->tags, pTagsSchema);
|
||||
}
|
||||
|
||||
static int32_t parseTagValue(SInsertParseContext* pCxt, SVnodeModifOpStmt* pStmt, SSchema* pTagSchema, SToken* pToken,
|
||||
|
@ -792,6 +796,8 @@ static int32_t getTableMeta(SInsertParseContext* pCxt, SName* pTbName, bool isSt
|
|||
*pMissCache = true;
|
||||
} else if (isStb && TSDB_SUPER_TABLE != (*pTableMeta)->tableType) {
|
||||
code = buildInvalidOperationMsg(&pCxt->msg, "create table only from super table is allowed");
|
||||
} else if (!isStb && TSDB_SUPER_TABLE == (*pTableMeta)->tableType) {
|
||||
code = buildInvalidOperationMsg(&pCxt->msg, "insert data into super table is not supported");
|
||||
}
|
||||
}
|
||||
return code;
|
||||
|
@ -935,11 +941,12 @@ static int32_t parseBoundColumnsClause(SInsertParseContext* pCxt, SVnodeModifOpS
|
|||
return buildSyntaxErrMsg(&pCxt->msg, "keyword VALUES or FILE is expected", token.z);
|
||||
}
|
||||
// pStmt->pSql -> field1_name, ...)
|
||||
return parseBoundColumns(pCxt, &pStmt->pSql, &pDataBuf->boundColumnInfo, getTableColumnSchema(pStmt->pTableMeta));
|
||||
return parseBoundColumns(pCxt, &pStmt->pSql, false, &pDataBuf->boundColumnInfo,
|
||||
getTableColumnSchema(pStmt->pTableMeta));
|
||||
}
|
||||
|
||||
if (NULL != pStmt->pBoundCols) {
|
||||
return parseBoundColumns(pCxt, &pStmt->pBoundCols, &pDataBuf->boundColumnInfo,
|
||||
return parseBoundColumns(pCxt, &pStmt->pBoundCols, false, &pDataBuf->boundColumnInfo,
|
||||
getTableColumnSchema(pStmt->pTableMeta));
|
||||
}
|
||||
|
||||
|
@ -1571,16 +1578,16 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt, SVnodeModifOpStmt* pSt
|
|||
|
||||
static void destroySubTableHashElem(void* p) { taosMemoryFree(*(STableMeta**)p); }
|
||||
|
||||
static int32_t createVnodeModifOpStmt(SParseContext* pCxt, bool reentry, SNode** pOutput) {
|
||||
static int32_t createVnodeModifOpStmt(SInsertParseContext* pCxt, bool reentry, SNode** pOutput) {
|
||||
SVnodeModifOpStmt* pStmt = (SVnodeModifOpStmt*)nodesMakeNode(QUERY_NODE_VNODE_MODIF_STMT);
|
||||
if (NULL == pStmt) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
|
||||
if (pCxt->pStmtCb) {
|
||||
if (pCxt->pComCxt->pStmtCb) {
|
||||
TSDB_QUERY_SET_TYPE(pStmt->insertType, TSDB_QUERY_TYPE_STMT_INSERT);
|
||||
}
|
||||
pStmt->pSql = pCxt->pSql;
|
||||
pStmt->pSql = pCxt->pComCxt->pSql;
|
||||
pStmt->freeHashFunc = insDestroyBlockHashmap;
|
||||
pStmt->freeArrayFunc = insDestroyBlockArrayList;
|
||||
|
||||
|
@ -1604,7 +1611,7 @@ static int32_t createVnodeModifOpStmt(SParseContext* pCxt, bool reentry, SNode**
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static int32_t createInsertQuery(SParseContext* pCxt, SQuery** pOutput) {
|
||||
static int32_t createInsertQuery(SInsertParseContext* pCxt, SQuery** pOutput) {
|
||||
SQuery* pQuery = (SQuery*)nodesMakeNode(QUERY_NODE_QUERY);
|
||||
if (NULL == pQuery) {
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
|
@ -1667,11 +1674,15 @@ static int32_t getTableVgroupFromMetaData(const SArray* pTables, SVnodeModifOpSt
|
|||
sizeof(SVgroupInfo));
|
||||
}
|
||||
|
||||
static int32_t getTableSchemaFromMetaData(const SMetaData* pMetaData, SVnodeModifOpStmt* pStmt, bool isStb) {
|
||||
static int32_t getTableSchemaFromMetaData(SInsertParseContext* pCxt, const SMetaData* pMetaData,
|
||||
SVnodeModifOpStmt* pStmt, bool isStb) {
|
||||
int32_t code = checkAuthFromMetaData(pMetaData->pUser);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = getTableMetaFromMetaData(pMetaData->pTableMeta, &pStmt->pTableMeta);
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code && !isStb && TSDB_SUPER_TABLE == pStmt->pTableMeta->tableType) {
|
||||
code = buildInvalidOperationMsg(&pCxt->msg, "insert data into super table is not supported");
|
||||
}
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = getTableVgroupFromMetaData(pMetaData->pTableHash, pStmt, isStb);
|
||||
}
|
||||
|
@ -1696,24 +1707,25 @@ static void clearCatalogReq(SCatalogReq* pCatalogReq) {
|
|||
pCatalogReq->pUser = NULL;
|
||||
}
|
||||
|
||||
static int32_t setVnodeModifOpStmt(SParseContext* pCxt, SCatalogReq* pCatalogReq, const SMetaData* pMetaData,
|
||||
static int32_t setVnodeModifOpStmt(SInsertParseContext* pCxt, SCatalogReq* pCatalogReq, const SMetaData* pMetaData,
|
||||
SVnodeModifOpStmt* pStmt) {
|
||||
clearCatalogReq(pCatalogReq);
|
||||
|
||||
if (pStmt->usingTableProcessing) {
|
||||
return getTableSchemaFromMetaData(pMetaData, pStmt, true);
|
||||
return getTableSchemaFromMetaData(pCxt, pMetaData, pStmt, true);
|
||||
}
|
||||
return getTableSchemaFromMetaData(pMetaData, pStmt, false);
|
||||
return getTableSchemaFromMetaData(pCxt, pMetaData, pStmt, false);
|
||||
}
|
||||
|
||||
static int32_t resetVnodeModifOpStmt(SParseContext* pCxt, SQuery* pQuery) {
|
||||
static int32_t resetVnodeModifOpStmt(SInsertParseContext* pCxt, SQuery* pQuery) {
|
||||
nodesDestroyNode(pQuery->pRoot);
|
||||
|
||||
int32_t code = createVnodeModifOpStmt(pCxt, true, &pQuery->pRoot);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
SVnodeModifOpStmt* pStmt = (SVnodeModifOpStmt*)pQuery->pRoot;
|
||||
|
||||
(*pCxt->pStmtCb->getExecInfoFn)(pCxt->pStmtCb->pStmt, &pStmt->pVgroupsHashObj, &pStmt->pTableBlockHashObj);
|
||||
(*pCxt->pComCxt->pStmtCb->getExecInfoFn)(pCxt->pComCxt->pStmtCb->pStmt, &pStmt->pVgroupsHashObj,
|
||||
&pStmt->pTableBlockHashObj);
|
||||
if (NULL == pStmt->pVgroupsHashObj) {
|
||||
pStmt->pVgroupsHashObj = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), true, HASH_NO_LOCK);
|
||||
}
|
||||
|
@ -1729,13 +1741,13 @@ static int32_t resetVnodeModifOpStmt(SParseContext* pCxt, SQuery* pQuery) {
|
|||
return code;
|
||||
}
|
||||
|
||||
static int32_t initInsertQuery(SParseContext* pCxt, SCatalogReq* pCatalogReq, const SMetaData* pMetaData,
|
||||
static int32_t initInsertQuery(SInsertParseContext* pCxt, SCatalogReq* pCatalogReq, const SMetaData* pMetaData,
|
||||
SQuery** pQuery) {
|
||||
if (NULL == *pQuery) {
|
||||
return createInsertQuery(pCxt, pQuery);
|
||||
}
|
||||
|
||||
if (NULL != pCxt->pStmtCb) {
|
||||
if (NULL != pCxt->pComCxt->pStmtCb) {
|
||||
return resetVnodeModifOpStmt(pCxt, *pQuery);
|
||||
}
|
||||
|
||||
|
@ -1896,7 +1908,7 @@ int32_t parseInsertSql(SParseContext* pCxt, SQuery** pQuery, SCatalogReq* pCatal
|
|||
.usingDuplicateTable = false,
|
||||
};
|
||||
|
||||
int32_t code = initInsertQuery(pCxt, pCatalogReq, pMetaData, pQuery);
|
||||
int32_t code = initInsertQuery(&context, pCatalogReq, pMetaData, pQuery);
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = parseInsertSqlImpl(&context, (SVnodeModifOpStmt*)(*pQuery)->pRoot);
|
||||
}
|
||||
|
|
|
@@ -67,6 +67,9 @@ int32_t syncNodeOnAppendEntriesReply(SSyncNode* ths, const SRpcMsg* pRpcMsg) {
    if (pMsg->matchIndex > oldMatchIndex) {
      syncIndexMgrSetIndex(ths->pMatchIndex, &(pMsg->srcId), pMsg->matchIndex);
      syncMaybeAdvanceCommitIndex(ths);

      // maybe update minMatchIndex
      ths->minMatchIndex = syncMinMatchIndex(ths);
    }
    syncIndexMgrSetIndex(ths->pNextIndex, &(pMsg->srcId), pMsg->matchIndex + 1);

@ -243,6 +243,18 @@ int32_t syncBeginSnapshot(int64_t rid, int64_t lastApplyIndex) {
|
|||
goto _DEL_WAL;
|
||||
|
||||
} else {
|
||||
lastApplyIndex -= SYNC_VNODE_LOG_RETENTION;
|
||||
|
||||
SyncIndex beginIndex = pSyncNode->pLogStore->syncLogBeginIndex(pSyncNode->pLogStore);
|
||||
SyncIndex endIndex = pSyncNode->pLogStore->syncLogEndIndex(pSyncNode->pLogStore);
|
||||
bool isEmpty = pSyncNode->pLogStore->syncLogIsEmpty(pSyncNode->pLogStore);
|
||||
|
||||
if (isEmpty || !(lastApplyIndex >= beginIndex && lastApplyIndex <= endIndex)) {
|
||||
sNTrace(pSyncNode, "new-snapshot-index:%" PRId64 ", empty:%d, do not delete wal", lastApplyIndex, isEmpty);
|
||||
syncNodeRelease(pSyncNode);
|
||||
return 0;
|
||||
}
|
||||
|
||||
// vnode
|
||||
if (pSyncNode->replicaNum > 1) {
|
||||
// multi replicas
|
||||
|
@ -300,26 +312,31 @@ int32_t syncBeginSnapshot(int64_t rid, int64_t lastApplyIndex) {
|
|||
_DEL_WAL:
|
||||
|
||||
do {
|
||||
SyncIndex snapshottingIndex = atomic_load_64(&pSyncNode->snapshottingIndex);
|
||||
SSyncLogStoreData* pData = pSyncNode->pLogStore->data;
|
||||
SyncIndex snapshotVer = walGetSnapshotVer(pData->pWal);
|
||||
SyncIndex walCommitVer = walGetCommittedVer(pData->pWal);
|
||||
SyncIndex wallastVer = walGetLastVer(pData->pWal);
|
||||
if (lastApplyIndex <= walCommitVer) {
|
||||
SyncIndex snapshottingIndex = atomic_load_64(&pSyncNode->snapshottingIndex);
|
||||
|
||||
if (snapshottingIndex == SYNC_INDEX_INVALID) {
|
||||
atomic_store_64(&pSyncNode->snapshottingIndex, lastApplyIndex);
|
||||
pSyncNode->snapshottingTime = taosGetTimestampMs();
|
||||
if (snapshottingIndex == SYNC_INDEX_INVALID) {
|
||||
atomic_store_64(&pSyncNode->snapshottingIndex, lastApplyIndex);
|
||||
pSyncNode->snapshottingTime = taosGetTimestampMs();
|
||||
|
||||
code = walBeginSnapshot(pData->pWal, lastApplyIndex);
|
||||
if (code == 0) {
|
||||
sNTrace(pSyncNode, "wal snapshot begin, index:%" PRId64 ", last apply index:%" PRId64,
|
||||
pSyncNode->snapshottingIndex, lastApplyIndex);
|
||||
} else {
|
||||
sNError(pSyncNode, "wal snapshot begin error since:%s, index:%" PRId64 ", last apply index:%" PRId64,
|
||||
terrstr(terrno), pSyncNode->snapshottingIndex, lastApplyIndex);
|
||||
atomic_store_64(&pSyncNode->snapshottingIndex, SYNC_INDEX_INVALID);
|
||||
}
|
||||
|
||||
SSyncLogStoreData* pData = pSyncNode->pLogStore->data;
|
||||
code = walBeginSnapshot(pData->pWal, lastApplyIndex);
|
||||
if (code == 0) {
|
||||
sNTrace(pSyncNode, "wal snapshot begin, index:%" PRId64 ", last apply index:%" PRId64,
|
||||
pSyncNode->snapshottingIndex, lastApplyIndex);
|
||||
} else {
|
||||
sNError(pSyncNode, "wal snapshot begin error since:%s, index:%" PRId64 ", last apply index:%" PRId64,
|
||||
terrstr(terrno), pSyncNode->snapshottingIndex, lastApplyIndex);
|
||||
atomic_store_64(&pSyncNode->snapshottingIndex, SYNC_INDEX_INVALID);
|
||||
sNTrace(pSyncNode, "snapshotting for %" PRId64 ", do not delete wal for new-snapshot-index:%" PRId64,
|
||||
snapshottingIndex, lastApplyIndex);
|
||||
}
|
||||
|
||||
} else {
|
||||
sNTrace(pSyncNode, "snapshotting for %" PRId64 ", do not delete wal for new-snapshot-index:%" PRId64,
|
||||
snapshottingIndex, lastApplyIndex);
|
||||
}
|
||||
} while (0);
|
||||
|
||||
|
|
|
@@ -375,7 +375,17 @@ static int32_t raftLogGetLastEntry(SSyncLogStore* pLogStore, SSyncRaftEntry** pp
int32_t raftLogUpdateCommitIndex(SSyncLogStore* pLogStore, SyncIndex index) {
  SSyncLogStoreData* pData = pLogStore->data;
  SWal* pWal = pData->pWal;
  // ASSERT(walCommit(pWal, index) == 0);

  // need not update
  SyncIndex snapshotVer = walGetSnapshotVer(pWal);
  SyncIndex walCommitVer = walGetCommittedVer(pWal);
  SyncIndex wallastVer = walGetLastVer(pWal);

  if (index < snapshotVer || index > wallastVer) {
    // ignore
    return 0;
  }

  int32_t code = walCommit(pWal, index);
  if (code != 0) {
    int32_t err = terrno;

@ -62,18 +62,20 @@ static int32_t syncNodeTimerRoutine(SSyncNode* ths) {
|
|||
syncNodeCleanConfigIndex(ths);
|
||||
}
|
||||
|
||||
// end timeout wal snapshot
|
||||
int64_t timeNow = taosGetTimestampMs();
|
||||
if (timeNow - ths->snapshottingIndex > SYNC_DEL_WAL_MS &&
|
||||
atomic_load_64(&ths->snapshottingIndex) != SYNC_INDEX_INVALID) {
|
||||
SSyncLogStoreData* pData = ths->pLogStore->data;
|
||||
int32_t code = walEndSnapshot(pData->pWal);
|
||||
if (code != 0) {
|
||||
sNError(ths, "timer wal snapshot end error since:%s", terrstr());
|
||||
return -1;
|
||||
} else {
|
||||
sNTrace(ths, "wal snapshot end, index:%" PRId64, atomic_load_64(&ths->snapshottingIndex));
|
||||
atomic_store_64(&ths->snapshottingIndex, SYNC_INDEX_INVALID);
|
||||
if (atomic_load_64(&ths->snapshottingIndex) != SYNC_INDEX_INVALID) {
|
||||
// end timeout wal snapshot
|
||||
if (timeNow - ths->snapshottingTime > SYNC_DEL_WAL_MS &&
|
||||
atomic_load_64(&ths->snapshottingIndex) != SYNC_INDEX_INVALID) {
|
||||
SSyncLogStoreData* pData = ths->pLogStore->data;
|
||||
int32_t code = walEndSnapshot(pData->pWal);
|
||||
if (code != 0) {
|
||||
sNError(ths, "timer wal snapshot end error since:%s", terrstr());
|
||||
return -1;
|
||||
} else {
|
||||
sNTrace(ths, "wal snapshot end, index:%" PRId64, atomic_load_64(&ths->snapshottingIndex));
|
||||
atomic_store_64(&ths->snapshottingIndex, SYNC_INDEX_INVALID);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
|
|
@@ -239,11 +239,11 @@ void syncPrintNodeLog(const char* flags, ELogLevel level, int32_t dflag, SSyncNo
        "vgId:%d, sync %s "
        "%s"
        ", tm:%" PRIu64 ", cmt:%" PRId64 ", fst:%" PRId64 ", lst:%" PRId64 ", min:%" PRId64 ", snap:%" PRId64
        ", snap-tm:%" PRIu64 ", sby:%d, aq:%d, bch:%d, r-num:%d, lcfg:%" PRId64
        ", snap-tm:%" PRIu64 ", sby:%d, aq:%d, snaping:%" PRId64 ", r-num:%d, lcfg:%" PRId64
        ", chging:%d, rsto:%d, dquorum:%d, elt:%" PRId64 ", hb:%" PRId64 ", %s, %s",
        pNode->vgId, syncStr(pNode->state), eventLog, currentTerm, pNode->commitIndex, logBeginIndex,
        logLastIndex, pNode->minMatchIndex, snapshot.lastApplyIndex, snapshot.lastApplyTerm,
        pNode->pRaftCfg->isStandBy, aqItems, pNode->pRaftCfg->batchSize, pNode->replicaNum,
        pNode->pRaftCfg->isStandBy, aqItems, pNode->snapshottingIndex, pNode->replicaNum,
        pNode->pRaftCfg->lastConfigIndex, pNode->changing, pNode->restoreFinish, quorum,
        pNode->electTimerLogicClock, pNode->heartbeatTimerLogicClockUser, peerStr, cfgStr);
}

@ -791,6 +791,7 @@ void cliSend(SCliConn* pConn) {
|
|||
|
||||
int msgLen = transMsgLenFromCont(pMsg->contLen);
|
||||
STransMsgHead* pHead = transHeadFromCont(pMsg->pCont);
|
||||
|
||||
pHead->ahandle = pCtx != NULL ? (uint64_t)pCtx->ahandle : 0;
|
||||
pHead->noResp = REQUEST_NO_RESP(pMsg) ? 1 : 0;
|
||||
pHead->persist = REQUEST_PERSIS_HANDLE(pMsg) ? 1 : 0;
|
||||
|
@ -820,10 +821,15 @@ void cliSend(SCliConn* pConn) {
|
|||
uv_timer_start((uv_timer_t*)pConn->timer, cliReadTimeoutCb, TRANS_READ_TIMEOUT, 0);
|
||||
}
|
||||
|
||||
if (pTransInst->compressSize != -1 && pTransInst->compressSize < pMsg->contLen) {
|
||||
msgLen = transCompressMsg(pMsg->pCont, pMsg->contLen) + sizeof(STransMsgHead);
|
||||
pHead->msgLen = (int32_t)htonl((uint32_t)msgLen);
|
||||
if (pHead->comp == 0) {
|
||||
if (pTransInst->compressSize != -1 && pTransInst->compressSize < pMsg->contLen) {
|
||||
msgLen = transCompressMsg(pMsg->pCont, pMsg->contLen) + sizeof(STransMsgHead);
|
||||
pHead->msgLen = (int32_t)htonl((uint32_t)msgLen);
|
||||
}
|
||||
} else {
|
||||
msgLen = (int32_t)ntohl((uint32_t)(pHead->msgLen));
|
||||
}
|
||||
|
||||
tGDebug("%s conn %p %s is sent to %s, local info %s, len:%d", CONN_GET_INST_LABEL(pConn), pConn,
|
||||
TMSG_INFO(pHead->msgType), pConn->dst, pConn->src, msgLen);
|
||||
|
||||
|
|
|
@@ -398,7 +398,7 @@ static int uvPrepareSendData(SSvrMsg* smsg, uv_buf_t* wb) {
  pHead->magicNum = htonl(TRANS_MAGIC_NUM);

  // handle invalid drop_task resp, TD-20098
  if (pMsg->msgType == TDMT_SCH_DROP_TASK && pMsg->code == TSDB_CODE_VND_INVALID_VGROUP_ID) {
  if (pConn->inType == TDMT_SCH_DROP_TASK && pMsg->code == TSDB_CODE_VND_INVALID_VGROUP_ID) {
    transQueuePop(&pConn->srvMsgs);
    destroySmsg(smsg);
    return -1;

@ -324,23 +324,36 @@ int32_t walEndSnapshot(SWal *pWal) {
|
|||
// find files safe to delete
|
||||
SWalFileInfo *pInfo = taosArraySearch(pWal->fileInfoSet, &tmp, compareWalFileInfo, TD_LE);
|
||||
if (pInfo) {
|
||||
SWalFileInfo *pLastFileInfo = taosArrayGetLast(pWal->fileInfoSet);
|
||||
wDebug("vgId:%d, wal search found file info: first:%" PRId64 " last:%" PRId64, pWal->cfg.vgId, pInfo->firstVer,
|
||||
pInfo->lastVer);
|
||||
if (ver >= pInfo->lastVer) {
|
||||
pInfo--;
|
||||
pInfo++;
|
||||
wDebug("vgId:%d, wal remove advance one file: first:%" PRId64 " last:%" PRId64, pWal->cfg.vgId, pInfo->firstVer,
|
||||
pInfo->lastVer);
|
||||
}
|
||||
if (POINTER_DISTANCE(pInfo, pWal->fileInfoSet->pData) > 0) {
|
||||
wDebug("vgId:%d, wal end remove for %" PRId64, pWal->cfg.vgId, pInfo->firstVer);
|
||||
if (pInfo <= pLastFileInfo) {
|
||||
wDebug("vgId:%d, wal end remove for first:%" PRId64 " last:%" PRId64, pWal->cfg.vgId, pInfo->firstVer,
|
||||
pInfo->lastVer);
|
||||
} else {
|
||||
wDebug("vgId:%d, wal no remove", pWal->cfg.vgId);
|
||||
}
|
||||
// iterate files, until the searched result
|
||||
for (SWalFileInfo *iter = pWal->fileInfoSet->pData; iter < pInfo; iter++) {
|
||||
if ((pWal->cfg.retentionSize != -1 && newTotSize > pWal->cfg.retentionSize) ||
|
||||
(pWal->cfg.retentionPeriod != -1 && iter->closeTs + pWal->cfg.retentionPeriod > ts)) {
|
||||
wDebug("vgId:%d, wal check remove file %" PRId64 "(file size %" PRId64 " close ts %" PRId64
|
||||
"), new tot size %" PRId64,
|
||||
pWal->cfg.vgId, iter->firstVer, iter->fileSize, iter->closeTs, newTotSize);
|
||||
if (((pWal->cfg.retentionSize == 0) || (pWal->cfg.retentionSize != -1 && newTotSize > pWal->cfg.retentionSize)) ||
|
||||
((pWal->cfg.retentionPeriod == 0) ||
|
||||
(pWal->cfg.retentionPeriod != -1 && iter->closeTs + pWal->cfg.retentionPeriod > ts))) {
|
||||
// delete according to file size or close time
|
||||
wDebug("vgId:%d, check pass", pWal->cfg.vgId);
|
||||
deleteCnt++;
|
||||
newTotSize -= iter->fileSize;
|
||||
}
|
||||
wDebug("vgId:%d, check not pass", pWal->cfg.vgId);
|
||||
}
|
||||
wDebug("vgId:%d, wal should delete %d files", pWal->cfg.vgId, deleteCnt);
|
||||
int32_t actualDelete = 0;
|
||||
char fnameStr[WAL_FILE_LEN];
|
||||
// remove file
|
||||
|
|
|
@@ -107,7 +107,7 @@ static uint64_t allocatePositionInFile(SDiskbasedBuf* pBuf, size_t size) {

static void setPageNotInBuf(SPageInfo* pPageInfo) { pPageInfo->pData = NULL; }

static FORCE_INLINE size_t getAllocPageSize(int32_t pageSize) { return pageSize + POINTER_BYTES + 2; }
static FORCE_INLINE size_t getAllocPageSize(int32_t pageSize) { return pageSize + POINTER_BYTES + sizeof(SFilePage); }

/**
 * +--------------------------+-------------------+--------------+

@@ -216,7 +216,7 @@
,,y,script,./test.sh -f tsim/stream/drop_stream.sim
,,y,script,./test.sh -f tsim/stream/fillHistoryBasic1.sim
,,y,script,./test.sh -f tsim/stream/fillHistoryBasic2.sim
,,n,script,./test.sh -f tsim/stream/fillHistoryBasic3.sim
,,y,script,./test.sh -f tsim/stream/fillHistoryBasic3.sim
,,y,script,./test.sh -f tsim/stream/distributeInterval0.sim
,,y,script,./test.sh -f tsim/stream/distributeIntervalRetrive0.sim
,,y,script,./test.sh -f tsim/stream/distributeSession0.sim

@@ -227,7 +227,7 @@
,,y,script,./test.sh -f tsim/stream/triggerSession0.sim
,,y,script,./test.sh -f tsim/stream/partitionby.sim
,,y,script,./test.sh -f tsim/stream/partitionby1.sim
,,n,script,./test.sh -f tsim/stream/schedSnode.sim
,,y,script,./test.sh -f tsim/stream/schedSnode.sim
,,y,script,./test.sh -f tsim/stream/windowClose.sim
,,y,script,./test.sh -f tsim/stream/ignoreExpiredData.sim
,,y,script,./test.sh -f tsim/stream/sliding.sim

@ -431,40 +431,40 @@
|
|||
,,,system-test,python3 ./test.py -f 1-insert/time_range_wise.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/block_wise.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/create_retentions.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/table_param_ttl.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/mutil_stage.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/table_param_ttl.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/table_param_ttl.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/table_param_ttl.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/update_data_muti_rows.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/db_tb_name_check.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/database_pre_suf.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/InsertFuturets.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/show.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/abs.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/abs.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/abs.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/and_or_for_byte.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/and_or_for_byte.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/and_or_for_byte.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/apercentile.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/apercentile.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/apercentile.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arccos.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/arccos.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arccos.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arcsin.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/arcsin.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arcsin.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arctan.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/arctan.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arctan.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/avg.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/avg.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/between.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/between.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/between.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/bottom.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/bottom.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/bottom.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cast.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/cast.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cast.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ceil.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/ceil.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ceil.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/char_length.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/char_length.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/char_length.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/check_tsdb.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/check_tsdb.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/check_tsdb.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/concat.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/concat.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/concat_ws.py
|
||||
|
@@ -472,63 +472,63 @@
|
|||
,,,system-test,python3 ./test.py -f 2-query/concat_ws2.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/concat_ws2.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cos.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/cos.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cos.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/count_partition.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/count_partition.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/count_partition.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/count.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/count.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/count.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/countAlwaysReturnValue.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/countAlwaysReturnValue.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/countAlwaysReturnValue.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/db.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/db.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/db.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/diff.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/diff.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/diff.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distinct.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distinct.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distinct.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_apercentile.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_apercentile.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_apercentile.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_avg.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_avg.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_avg.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_count.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_count.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_count.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_max.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_max.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_max.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_min.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_min.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_min.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_spread.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_spread.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_spread.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_stddev.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_stddev.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_stddev.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_sum.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_sum.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_sum.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/explain.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/explain.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/explain.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/first.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/first.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/first.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/floor.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/floor.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/floor.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/function_null.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/function_null.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/function_null.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/function_stateduration.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/function_stateduration.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/function_stateduration.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/histogram.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/histogram.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/histogram.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/hyperloglog.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/hyperloglog.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/hyperloglog.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/interp.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/interp.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/irate.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/irate.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/irate.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/join.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/join.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/join.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/last_row.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/last_row.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/last.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/last.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/last.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/leastsquares.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/leastsquares.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/leastsquares.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/length.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/length.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/length.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/log.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/log.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/lower.py
|
||||
|
@@ -536,25 +536,25 @@
|
|||
,,,system-test,python3 ./test.py -f 2-query/ltrim.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/ltrim.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/mavg.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/mavg.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/mavg.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/max_partition.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/max_partition.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/max_partition.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/max.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/max.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/max.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/min.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/min.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/min.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/mode.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/mode.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Now.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/Now.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Now.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/percentile.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/percentile.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/percentile.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/pow.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/pow.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/pow.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/query_cols_tags_and_or.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/query_cols_tags_and_or.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/query_cols_tags_and_or.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/round.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/round.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/round.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/rtrim.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/rtrim.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/sample.py
|
||||
|
@@ -566,51 +566,51 @@
|
|||
,,,system-test,python3 ./test.py -f 2-query/sml.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/sml.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/spread.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/spread.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/spread.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/sqrt.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/sqrt.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/sqrt.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/statecount.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/statecount.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/statecount.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/stateduration.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/stateduration.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/stateduration.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/substr.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/substr.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/sum.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/sum.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/sum.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/tail.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/tail.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tan.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/tan.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tan.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Timediff.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/Timediff.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Timediff.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/timetruncate.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/timetruncate.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/timetruncate.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/timezone.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/timezone.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/timezone.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/To_iso8601.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/To_iso8601.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/To_iso8601.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/To_unixtimestamp.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/To_unixtimestamp.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/To_unixtimestamp.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Today.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/Today.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/Today.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/top.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/top.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/top.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/tsbsQuery.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/tsbsQuery.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/ttl_comment.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/ttl_comment.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ttl_comment.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ttl_comment.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/twa.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/twa.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/union.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/union.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/twa.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/union.py
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/union.py -R
|
||||
,,,system-test,python3 ./test.py -f 2-query/unique.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/unique.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/upper.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/upper.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/upper.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/varchar.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/varchar.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/varchar.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/case_when.py
|
||||
,,,system-test,python3 ./test.py -f 2-query/case_when.py -R
|
||||
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/case_when.py -R
|
||||
,,,system-test,python3 ./test.py -f 1-insert/update_data.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/tb_100w_data_order.py
|
||||
,,,system-test,python3 ./test.py -f 1-insert/delete_stable.py
|
||||
|
@@ -708,6 +708,7 @@
,,,system-test,python3 ./test.py -f 7-tmq/tmqConsFromTsdb1-mutilVg-mutilCtb.py
,,,system-test,python3 ./test.py -f 7-tmq/tmqAutoCreateTbl.py
,,,system-test,python3 ./test.py -f 7-tmq/tmqDnodeRestart.py
,,,system-test,python3 ./test.py -f 7-tmq/tmqDnodeRestart1.py
,,,system-test,python3 ./test.py -f 7-tmq/tmqUpdate-1ctb.py
,,,system-test,python3 ./test.py -f 7-tmq/tmqUpdateWithConsume.py
,,,system-test,python3 ./test.py -f 7-tmq/tmqUpdate-multiCtb-snapshot0.py
@@ -804,7 +805,7 @@
,,,system-test,python3 ./test.py -f 2-query/function_stateduration.py -Q 2
,,,system-test,python3 ./test.py -f 2-query/statecount.py -Q 2
,,,system-test,python3 ./test.py -f 2-query/tail.py -Q 2
,,,system-test,python3 ./test.py -f 2-query/ttl_comment.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ttl_comment.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_count.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_max.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_min.py -Q 2
@@ -817,7 +818,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/irate.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/function_null.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/count_partition.py -Q 2
,,,system-test,python3 ./test.py -f 2-query/max_partition.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/max_partition.py -Q 2
,,,system-test,python3 ./test.py -f 2-query/last_row.py -Q 2
,,,system-test,python3 ./test.py -f 2-query/tsbsQuery.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/sml.py -Q 2
@@ -864,8 +865,8 @@
,,,system-test,python3 ./test.py -f 2-query/json_tag.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/top.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/bottom.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/percentile.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/apercentile.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/percentile.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/apercentile.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/abs.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ceil.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/floor.py -Q 3
@@ -878,7 +879,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tan.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arcsin.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arccos.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/arctan.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/arctan.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/query_cols_tags_and_or.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/nestedQuery.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/nestedQuery_str.py -Q 3
@@ -897,11 +898,11 @@
,,,system-test,python3 ./test.py -f 2-query/function_stateduration.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/statecount.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/tail.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/ttl_comment.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ttl_comment.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_count.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_max.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_min.py -Q 3
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_sum.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_sum.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_spread.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_apercentile.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_avg.py -Q 3
@@ -923,7 +924,7 @@
,,,system-test,python3 ./test.py -f 2-query/rtrim.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/length.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/char_length.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/upper.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/upper.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/lower.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/join.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/join2.py -Q 4
@@ -984,13 +985,14 @@
,,,system-test,python3 ./test.py -f 2-query/csum.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/mavg.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/sample.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/cast.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/function_diff.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/unique.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/stateduration.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/function_stateduration.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/statecount.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/tail.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/ttl_comment.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ttl_comment.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_count.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_max.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_min.py -Q 4
@@ -998,7 +1000,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_spread.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_apercentile.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_avg.py -Q 4
,,,system-test,python3 ./test.py -f 2-query/distribute_agg_stddev.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/distribute_agg_stddev.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/twa.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/irate.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/function_null.py -Q 4
@@ -0,0 +1,365 @@
|
|||
import os
|
||||
import socket
|
||||
import requests
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
import os ,sys
|
||||
import random
|
||||
import argparse
|
||||
import subprocess
|
||||
import time
|
||||
import platform
|
||||
|
||||
# valgrind mode ?
|
||||
valgrind_mode = False
|
||||
|
||||
msg_dict = {0:"success" , 1:"failed" , 2:"other errors" , 3:"crash occured" , 4:"Invalid read/write" , 5:"memory leak" }
|
||||
|
||||
# formal
|
||||
hostname = socket.gethostname()
|
||||
|
||||
group_url = 'https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9'
|
||||
|
||||
def get_msg(text):
|
||||
return {
|
||||
"msg_type": "post",
|
||||
"content": {
|
||||
"post": {
|
||||
"zh_cn": {
|
||||
"title": "Crash_gen Monitor",
|
||||
"content": [
|
||||
[{
|
||||
"tag": "text",
|
||||
"text": text
|
||||
}
|
||||
]]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def send_msg(json):
|
||||
headers = {
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
|
||||
req = requests.post(url=group_url, headers=headers, json=json)
|
||||
inf = req.json()
|
||||
if "StatusCode" in inf and inf["StatusCode"] == 0:
|
||||
pass
|
||||
else:
|
||||
print(inf)
|
||||
|
||||
|
||||
# set path about run instance
|
||||
|
||||
core_path = subprocess.Popen("cat /proc/sys/kernel/core_pattern", shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
core_path = "/".join(core_path.split("/")[:-1])
|
||||
print(" ======= core path is %s ======== " %core_path)
|
||||
if not os.path.exists(core_path):
|
||||
os.mkdir(core_path)
|
||||
|
||||
base_dir = os.path.dirname(os.path.realpath(__file__))
|
||||
if base_dir.find("community")>0:
|
||||
repo = "community"
|
||||
elif base_dir.find("TDengine")>0:
|
||||
repo = "TDengine"
|
||||
else:
|
||||
repo ="TDengine"
|
||||
print("base_dir:",base_dir)
|
||||
home_dir = base_dir[:base_dir.find(repo)]
|
||||
print("home_dir:",home_dir)
|
||||
run_dir = os.path.join(home_dir,'run_dir')
|
||||
run_dir = os.path.abspath(run_dir)
|
||||
print("run dir is *** :",run_dir)
|
||||
if not os.path.exists(run_dir):
|
||||
os.mkdir(run_dir)
|
||||
run_log_file = run_dir+'/crash_gen_run.log'
|
||||
crash_gen_cmds_file = os.path.join(run_dir, 'crash_gen_cmds.sh')
|
||||
exit_status_logs = os.path.join(run_dir, 'crash_exit.log')
|
||||
|
||||
def get_path():
|
||||
buildPath=''
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if ("taosd" in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if ("packaging" not in rootRealPath):
|
||||
buildPath = root[:len(root) - len("/build/bin")]
|
||||
break
|
||||
return buildPath
|
||||
|
||||
# generate crash_gen start script randomly
|
||||
|
||||
def random_args(args_list):
|
||||
nums_args_list = ["--max-dbs","--num-replicas","--num-dnodes","--max-steps","--num-threads",] # record int type arguments
|
||||
bools_args_list = ["--auto-start-service" , "--debug","--run-tdengine","--ignore-errors","--track-memory-leaks","--larger-data","--mix-oos-data","--dynamic-db-table-names",
|
||||
"--per-thread-db-connection","--record-ops","--verify-data","--use-shadow-db","--continue-on-exception"
|
||||
] # record bool type arguments
|
||||
strs_args_list = ["--connector-type"] # record str type arguments
|
||||
|
||||
args_list["--auto-start-service"]= False
|
||||
args_list["--continue-on-exception"]=True
|
||||
# connect_types=['native','rest','mixed'] # the RESTful interface has changed; db names would need to be carried in the connection or qualified in SQL (e.g. "db.test") before enabling it
|
||||
connect_types=['native']
|
||||
# args_list["--connector-type"]=connect_types[random.randint(0,2)]
|
||||
args_list["--connector-type"]= connect_types[0]
|
||||
args_list["--max-dbs"]= random.randint(1,10)
|
||||
|
||||
# dnodes = [1,3] # set single dnodes;
|
||||
|
||||
# args_list["--num-dnodes"]= random.sample(dnodes,1)[0]
|
||||
# args_list["--num-replicas"]= random.randint(1,args_list["--num-dnodes"])
|
||||
args_list["--debug"]=False
|
||||
args_list["--per-thread-db-connection"]=True
|
||||
args_list["--track-memory-leaks"]=False
|
||||
|
||||
args_list["--max-steps"]=random.randint(500,2000)
|
||||
|
||||
# args_list["--ignore-errors"]=[] ## can add error codes for detail
|
||||
|
||||
|
||||
args_list["--run-tdengine"]= False
|
||||
args_list["--use-shadow-db"]= False
|
||||
args_list["--dynamic-db-table-names"]= True
|
||||
args_list["--verify-data"]= False
|
||||
args_list["--record-ops"] = False
|
||||
|
||||
for key in bools_args_list:
|
||||
set_bool_value = [True,False]
|
||||
if key == "--auto-start-service" :
|
||||
continue
|
||||
elif key =="--run-tdengine":
|
||||
continue
|
||||
elif key == "--ignore-errors":
|
||||
continue
|
||||
elif key == "--debug":
|
||||
continue
|
||||
elif key == "--per-thread-db-connection":
|
||||
continue
|
||||
elif key == "--continue-on-exception":
|
||||
continue
|
||||
elif key == "--use-shadow-db":
|
||||
continue
|
||||
elif key =="--track-memory-leaks":
|
||||
continue
|
||||
elif key == "--dynamic-db-table-names":
|
||||
continue
|
||||
elif key == "--verify-data":
|
||||
continue
|
||||
elif key == "--record-ops":
|
||||
continue
|
||||
else:
|
||||
args_list[key]=set_bool_value[random.randint(0,1)]
|
||||
|
||||
if args_list["--larger-data"]:
|
||||
threads = [16,32]
|
||||
else:
|
||||
threads = [32,64,128,256]
|
||||
args_list["--num-threads"]=random.sample(threads,1)[0] #$ debug
|
||||
|
||||
return args_list
|
||||
|
||||
def limits(args_list):
|
||||
if args_list["--use-shadow-db"]==True:
|
||||
if args_list["--max-dbs"] > 1:
|
||||
print("Cannot combine use-shadow-db with max-dbs of more than 1 ,set max-dbs=1")
|
||||
args_list["--max-dbs"]=1
|
||||
else:
|
||||
pass
|
||||
|
||||
# env is start by test frame , not crash_gen instance
|
||||
|
||||
# elif args_list["--num-replicas"]==0:
|
||||
# print(" make sure num-replicas is at least 1 ")
|
||||
# args_list["--num-replicas"]=1
|
||||
# elif args_list["--num-replicas"]==1:
|
||||
# pass
|
||||
|
||||
# elif args_list["--num-replicas"]>1:
|
||||
# if not args_list["--auto-start-service"]:
|
||||
# print("it should be deployed by crash_gen auto-start-service for multi replicas")
|
||||
|
||||
# else:
|
||||
# pass
|
||||
|
||||
return args_list
|
||||
|
||||
def get_auto_mix_cmds(args_list ,valgrind=valgrind_mode):
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo == "TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
|
||||
bools_args_list = ["--auto-start-service" , "--debug","--run-tdengine","--ignore-errors","--track-memory-leaks","--larger-data","--mix-oos-data","--dynamic-db-table-names",
|
||||
"--per-thread-db-connection","--record-ops","--verify-data","--use-shadow-db","--continue-on-exception"]
|
||||
arguments = ""
|
||||
for k ,v in args_list.items():
|
||||
if k == "--ignore-errors":
|
||||
if v:
|
||||
arguments+=(k+"="+str(v)+" ")
|
||||
else:
|
||||
arguments+=""
|
||||
elif k in bools_args_list and v==True:
|
||||
arguments+=(k+" ")
|
||||
elif k in bools_args_list and v==False:
|
||||
arguments+=""
|
||||
else:
|
||||
arguments+=(k+"="+str(v)+" ")
|
||||
|
||||
if valgrind :
|
||||
|
||||
crash_gen_cmd = 'cd %s && ./crash_gen.sh --valgrind %s -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0203 '%(crash_gen_path ,arguments)
|
||||
|
||||
else:
|
||||
|
||||
crash_gen_cmd = 'cd %s && ./crash_gen.sh %s -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0203'%(crash_gen_path ,arguments)
|
||||
|
||||
return crash_gen_cmd
|
||||
|
||||
def start_taosd():
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
start_path = build_path[:-5]+"community/tests/system-test/"
|
||||
elif repo == "TDengine":
|
||||
start_path = build_path[:-5]+"/tests/system-test/"
|
||||
else:
|
||||
pass
|
||||
|
||||
start_cmd = 'cd %s && python3 test.py >>/dev/null '%(start_path)
|
||||
os.system(start_cmd)
|
||||
|
||||
def get_cmds(args_list):
|
||||
# build_path = get_path()
|
||||
# if repo == "community":
|
||||
# crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
# elif repo == "TDengine":
|
||||
# crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
# else:
|
||||
# pass
|
||||
|
||||
# crash_gen_cmd = 'cd %s && ./crash_gen.sh --valgrind -p -t 10 -s 1000 -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550 '%(crash_gen_path)
|
||||
|
||||
crash_gen_cmd = get_auto_mix_cmds(args_list,valgrind=valgrind_mode)
|
||||
return crash_gen_cmd
|
||||
|
||||
def run_crash_gen(crash_cmds):
|
||||
|
||||
# prepare env of taosd
|
||||
start_taosd()
|
||||
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo == "TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
result_file = os.path.join(crash_gen_path, 'valgrind.out')
|
||||
|
||||
|
||||
# run crash_gen and back logs
|
||||
os.system('echo "%s">>%s'%(crash_cmds,crash_gen_cmds_file))
|
||||
os.system("%s >>%s "%(crash_cmds,result_file))
|
||||
|
||||
|
||||
def check_status():
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo == "TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
result_file = os.path.join(crash_gen_path, 'valgrind.out')
|
||||
run_code = subprocess.Popen("tail -n 50 %s"%result_file, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
os.system("tail -n 50 %s>>%s"%(result_file,exit_status_logs))
|
||||
|
||||
core_check = subprocess.Popen('ls -l %s | grep "^-" | wc -l'%core_path, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
|
||||
if int(core_check.strip().rstrip()) > 0:
|
||||
# core files were found, which means a crash occurred
|
||||
return 3
|
||||
|
||||
if "Crash_Gen is now exiting with status code: 1" in run_code:
|
||||
return 1
|
||||
elif "Crash_Gen is now exiting with status code: 0" in run_code:
|
||||
return 0
|
||||
else:
|
||||
return 2
|
||||
|
||||
|
||||
def main():
|
||||
|
||||
args_list = {"--auto-start-service":False ,"--max-dbs":0,"--connector-type":"native","--debug":False,"--run-tdengine":False,"--ignore-errors":[],
|
||||
"--track-memory-leaks":False , "--larger-data":False, "--mix-oos-data":False, "--dynamic-db-table-names":False,
|
||||
"--per-thread-db-connection":False , "--record-ops":False , "--max-steps":100, "--num-threads":10, "--verify-data":False,"--use-shadow-db":False ,
|
||||
"--continue-on-exception":False }
|
||||
|
||||
args = random_args(args_list)
|
||||
args = limits(args)
|
||||
|
||||
|
||||
build_path = get_path()
|
||||
os.system("pip install git+https://github.com/taosdata/taos-connector-python.git")
|
||||
if repo =="community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo =="TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
|
||||
if os.path.exists(crash_gen_path+"crash_gen.sh"):
|
||||
print(" make sure crash_gen.sh is ready")
|
||||
else:
|
||||
print( " crash_gen.sh is not exists ")
|
||||
sys.exit(1)
|
||||
|
||||
git_commit = subprocess.Popen("cd %s && git log | head -n1"%crash_gen_path, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")[8:16]
|
||||
|
||||
# crash_cmds = get_cmds()
|
||||
|
||||
crash_cmds = get_cmds(args)
|
||||
# clean run_dir
|
||||
os.system('rm -rf %s'%run_dir )
|
||||
if not os.path.exists(run_dir):
|
||||
os.mkdir(run_dir)
|
||||
print(crash_cmds)
|
||||
run_crash_gen(crash_cmds)
|
||||
status = check_status()
|
||||
|
||||
print("exit status : ", status)
|
||||
|
||||
if status ==4:
|
||||
print('======== crash_gen found memory bugs ========')
|
||||
if status ==5:
|
||||
print('======== crash_gen found memory errors ========')
|
||||
if status >0:
|
||||
print('======== crash_gen run failed and did not exit as expected ========')
|
||||
else:
|
||||
print('======== crash_gen run succeeded and exited as expected ========')
|
||||
|
||||
|
||||
if status!=0 :
|
||||
|
||||
try:
|
||||
text = f"crash_gen instance exit status of docker [ {hostname} ] is : {msg_dict[status]}\n " + f" and git commit : {git_commit}"
|
||||
send_msg(get_msg(text))
|
||||
except Exception as e:
|
||||
print("exception:", e)
|
||||
exit(status)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
|
||||
|
|
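For orientation, the monitor above boils down to one shell command assembled by get_auto_mix_cmds() and executed through os.system(). A hedged illustration of what that command can look like with valgrind_mode disabled follows; the working directory and the numeric argument values are placeholders, while the flag names and the -g error-code list come from the script itself.

# Illustration only: one possible command produced by get_auto_mix_cmds() above.
# The working directory and numeric values are placeholders.
example_cmd = (
    "cd /home/TDengine/tests/pytest/ && ./crash_gen.sh "
    "--continue-on-exception --per-thread-db-connection --dynamic-db-table-names "
    "--connector-type=native --max-dbs=3 --max-steps=800 --num-threads=64 "
    "-g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0203"
)
print(example_cmd)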
@@ -0,0 +1,399 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
|
||||
import os
|
||||
import socket
|
||||
import requests
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
import os ,sys
|
||||
import random
|
||||
import argparse
|
||||
import subprocess
|
||||
import time
|
||||
import platform
|
||||
|
||||
# valgrind mode ?
|
||||
valgrind_mode = True
|
||||
|
||||
msg_dict = {0:"success" , 1:"failed" , 2:"other errors" , 3:"crash occured" , 4:"Invalid read/write" , 5:"memory leak" }
|
||||
|
||||
# formal
|
||||
hostname = socket.gethostname()
|
||||
|
||||
group_url = 'https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9'
|
||||
|
||||
def get_msg(text):
|
||||
return {
|
||||
"msg_type": "post",
|
||||
"content": {
|
||||
"post": {
|
||||
"zh_cn": {
|
||||
"title": "Crash_gen Monitor",
|
||||
"content": [
|
||||
[{
|
||||
"tag": "text",
|
||||
"text": text
|
||||
}
|
||||
]]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def send_msg(json):
|
||||
headers = {
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
|
||||
req = requests.post(url=group_url, headers=headers, json=json)
|
||||
inf = req.json()
|
||||
if "StatusCode" in inf and inf["StatusCode"] == 0:
|
||||
pass
|
||||
else:
|
||||
print(inf)
|
||||
|
||||
|
||||
# set path about run instance
|
||||
|
||||
core_path = subprocess.Popen("cat /proc/sys/kernel/core_pattern", shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
core_path = "/".join(core_path.split("/")[:-1])
|
||||
print(" ======= core path is %s ======== " %core_path)
|
||||
if not os.path.exists(core_path):
|
||||
os.mkdir(core_path)
|
||||
|
||||
base_dir = os.path.dirname(os.path.realpath(__file__))
|
||||
if base_dir.find("community")>0:
|
||||
repo = "community"
|
||||
elif base_dir.find("TDengine")>0:
|
||||
repo = "TDengine"
|
||||
else:
|
||||
repo ="TDengine"
|
||||
print("base_dir:",base_dir)
|
||||
home_dir = base_dir[:base_dir.find(repo)]
|
||||
print("home_dir:",home_dir)
|
||||
run_dir = os.path.join(home_dir,'run_dir')
|
||||
run_dir = os.path.abspath(run_dir)
|
||||
print("run dir is *** :",run_dir)
|
||||
if not os.path.exists(run_dir):
|
||||
os.mkdir(run_dir)
|
||||
run_log_file = run_dir+'/crash_gen_run.log'
|
||||
crash_gen_cmds_file = os.path.join(run_dir, 'crash_gen_cmds.sh')
|
||||
exit_status_logs = os.path.join(run_dir, 'crash_exit.log')
|
||||
|
||||
def get_path():
|
||||
buildPath=''
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if ("taosd" in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if ("packaging" not in rootRealPath):
|
||||
buildPath = root[:len(root) - len("/build/bin")]
|
||||
break
|
||||
return buildPath
|
||||
|
||||
# generate crash_gen start script randomly
|
||||
|
||||
def random_args(args_list):
|
||||
nums_args_list = ["--max-dbs","--num-replicas","--num-dnodes","--max-steps","--num-threads",] # record int type arguments
|
||||
bools_args_list = ["--auto-start-service" , "--debug","--run-tdengine","--ignore-errors","--track-memory-leaks","--larger-data","--mix-oos-data","--dynamic-db-table-names",
|
||||
"--per-thread-db-connection","--record-ops","--verify-data","--use-shadow-db","--continue-on-exception"
|
||||
] # record bool type arguments
|
||||
strs_args_list = ["--connector-type"] # record str type arguments
|
||||
|
||||
args_list["--auto-start-service"]= False
|
||||
args_list["--continue-on-exception"]=True
|
||||
# connect_types=['native','rest','mixed'] # the RESTful interface has changed; db names would need to be carried in the connection or qualified in SQL (e.g. "db.test") before enabling it
|
||||
connect_types=['native']
|
||||
# args_list["--connector-type"]=connect_types[random.randint(0,2)]
|
||||
args_list["--connector-type"]= connect_types[0]
|
||||
args_list["--max-dbs"]= random.randint(1,10)
|
||||
|
||||
# dnodes = [1,3] # set single dnodes;
|
||||
|
||||
# args_list["--num-dnodes"]= random.sample(dnodes,1)[0]
|
||||
# args_list["--num-replicas"]= random.randint(1,args_list["--num-dnodes"])
|
||||
args_list["--debug"]=False
|
||||
args_list["--per-thread-db-connection"]=True
|
||||
args_list["--track-memory-leaks"]=False
|
||||
|
||||
args_list["--max-steps"]=random.randint(200,500)
|
||||
|
||||
threads = [16,32]
|
||||
|
||||
args_list["--num-threads"]=random.sample(threads,1)[0] #$ debug
|
||||
# args_list["--ignore-errors"]=[] ## can add error codes for detail
|
||||
|
||||
|
||||
args_list["--run-tdengine"]= False
|
||||
args_list["--use-shadow-db"]= False
|
||||
args_list["--dynamic-db-table-names"]= True
|
||||
args_list["--verify-data"]= False
|
||||
args_list["--record-ops"] = False
|
||||
|
||||
for key in bools_args_list:
|
||||
set_bool_value = [True,False]
|
||||
if key == "--auto-start-service" :
|
||||
continue
|
||||
elif key =="--run-tdengine":
|
||||
continue
|
||||
elif key == "--ignore-errors":
|
||||
continue
|
||||
elif key == "--debug":
|
||||
continue
|
||||
elif key == "--per-thread-db-connection":
|
||||
continue
|
||||
elif key == "--continue-on-exception":
|
||||
continue
|
||||
elif key == "--use-shadow-db":
|
||||
continue
|
||||
elif key =="--track-memory-leaks":
|
||||
continue
|
||||
elif key == "--dynamic-db-table-names":
|
||||
continue
|
||||
elif key == "--verify-data":
|
||||
continue
|
||||
elif key == "--record-ops":
|
||||
continue
|
||||
elif key == "--larger-data":
|
||||
continue
|
||||
else:
|
||||
args_list[key]=set_bool_value[random.randint(0,1)]
|
||||
return args_list
|
||||
|
||||
def limits(args_list):
|
||||
if args_list["--use-shadow-db"]==True:
|
||||
if args_list["--max-dbs"] > 1:
|
||||
print("Cannot combine use-shadow-db with max-dbs of more than 1 ,set max-dbs=1")
|
||||
args_list["--max-dbs"]=1
|
||||
else:
|
||||
pass
|
||||
|
||||
# env is start by test frame , not crash_gen instance
|
||||
|
||||
# elif args_list["--num-replicas"]==0:
|
||||
# print(" make sure num-replicas is at least 1 ")
|
||||
# args_list["--num-replicas"]=1
|
||||
# elif args_list["--num-replicas"]==1:
|
||||
# pass
|
||||
|
||||
# elif args_list["--num-replicas"]>1:
|
||||
# if not args_list["--auto-start-service"]:
|
||||
# print("it should be deployed by crash_gen auto-start-service for multi replicas")
|
||||
|
||||
# else:
|
||||
# pass
|
||||
|
||||
return args_list
|
||||
|
||||
def get_auto_mix_cmds(args_list ,valgrind=valgrind_mode):
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo == "TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
|
||||
bools_args_list = ["--auto-start-service" , "--debug","--run-tdengine","--ignore-errors","--track-memory-leaks","--larger-data","--mix-oos-data","--dynamic-db-table-names",
|
||||
"--per-thread-db-connection","--record-ops","--verify-data","--use-shadow-db","--continue-on-exception"]
|
||||
arguments = ""
|
||||
for k ,v in args_list.items():
|
||||
if k == "--ignore-errors":
|
||||
if v:
|
||||
arguments+=(k+"="+str(v)+" ")
|
||||
else:
|
||||
arguments+=""
|
||||
elif k in bools_args_list and v==True:
|
||||
arguments+=(k+" ")
|
||||
elif k in bools_args_list and v==False:
|
||||
arguments+=""
|
||||
else:
|
||||
arguments+=(k+"="+str(v)+" ")
|
||||
|
||||
if valgrind :
|
||||
|
||||
crash_gen_cmd = 'cd %s && ./crash_gen.sh --valgrind %s -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0203 '%(crash_gen_path ,arguments)
|
||||
|
||||
else:
|
||||
|
||||
crash_gen_cmd = 'cd %s && ./crash_gen.sh %s -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0203'%(crash_gen_path ,arguments)
|
||||
|
||||
return crash_gen_cmd
|
||||
|
||||
|
||||
def start_taosd():
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
start_path = build_path[:-5]+"community/tests/system-test/"
|
||||
elif repo == "TDengine":
|
||||
start_path = build_path[:-5]+"/tests/system-test/"
|
||||
else:
|
||||
pass
|
||||
|
||||
start_cmd = 'cd %s && python3 test.py '%(start_path)
|
||||
os.system(start_cmd +">>/dev/null")
|
||||
|
||||
def get_cmds(args_list):
|
||||
# build_path = get_path()
|
||||
# if repo == "community":
|
||||
# crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
# elif repo == "TDengine":
|
||||
# crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
# else:
|
||||
# pass
|
||||
|
||||
# crash_gen_cmd = 'cd %s && ./crash_gen.sh --valgrind -p -t 10 -s 1000 -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550 '%(crash_gen_path)
|
||||
|
||||
crash_gen_cmd = get_auto_mix_cmds(args_list,valgrind=valgrind_mode)
|
||||
return crash_gen_cmd
|
||||
|
||||
def run_crash_gen(crash_cmds):
|
||||
|
||||
# prepare env of taosd
|
||||
start_taosd()
|
||||
# run crash_gen and back logs
|
||||
os.system('echo "%s">>%s'%(crash_cmds,crash_gen_cmds_file))
|
||||
# os.system("cp %s %s"%(crash_gen_cmds_file, core_path))
|
||||
os.system("%s"%(crash_cmds))
|
||||
|
||||
def check_status():
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo == "TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
result_file = os.path.join(crash_gen_path, 'valgrind.out')
|
||||
run_code = subprocess.Popen("tail -n 50 %s"%result_file, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
os.system("tail -n 50 %s>>%s"%(result_file,exit_status_logs))
|
||||
|
||||
core_check = subprocess.Popen('ls -l %s | grep "^-" | wc -l'%core_path, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
|
||||
if int(core_check.strip().rstrip()) > 0:
|
||||
# core files were found, which means a crash occurred
|
||||
return 3
|
||||
|
||||
mem_status = check_memory()
|
||||
if mem_status >0:
|
||||
return mem_status
|
||||
if "Crash_Gen is now exiting with status code: 1" in run_code:
|
||||
return 1
|
||||
elif "Crash_Gen is now exiting with status code: 0" in run_code:
|
||||
return 0
|
||||
else:
|
||||
return 2
|
||||
|
||||
|
||||
def check_memory():
|
||||
|
||||
build_path = get_path()
|
||||
if repo == "community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo == "TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
'''
|
||||
invalid read, invalid write
|
||||
'''
|
||||
back_path = os.path.join(core_path,"valgrind_report")
|
||||
if not os.path.exists(back_path):
|
||||
os.mkdir(back_path)
|
||||
|
||||
stderr_file = os.path.join(crash_gen_path , "valgrind.err")
|
||||
|
||||
status = 0
|
||||
|
||||
grep_res = subprocess.Popen("grep -i 'Invalid read' %s "%stderr_file , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
|
||||
if grep_res:
|
||||
# os.system("cp %s %s"%(stderr_file , back_path))
|
||||
status = 4
|
||||
|
||||
grep_res = subprocess.Popen("grep -i 'Invalid write' %s "%stderr_file , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
|
||||
if grep_res:
|
||||
# os.system("cp %s %s"%(stderr_file , back_path))
|
||||
status = 4
|
||||
|
||||
grep_res = subprocess.Popen("grep -i 'taosMemoryMalloc' %s "%stderr_file , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
|
||||
if grep_res:
|
||||
# os.system("cp %s %s"%(stderr_file , back_path))
|
||||
status = 5
|
||||
|
||||
return status
|
||||
|
||||
def main():
|
||||
|
||||
args_list = {"--auto-start-service":False ,"--max-dbs":0,"--connector-type":"native","--debug":False,"--run-tdengine":False,"--ignore-errors":[],
|
||||
"--track-memory-leaks":False , "--larger-data":False, "--mix-oos-data":False, "--dynamic-db-table-names":False,
|
||||
"--per-thread-db-connection":False , "--record-ops":False , "--max-steps":100, "--num-threads":10, "--verify-data":False,"--use-shadow-db":False ,
|
||||
"--continue-on-exception":False }
|
||||
|
||||
args = random_args(args_list)
|
||||
args = limits(args)
|
||||
|
||||
build_path = get_path()
|
||||
os.system("pip install git+https://github.com/taosdata/taos-connector-python.git >>/dev/null")
|
||||
if repo =="community":
|
||||
crash_gen_path = build_path[:-5]+"community/tests/pytest/"
|
||||
elif repo =="TDengine":
|
||||
crash_gen_path = build_path[:-5]+"/tests/pytest/"
|
||||
else:
|
||||
pass
|
||||
|
||||
if os.path.exists(crash_gen_path+"crash_gen.sh"):
|
||||
print(" make sure crash_gen.sh is ready")
|
||||
else:
|
||||
print( " crash_gen.sh is not exists ")
|
||||
sys.exit(1)
|
||||
|
||||
git_commit = subprocess.Popen("cd %s && git log | head -n1"%crash_gen_path, shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")[8:16]
|
||||
|
||||
# crash_cmds = get_cmds()
|
||||
|
||||
crash_cmds = get_cmds(args)
|
||||
|
||||
# clean run_dir
|
||||
os.system('rm -rf %s'%run_dir )
|
||||
if not os.path.exists(run_dir):
|
||||
os.mkdir(run_dir)
|
||||
print(crash_cmds)
|
||||
run_crash_gen(crash_cmds)
|
||||
status = check_status()
|
||||
# back_path = os.path.join(core_path,"valgrind_report")
|
||||
|
||||
print("exit status : ", status)
|
||||
|
||||
if status ==4:
|
||||
print('======== crash_gen found memory bugs ========')
|
||||
if status ==5:
|
||||
print('======== crash_gen found memory errors ========')
|
||||
if status >0:
|
||||
print('======== crash_gen run failed and did not exit as expected ========')
|
||||
else:
|
||||
print('======== crash_gen run succeeded and exited as expected ========')
|
||||
|
||||
if status!=0 :
|
||||
|
||||
try:
|
||||
text = f"crash_gen instance exit status of docker [ {hostname} ] is : {msg_dict[status]}\n " + f" and git commit : {git_commit}"
|
||||
send_msg(get_msg(text))
|
||||
except Exception as e:
|
||||
print("exception:", e)
|
||||
exit(status)
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
main()
|
||||
|
||||
|
|
@@ -0,0 +1,399 @@
|
|||
#!/usr/bin/python3
|
||||
|
||||
|
||||
import os
|
||||
import socket
|
||||
import requests
|
||||
|
||||
# -*- coding: utf-8 -*-
|
||||
import os ,sys
|
||||
import random
|
||||
import argparse
|
||||
import subprocess
|
||||
import time
|
||||
import platform
|
||||
|
||||
# valgrind mode ?
|
||||
valgrind_mode = True
|
||||
|
||||
msg_dict = {0:"success" , 1:"failed" , 2:"other errors" , 3:"crash occured" , 4:"Invalid read/write" , 5:"memory leak" }
|
||||
|
||||
# formal
|
||||
hostname = socket.gethostname()
|
||||
|
||||
group_url = 'https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9'
|
||||
|
||||
def get_msg(text):
|
||||
return {
|
||||
"msg_type": "post",
|
||||
"content": {
|
||||
"post": {
|
||||
"zh_cn": {
|
||||
"title": "Crash_gen Monitor",
|
||||
"content": [
|
||||
[{
|
||||
"tag": "text",
|
||||
"text": text
|
||||
}
|
||||
]]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
def send_msg(json):
|
||||
headers = {
|
||||
'Content-Type': 'application/json'
|
||||
}
|
||||
|
||||
req = requests.post(url=group_url, headers=headers, json=json)
|
||||
inf = req.json()
|
||||
if "StatusCode" in inf and inf["StatusCode"] == 0:
|
||||
pass
|
||||
else:
|
||||
print(inf)
|
||||
|
||||
|
||||
# set path about run instance
|
||||
|
||||
core_path = subprocess.Popen("cat /proc/sys/kernel/core_pattern", shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
|
||||
core_path = "/".join(core_path.split("/")[:-1])
|
||||
print(" ======= core path is %s ======== " %core_path)
|
||||
if not os.path.exists(core_path):
|
||||
os.mkdir(core_path)
|
||||
|
||||
base_dir = os.path.dirname(os.path.realpath(__file__))
|
||||
if base_dir.find("community")>0:
|
||||
repo = "community"
|
||||
elif base_dir.find("TDengine")>0:
|
||||
repo = "TDengine"
|
||||
else:
|
||||
repo ="TDengine"
|
||||
print("base_dir:",base_dir)
|
||||
home_dir = base_dir[:base_dir.find(repo)]
|
||||
print("home_dir:",home_dir)
|
||||
run_dir = os.path.join(home_dir,'run_dir')
|
||||
run_dir = os.path.abspath(run_dir)
|
||||
print("run dir is *** :",run_dir)
|
||||
if not os.path.exists(run_dir):
|
||||
os.mkdir(run_dir)
|
||||
run_log_file = run_dir+'/crash_gen_run.log'
|
||||
crash_gen_cmds_file = os.path.join(run_dir, 'crash_gen_cmds.sh')
|
||||
exit_status_logs = os.path.join(run_dir, 'crash_exit.log')
|
||||
|
||||
def get_path():
|
||||
buildPath=''
|
||||
selfPath = os.path.dirname(os.path.realpath(__file__))
|
||||
if ("community" in selfPath):
|
||||
projPath = selfPath[:selfPath.find("community")]
|
||||
else:
|
||||
projPath = selfPath[:selfPath.find("tests")]
|
||||
|
||||
for root, dirs, files in os.walk(projPath):
|
||||
if ("taosd" in files):
|
||||
rootRealPath = os.path.dirname(os.path.realpath(root))
|
||||
if ("packaging" not in rootRealPath):
|
||||
buildPath = root[:len(root) - len("/build/bin")]
|
||||
break
|
||||
return buildPath
|
||||
|
||||
# generate crash_gen start script randomly
|
||||
|
||||
def random_args(args_list):
|
||||
nums_args_list = ["--max-dbs","--num-replicas","--num-dnodes","--max-steps","--num-threads",] # record int type arguments
|
||||
bools_args_list = ["--auto-start-service" , "--debug","--run-tdengine","--ignore-errors","--track-memory-leaks","--larger-data","--mix-oos-data","--dynamic-db-table-names",
|
||||
"--per-thread-db-connection","--record-ops","--verify-data","--use-shadow-db","--continue-on-exception"
|
||||
] # record bool type arguments
|
||||
strs_args_list = ["--connector-type"] # record str type arguments
|
||||
|
||||
args_list["--auto-start-service"]= False
|
||||
args_list["--continue-on-exception"]=True
|
||||
# connect_types=['native','rest','mixed'] # the RESTful interface has changed; db names would need to be carried in the connection or qualified in SQL (e.g. "db.test") before enabling it
|
||||
connect_types=['native']
|
||||
# args_list["--connector-type"]=connect_types[random.randint(0,2)]
|
||||
args_list["--connector-type"]= connect_types[0]
|
||||
args_list["--max-dbs"]= random.randint(1,10)
|
||||
|
||||
# dnodes = [1,3] # set single dnodes;
|
||||
|
||||
# args_list["--num-dnodes"]= random.sample(dnodes,1)[0]
|
||||
# args_list["--num-replicas"]= random.randint(1,args_list["--num-dnodes"])
|
||||
args_list["--debug"]=False
|
||||
args_list["--per-thread-db-connection"]=True
|
||||
args_list["--track-memory-leaks"]=False
|
||||
|
||||
args_list["--max-steps"]=random.randint(200,500)
|
||||
|
||||
threads = [16,32]
|
||||
|
||||
args_list["--num-threads"]=random.sample(threads,1)[0] #$ debug
|
||||
# args_list["--ignore-errors"]=[] ## can add error codes for detail
|
||||
|
||||
|
||||
args_list["--run-tdengine"]= False
|
||||
args_list["--use-shadow-db"]= False
|
||||
args_list["--dynamic-db-table-names"]= True
|
||||
args_list["--verify-data"]= False
|
||||
args_list["--record-ops"] = False
|
||||
|
||||
for key in bools_args_list:
|
||||
set_bool_value = [True,False]
|
||||
if key == "--auto-start-service" :
|
||||
continue
|
||||
elif key =="--run-tdengine":
|
||||
continue
|
||||
elif key == "--ignore-errors":
|
||||
continue
|
||||
elif key == "--debug":
|
||||
continue
|
||||
elif key == "--per-thread-db-connection":
|
||||
continue
|
||||
elif key == "--continue-on-exception":
|
||||
continue
|
||||
elif key == "--use-shadow-db":
|
||||
continue
|
||||
elif key =="--track-memory-leaks":
|
||||
continue
|
||||
elif key == "--dynamic-db-table-names":
|
||||
continue
|
||||
elif key == "--verify-data":
|
||||
continue
|
||||
elif key == "--record-ops":
|
||||
continue
|
||||
elif key == "--larger-data":
|
||||
continue
|
||||
else:
|
||||
args_list[key]=set_bool_value[random.randint(0,1)]
|
||||
return args_list
|
||||
|
||||
def limits(args_list):
    if args_list["--use-shadow-db"] == True:
        if args_list["--max-dbs"] > 1:
            print("Cannot combine use-shadow-db with max-dbs of more than 1; setting max-dbs=1")
            args_list["--max-dbs"] = 1
        else:
            pass

    # the multi-dnode environment is started by the test framework, not by the crash_gen instance itself
    # elif args_list["--num-replicas"] == 0:
    #     print(" make sure num-replicas is at least 1 ")
    #     args_list["--num-replicas"] = 1
    # elif args_list["--num-replicas"] == 1:
    #     pass
    # elif args_list["--num-replicas"] > 1:
    #     if not args_list["--auto-start-service"]:
    #         print("it should be deployed by crash_gen auto-start-service for multi replicas")
    # else:
    #     pass

    return args_list

def get_auto_mix_cmds(args_list, valgrind=valgrind_mode):
    build_path = get_path()
    if repo == "community":
        crash_gen_path = build_path[:-5] + "community/tests/pytest/"
    elif repo == "TDengine":
        crash_gen_path = build_path[:-5] + "/tests/pytest/"
    else:
        pass

    bools_args_list = ["--auto-start-service", "--debug", "--run-tdengine", "--ignore-errors", "--track-memory-leaks",
                       "--larger-data", "--mix-oos-data", "--dynamic-db-table-names", "--per-thread-db-connection",
                       "--record-ops", "--verify-data", "--use-shadow-db", "--continue-on-exception"]

    # assemble the command line: boolean flags are emitted without a value, the others as key=value
    arguments = ""
    for k, v in args_list.items():
        if k == "--ignore-errors":
            if v:
                arguments += (k + "=" + str(v) + " ")
            else:
                arguments += ""
        elif k in bools_args_list and v == True:
            arguments += (k + " ")
        elif k in bools_args_list and v == False:
            arguments += ""
        else:
            arguments += (k + "=" + str(v) + " ")

    # -g lists error codes that crash_gen should ignore
    if valgrind:
        crash_gen_cmd = 'cd %s && ./crash_gen.sh --valgrind -i 3 %s -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0707,0x0203 ' % (crash_gen_path, arguments)
    else:
        crash_gen_cmd = 'cd %s && ./crash_gen.sh -i 3 %s -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0014,0x0707,0x0203' % (crash_gen_path, arguments)

    return crash_gen_cmd

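# For illustration only: a command produced by get_auto_mix_cmds() looks roughly like the
# following (hypothetical values; the actual path and the randomized arguments differ per run):
#   cd <build>/community/tests/pytest/ && ./crash_gen.sh -i 3 --connector-type=native --max-dbs=7 \
#       --max-steps=350 --num-threads=16 --per-thread-db-connection --continue-on-exception \
#       --dynamic-db-table-names -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550,0x0014,0x0707,0x0203
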
def start_taosd():
    # prepare the taosd environment through the test framework (test.py -N 4 -M 1)
    build_path = get_path()
    if repo == "community":
        start_path = build_path[:-5] + "community/tests/system-test/"
    elif repo == "TDengine":
        start_path = build_path[:-5] + "/tests/system-test/"
    else:
        pass

    start_cmd = 'cd %s && python3 test.py -N 4 -M 1 ' % (start_path)
    os.system(start_cmd + ">>/dev/null")

def get_cmds(args_list):
    # build_path = get_path()
    # if repo == "community":
    #     crash_gen_path = build_path[:-5] + "community/tests/pytest/"
    # elif repo == "TDengine":
    #     crash_gen_path = build_path[:-5] + "/tests/pytest/"
    # else:
    #     pass

    # crash_gen_cmd = 'cd %s && ./crash_gen.sh --valgrind -p -t 10 -s 1000 -g 0x32c,0x32d,0x3d3,0x18,0x2501,0x369,0x388,0x061a,0x2550 '%(crash_gen_path)

    crash_gen_cmd = get_auto_mix_cmds(args_list, valgrind=valgrind_mode)
    return crash_gen_cmd

def run_crash_gen(crash_cmds):
    # prepare the taosd environment
    start_taosd()
    # record the command that will be executed, then run crash_gen and keep its logs
    os.system('echo "%s">>%s' % (crash_cmds, crash_gen_cmds_file))
    # os.system("cp %s %s"%(crash_gen_cmds_file, core_path))
    os.system("%s" % (crash_cmds))

def check_status():
    build_path = get_path()
    if repo == "community":
        crash_gen_path = build_path[:-5] + "community/tests/pytest/"
    elif repo == "TDengine":
        crash_gen_path = build_path[:-5] + "/tests/pytest/"
    else:
        pass

    result_file = os.path.join(crash_gen_path, 'valgrind.out')
    run_code = subprocess.Popen("tail -n 50 %s" % result_file, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
    os.system("tail -n 50 %s>>%s" % (result_file, exit_status_logs))

    core_check = subprocess.Popen('ls -l %s | grep "^-" | wc -l' % core_path, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout.read().decode("utf-8")

    if int(core_check.strip().rstrip()) > 0:
        # core files have been generated
        return 3

    mem_status = check_memory()
    if mem_status > 0:
        return mem_status
    if "Crash_Gen is now exiting with status code: 1" in run_code:
        return 1
    elif "Crash_Gen is now exiting with status code: 0" in run_code:
        return 0
    else:
        return 2

def check_memory():
    build_path = get_path()
    if repo == "community":
        crash_gen_path = build_path[:-5] + "community/tests/pytest/"
    elif repo == "TDengine":
        crash_gen_path = build_path[:-5] + "/tests/pytest/"
    else:
        pass

    '''
    invalid read, invalid write
    '''
    back_path = os.path.join(core_path, "valgrind_report")
    if not os.path.exists(back_path):
        os.mkdir(back_path)

    stderr_file = os.path.join(crash_gen_path, "valgrind.err")

    status = 0

    grep_res = subprocess.Popen("grep -i 'Invalid read' %s " % stderr_file, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
    if grep_res:
        # os.system("cp %s %s"%(stderr_file , back_path))
        status = 4

    grep_res = subprocess.Popen("grep -i 'Invalid write' %s " % stderr_file, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
    if grep_res:
        # os.system("cp %s %s"%(stderr_file , back_path))
        status = 4

    grep_res = subprocess.Popen("grep -i 'taosMemoryMalloc' %s " % stderr_file, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
    if grep_res:
        # os.system("cp %s %s"%(stderr_file , back_path))
        status = 5

    return status

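# Status codes produced by check_status()/check_memory(), summarized for reference:
#   0 - crash_gen exited with status code 0 (clean run)
#   1 - crash_gen exited with status code 1
#   2 - no recognizable exit status found in valgrind.out
#   3 - core file(s) were found under core_path
#   4 - valgrind reported an Invalid read/write
#   5 - valgrind output mentions taosMemoryMalloc
# (msg_dict used in main() below is assumed to map these codes to readable text.)
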
def main():
    # default argument set; random_args() and limits() adjust it before the run
    args_list = {"--auto-start-service": False, "--max-dbs": 0, "--connector-type": "native", "--debug": False,
                 "--run-tdengine": False, "--ignore-errors": [],
                 "--track-memory-leaks": False, "--larger-data": False, "--mix-oos-data": False,
                 "--dynamic-db-table-names": False,
                 "--per-thread-db-connection": False, "--record-ops": False, "--max-steps": 100,
                 "--num-threads": 10, "--verify-data": False, "--use-shadow-db": False,
                 "--continue-on-exception": False}

    args = random_args(args_list)
    args = limits(args)

    build_path = get_path()
    os.system("pip install git+https://github.com/taosdata/taos-connector-python.git >>/dev/null")
    if repo == "community":
        crash_gen_path = build_path[:-5] + "community/tests/pytest/"
    elif repo == "TDengine":
        crash_gen_path = build_path[:-5] + "/tests/pytest/"
    else:
        pass

    if os.path.exists(crash_gen_path + "crash_gen.sh"):
        print("crash_gen.sh is ready")
    else:
        print("crash_gen.sh does not exist")
        sys.exit(1)

    git_commit = subprocess.Popen("cd %s && git log | head -n1" % crash_gen_path, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT).stdout.read().decode("utf-8")[8:16]

    # crash_cmds = get_cmds()
    crash_cmds = get_cmds(args)

    # clean and recreate run_dir
    os.system('rm -rf %s' % run_dir)
    if not os.path.exists(run_dir):
        os.mkdir(run_dir)
    print(crash_cmds)
    run_crash_gen(crash_cmds)
    status = check_status()
    # back_path = os.path.join(core_path,"valgrind_report")

    print("exit status : ", status)

    if status == 4:
        print('======== crash_gen found memory bugs ========')
    if status == 5:
        print('======== crash_gen found memory errors ========')
    if status > 0:
        print('======== crash_gen run failed and did not exit as expected ========')
    else:
        print('======== crash_gen ran successfully and exited as expected ========')

    if status != 0:
        try:
            text = f"crash_gen instance exit status of docker [ {hostname} ] is : {msg_dict[status]}\n " + f" and git commit : {git_commit}"
            send_msg(get_msg(text))
        except Exception as e:
            print("exception:", e)
    exit(status)

if __name__ == '__main__':
    main()

@ -0,0 +1,11 @@
#!/bin/bash

# set LD_LIBRARY_PATH
export PATH=$PATH:/home/TDengine/debug/build/bin
export LD_LIBRARY_PATH=/home/TDengine/debug/build/lib
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so 2>/dev/null
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so.1 2>/dev/null
ln -s /home/TDengine/include/client/taos.h /usr/include/taos.h 2>/dev/null

# run crash_gen auto script
python3 /home/TDengine/tests/pytest/auto_crash_gen.py
@ -0,0 +1,11 @@
#!/bin/bash

# set LD_LIBRARY_PATH
export PATH=$PATH:/home/TDengine/debug/build/bin
export LD_LIBRARY_PATH=/home/TDengine/debug/build/lib
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so 2>/dev/null
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so.1 2>/dev/null
ln -s /home/TDengine/include/client/taos.h /usr/include/taos.h 2>/dev/null

# run crash_gen auto script
python3 /home/TDengine/tests/pytest/auto_crash_gen_valgrind.py
@ -0,0 +1,11 @@
#!/bin/bash

# set LD_LIBRARY_PATH
export PATH=$PATH:/home/TDengine/debug/build/bin
export LD_LIBRARY_PATH=/home/TDengine/debug/build/lib
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so 2>/dev/null
ln -s /home/TDengine/debug/build/lib/libtaos.so /usr/lib/libtaos.so.1 2>/dev/null
ln -s /home/TDengine/include/client/taos.h /usr/include/taos.h 2>/dev/null

# run crash_gen auto script
python3 /home/TDengine/tests/pytest/auto_crash_gen_valgrind_cluster.py
@ -0,0 +1,31 @@
#!/bin/bash

mkdir -p /data/wz/crash_gen_logs/
logdir='/data/wz/crash_gen_logs/'
date_tag=`date +%Y%m%d-%H%M%S`
hostname='vm_valgrind_'

for i in {1..50}
do
    echo $i
    # create a container and start crash_gen inside it
    log_dir=${logdir}${hostname}${date_tag}_${i}
    docker run -d --hostname=${hostname}${date_tag}_${i} --name ${hostname}${date_tag}_${i} --privileged -v ${log_dir}:/corefile/ -- crash_gen:v1.0 sleep 99999999999999
    echo create docker ${hostname}${date_tag}_${i}
    docker exec -d ${hostname}${date_tag}_${i} sh -c 'rm -rf /home/taos-connector-python'
    docker cp /data/wz/TDengine ${hostname}${date_tag}_${i}:/home/TDengine
    docker cp /data/wz/taos-connector-python ${hostname}${date_tag}_${i}:/home/taos-connector-python
    echo copy TDengine in container done!
    docker exec ${hostname}${date_tag}_${i} sh -c 'sh /home/TDengine/tests/pytest/auto_run_valgrind.sh '
    if [ $? -eq 0 ]
    then
        echo crash_gen exited as expected, run succeeded

        # remove the container when the run succeeded
        docker stop ${hostname}${date_tag}_${i}
        docker rm -f ${hostname}${date_tag}_${i}
    else
        docker stop ${hostname}${date_tag}_${i}
        echo crash_gen exited with an error, run failed
    fi
done
@ -149,6 +149,8 @@ class TDTestCase:

        tmqCom.waitSubscriptionExit(tdSql, topicFromStb)
        tdSql.query("drop topic %s"%topicFromStb)

        tmqCom.stopTmqSimProcess(processorName="tmq_sim")

        tdLog.printNoPrefix("======== test case 1 end ...... ")

@ -178,6 +180,8 @@ class TDTestCase:
        paraDict['vgroups'] = self.vgroups
        paraDict['ctbNum'] = self.ctbNum
        paraDict['rowsPerTbl'] = self.rowsPerTbl

        tmqCom.initConsumerTable()

        tdLog.info("create topics from stb")
        topicFromDb = 'topic_db'
@ -203,10 +207,10 @@ class TDTestCase:
        tmqCom.getStartCommitNotifyFromTmqsim('cdb',1)

        tdLog.info("create some new child table and insert data for latest mode")
        paraDict["batchNum"] = 100
        paraDict["batchNum"] = 10
        paraDict["ctbPrefix"] = 'newCtb'
        paraDict["ctbNum"] = 10
        paraDict["rowsPerTbl"] = 10
        paraDict["ctbNum"] = 100
        paraDict["rowsPerTbl"] = 100
        tmqCom.insert_data_with_autoCreateTbl(tdSql,paraDict["dbName"],paraDict["stbName"],paraDict["ctbPrefix"],paraDict["ctbNum"],paraDict["rowsPerTbl"],paraDict["batchNum"])

        tdLog.info("================= restart dnode ===========================")
@ -101,19 +101,20 @@ ELSE ()
BUILD_COMMAND
COMMAND set CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client
COMMAND set CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib
COMMAND go build -a -o taosadapter.exe -ldflags "-s -w -X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
COMMAND go build -a -o taosadapter-debug.exe -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
COMMAND go build -a -o taosadapter.exe -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
# COMMAND go build -a -o taosadapter.exe -ldflags "-s -w -X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
# COMMAND go build -a -o taosadapter-debug.exe -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"

INSTALL_COMMAND
COMMAND cmake -E echo "Comparessing taosadapter.exe"
COMMAND cmake -E time upx taosadapter.exe
# COMMAND cmake -E echo "Comparessing taosadapter.exe"
# COMMAND cmake -E time upx taosadapter.exe
COMMAND cmake -E echo "Copy taosadapter.exe"
COMMAND cmake -E copy taosadapter.exe ${CMAKE_BINARY_DIR}/build/bin/taosadapter.exe
COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/test/cfg/
COMMAND cmake -E echo "Copy taosadapter.toml"
COMMAND cmake -E copy ./example/config/taosadapter.toml ${CMAKE_BINARY_DIR}/test/cfg/
COMMAND cmake -E echo "Copy taosadapter-debug.exe"
COMMAND cmake -E copy taosadapter-debug.exe ${CMAKE_BINARY_DIR}/build/bin
# COMMAND cmake -E echo "Copy taosadapter-debug.exe"
# COMMAND cmake -E copy taosadapter-debug.exe ${CMAKE_BINARY_DIR}/build/bin
)
ELSE (TD_WINDOWS)
MESSAGE("Building taosAdapter on non-Windows")

@ -128,19 +129,20 @@ ELSE ()
PATCH_COMMAND
COMMAND git clean -f -d
BUILD_COMMAND
COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
# COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -ldflags "-s -w -X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
# COMMAND CGO_CFLAGS=-I${CMAKE_CURRENT_SOURCE_DIR}/../include/client CGO_LDFLAGS=-L${CMAKE_BINARY_DIR}/build/lib go build -a -o taosadapter-debug -ldflags "-X github.com/taosdata/taosadapter/v3/version.Version=${taos_version} -X github.com/taosdata/taosadapter/v3/version.CommitID=${taosadapter_commit_sha1}"
INSTALL_COMMAND
COMMAND cmake -E echo "Comparessing taosadapter.exe"
COMMAND upx taosadapter || :
# COMMAND cmake -E echo "Comparessing taosadapter.exe"
# COMMAND upx taosadapter || :
COMMAND cmake -E echo "Copy taosadapter"
COMMAND cmake -E copy taosadapter ${CMAKE_BINARY_DIR}/build/bin
COMMAND cmake -E make_directory ${CMAKE_BINARY_DIR}/test/cfg/
COMMAND cmake -E echo "Copy taosadapter.toml"
COMMAND cmake -E copy ./example/config/taosadapter.toml ${CMAKE_BINARY_DIR}/test/cfg/
COMMAND cmake -E copy ./taosadapter.service ${CMAKE_BINARY_DIR}/test/cfg/
COMMAND cmake -E echo "Copy taosadapter-debug"
COMMAND cmake -E copy taosadapter-debug ${CMAKE_BINARY_DIR}/build/bin
# COMMAND cmake -E echo "Copy taosadapter-debug"
# COMMAND cmake -E copy taosadapter-debug ${CMAKE_BINARY_DIR}/build/bin
)
ENDIF (TD_WINDOWS)
ENDIF ()