feat: merge with 3.0

commit 8d2c364310
@@ -119,3 +119,11 @@ contrib/*
 sql
 debug*/
 .env
+tools/README
+tools/LICENSE
+tools/README.1ST
+tools/THANKS
+tools/NEWS
+tools/COPYING
+tools/BUGS
+tools/taos-tools
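The hunk above adds the `tools/` release files to the ignore list. A quick standalone way to confirm that such patterns match is `git check-ignore` in a scratch repo (an illustrative sketch, not part of the commit):

```shell
# Scratch repo to verify the newly ignored paths are matched by git.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '%s\n' tools/README tools/taos-tools > .gitignore
mkdir tools && touch tools/README
git check-ignore tools/README   # prints: tools/README (exit 0 = ignored)
```

`git check-ignore` prints each pathname that is excluded by an ignore pattern, so an empty output (and non-zero exit) would mean the pattern does not apply.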
Jenkinsfile2 (88 lines changed)

@@ -38,40 +38,21 @@ def pre_test(){
     sh '''
     cd ${WK}
     git reset --hard
-    git remote prune origin
-    git fetch
     cd ${WKC}
     git reset --hard
     git clean -fxd
-    git remote prune origin
-    git fetch
     '''
     script {
-        if (env.CHANGE_TARGET == 'master') {
-            sh '''
-            cd ${WK}
-            git checkout master
-            cd ${WKC}
-            git checkout master
-            '''
-        } else if(env.CHANGE_TARGET == '2.0') {
-            sh '''
-            cd ${WK}
-            git checkout 2.0
-            cd ${WKC}
-            git checkout 2.0
-            '''
-        } else if(env.CHANGE_TARGET == '3.0') {
-            sh '''
-            cd ${WK}
-            git checkout 3.0
-            cd ${WKC}
-            git checkout 3.0
-            '''
-        } else {
-            sh '''
-            cd ${WK}
-            git checkout develop
-            cd ${WKC}
-            git checkout develop
-            '''
-        }
+        sh '''
+        cd ${WK}
+        git checkout ''' + env.CHANGE_TARGET + '''
+        cd ${WKC}
+        git checkout ''' + env.CHANGE_TARGET + '''
+        '''
     }
     if (env.CHANGE_URL =~ /\/TDengine\//) {
         sh '''
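The refactor above collapses the per-branch if/else chain into a single checkout of `env.CHANGE_TARGET`. The same idea in plain shell against a throwaway repo (illustrative only; the branch name and paths are made up):

```shell
# Build a scratch repo with a 3.0 branch, then check out whatever branch
# name the CHANGE_TARGET variable carries -- no per-branch conditionals.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email ci@example.com
git config user.name ci
echo demo > file.txt
git add file.txt
git commit -qm "init"
git branch 3.0
CHANGE_TARGET=3.0
git checkout -q "$CHANGE_TARGET"
git rev-parse --abbrev-ref HEAD    # prints: 3.0
```

Because the checkout target is data rather than control flow, adding a new release branch needs no pipeline change.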
@@ -169,49 +150,24 @@ def pre_test_win(){
     bat '''
     cd %WIN_INTERNAL_ROOT%
     git reset --hard
-    git remote prune origin
-    git fetch
     '''
     bat '''
     cd %WIN_COMMUNITY_ROOT%
     git reset --hard
-    git remote prune origin
-    git fetch
     '''
     script {
-        if (env.CHANGE_TARGET == 'master') {
-            bat '''
-            cd %WIN_INTERNAL_ROOT%
-            git checkout master
-            '''
-            bat '''
-            cd %WIN_COMMUNITY_ROOT%
-            git checkout master
-            '''
-        } else if(env.CHANGE_TARGET == '2.0') {
-            bat '''
-            cd %WIN_INTERNAL_ROOT%
-            git checkout 2.0
-            '''
-            bat '''
-            cd %WIN_COMMUNITY_ROOT%
-            git checkout 2.0
-            '''
-        } else if(env.CHANGE_TARGET == '3.0') {
-            bat '''
-            cd %WIN_INTERNAL_ROOT%
-            git checkout 3.0
-            '''
-            bat '''
-            cd %WIN_COMMUNITY_ROOT%
-            git checkout 3.0
-            '''
-        } else {
-            bat '''
-            cd %WIN_INTERNAL_ROOT%
-            git checkout develop
-            '''
-            bat '''
-            cd %WIN_COMMUNITY_ROOT%
-            git checkout develop
-            '''
-        }
+        bat '''
+        cd %WIN_INTERNAL_ROOT%
+        git checkout ''' + env.CHANGE_TARGET + '''
+        '''
+        bat '''
+        cd %WIN_COMMUNITY_ROOT%
+        git checkout ''' + env.CHANGE_TARGET + '''
+        '''
     }
     script {
         if (env.CHANGE_URL =~ /\/TDengine\//) {
README-CN.md (58 lines changed)

@@ -20,33 +20,27 @@

 # TDengine 简介

-TDengine 是一款高性能、分布式、支持 SQL 的时序数据库(Time-Series Database)。而且除时序数据库功能外,它还提供缓存、数据订阅、流式计算等功能,最大程度减少研发和运维的复杂度,且核心代码,包括集群功能全部开源(开源协议,AGPL v3.0)。与其他时序数据数据库相比,TDengine 有以下特点:
+TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供缓存、数据订阅、流式计算等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。TDengine 的主要优势如下:

-- **高性能**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,而且存储空间也大为节省。
+- 高性能:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,存储空间不及通用数据库的1/10。

-- **分布式**:通过原生分布式的设计,TDengine 提供了水平扩展的能力,只需要增加节点就能获得更强的数据处理能力,同时通过多副本机制保证了系统的高可用。
+- 云原生:通过原生分布式的设计,充分利用云平台的优势,TDengine 提供了水平扩展能力,具备弹性、韧性和可观测性,支持k8s部署,可运行在公有云、私有云和混合云上。

-- **支持 SQL**:TDengine 采用 SQL 作为数据查询语言,减少学习和迁移成本,同时提供 SQL 扩展来处理时序数据特有的分析,而且支持方便灵活的 schemaless 数据写入。
+- 极简时序数据平台:TDengine 内建消息队列、缓存、流式计算等功能,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低系统的复杂度,降低应用开发和运营成本。

-- **All in One**:将数据库、消息队列、缓存、流式计算等功能融合一起,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低应用开发和维护成本。
+- 分析能力:支持 SQL,同时为时序数据特有的分析提供SQL扩展。通过超级表、存储计算分离、分区分片、预计算、自定义函数等技术,TDengine 具备强大的分析能力。

-- **零管理**:安装、集群几秒搞定,无任何依赖,不用分库分表,系统运行状态监测能与 Grafana 或其他运维工具无缝集成。
+- 简单易用:无任何依赖,安装、集群几秒搞定;提供REST以及各种语言连接器,与众多第三方工具无缝集成;提供命令行程序,便于管理和即席查询;提供各种运维工具。

-- **零学习成本**:采用 SQL 查询语言,支持 Python、Java、C/C++、Go、Rust、Node.js 等多种编程语言,与 MySQL 相似,零学习成本。
+- 核心开源:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。

-- **无缝集成**:不用一行代码,即可与 Telegraf、Grafana、EMQX、Prometheus、StatsD、collectd、Matlab、R 等第三方工具无缝集成。
-
-- **互动 Console**: 通过命令行 console,不用编程,执行 SQL 语句就能做即席查询、各种数据库的操作、管理以及集群的维护.
-
-TDengine 可以广泛应用于物联网、工业互联网、车联网、IT 运维、能源、金融等领域,让大量设备、数据采集器每天产生的高达 TB 甚至 PB 级的数据能得到高效实时的处理,对业务的运行状态进行实时的监测、预警,从大数据中挖掘出商业价值。
-
 # 文档

-TDengine 采用传统的关系数据库模型,您可以像使用关系型数据库 MySQL 一样来使用它。但由于引入了超级表,一个采集点一张表的概念,建议您在使用前仔细阅读一遍下面的文档,特别是 [数据模型](https://www.taosdata.com/cn/documentation/architecture) 与 [数据建模](https://www.taosdata.com/cn/documentation/model)。除本文档之外,欢迎 [下载产品白皮书](https://www.taosdata.com/downloads/TDengine%20White%20Paper.pdf)。
+关于完整的使用手册,系统架构和更多细节,请参考 [TDengine 文档](https://docs.taosdata.com) 或者 [English Version](https://docs.tdengine.com)。

 # 构建

-TDengine 目前 2.0 版服务器仅能在 Linux 系统上安装和运行,后续会支持 Windows、macOS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。任何 OS 的应用也可以选择 RESTful 接口连接服务器 taosd。CPU 支持 X64/ARM64/MIPS64/Alpha64,后续会支持 ARM32、RISC-V 等 CPU 架构。用户可根据需求选择通过[源码](https://www.taosdata.com/cn/getting-started/#通过源码安装)或者[安装包](https://www.taosdata.com/cn/getting-started/#通过安装包安装)来安装。本快速指南仅适用于通过源码安装。
+TDengine 目前 2.0 版服务器仅能在 Linux 系统上安装和运行,后续会支持 Windows、macOS 等系统。客户端可以在 Windows 或 Linux 上安装和运行。任何 OS 的应用也可以选择 RESTful 接口连接服务器 taosd。CPU 支持 X64/ARM64/MIPS64/Alpha64,后续会支持 ARM32、RISC-V 等 CPU 架构。用户可根据需求选择通过源码或者[安装包](https://docs.taosdata.com/get-started/package/)来安装。本快速指南仅适用于通过源码安装。

 ## 安装工具
@@ -188,7 +182,7 @@ apt install autoconf
 cmake .. -DJEMALLOC_ENABLED=true
 ```

-在 X86-64、X86、arm64、arm32 和 mips64 平台上,TDengine 生成脚本可以自动检测机器架构。也可以手动配置 CPUTYPE 参数来指定 CPU 类型,如 aarch64 或 aarch32 等。
+在 X86-64、X86、arm64 平台上,TDengine 生成脚本可以自动检测机器架构。也可以手动配置 CPUTYPE 参数来指定 CPU 类型,如 aarch64 等。

 aarch64:
@@ -196,18 +190,6 @@ aarch64:
 cmake .. -DCPUTYPE=aarch64 && cmake --build .
 ```

-aarch32:
-
-```bash
-cmake .. -DCPUTYPE=aarch32 && cmake --build .
-```
-
-mips64:
-
-```bash
-cmake .. -DCPUTYPE=mips64 && cmake --build .
-```
-
 ### Windows 系统

 如果你使用的是 Visual Studio 2013 版本:
@@ -351,19 +333,14 @@ Query OK, 2 row(s) in set (0.001700s)

 TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java、Python、Go、Node.js、C# 、RESTful 等,便于用户快速开发应用:

-- [Java](https://www.taosdata.com/cn/documentation/connector/java)
+- [Java](https://docs.taosdata.com/reference/connector/java/)
 - [C/C++](https://www.taosdata.com/cn/documentation/connector#c-cpp)
-- [Python](https://www.taosdata.com/cn/documentation/connector#python)
-- [Go](https://www.taosdata.com/cn/documentation/connector#go)
-- [RESTful API](https://www.taosdata.com/cn/documentation/connector#restful)
-- [Node.js](https://www.taosdata.com/cn/documentation/connector#nodejs)
-- [Rust](https://www.taosdata.com/cn/documentation/connector/rust)
+- [Python](https://docs.taosdata.com/reference/connector/python/)
+- [Go](https://docs.taosdata.com/reference/connector/go/)
+- [Node.js](https://docs.taosdata.com/reference/connector/node/)
+- [Rust](https://docs.taosdata.com/reference/connector/rust/)
+- [C#](https://docs.taosdata.com/reference/connector/csharp/)
+- [RESTful API](https://docs.taosdata.com/reference/rest-api/)

 ## 第三方连接器

@@ -372,6 +349,7 @@ TDengine 社区生态中也有一些非常友好的第三方连接器,可以
 - [Rust Bindings](https://github.com/songtianyi/tdengine-rust-bindings/tree/master/examples)
 - [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
 - [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/examples/lua)
+- [PHP](https://www.taosdata.com/en/documentation/connector#c-cpp)

 # 运行和添加测试例
README.md (64 lines changed)

@@ -20,30 +20,23 @@ English | [简体中文](README-CN.md) | We are hiring, check [here](https://tde

 # What is TDengine?

-TDengine is a high-performance, scalable time-series database with SQL support. Its code including cluster feature is open source under [GNU AGPL v3.0](http://www.gnu.org/licenses/agpl-3.0.html). Besides the database, it provides caching, stream processing, data subscription and other functionalities to reduce the complexity and cost of development and operation. TDengine differentiates itself from other TSDBs with the following advantages.
+TDengine is an open source, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. Below are the most outstanding advantages of TDengine:

-- **High Performance**: TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage cost and compute costs, with an innovatively designed and purpose-built storage engine.
+- High-Performance: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression.

-- **Scalable**: TDengine provides out-of-box scalability and high-availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
+- Simplified Solution: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.

-- **SQL Support**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
+- Cloud Native: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine can be deployed on public, private or hybrid clouds.

-- **All in One**: TDengine has built-in caching, stream processing and data subscription functions, it is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler and easy to maintain.
+- Open Source: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.7k stars on GitHub, an active developer community, and over 137k running instances worldwide.

-- **Seamless Integration**: Without a single line of code, TDengine provide seamless integration with third-party tools such as Telegraf, Grafana, EMQX, Prometheus, StatsD, collectd, etc. More will be integrated.
+- Ease of Use: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.

-- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
+- Easy Data Analytics: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.

-- **Zero Learning Cost**: With SQL as the query language, support for ubiquitous tools like Python, Java, C/C++, Go, Rust, Node.js connectors, there is zero learning cost.
-
-- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
-
-TDengine can be widely applied to Internet of Things (IoT), Connected Vehicles, Industrial IoT, DevOps, energy, finance and many other scenarios.
-
 # Documentation

-For user manual, system design and architecture, engineering blogs, refer to [TDengine Documentation](https://www.taosdata.com/en/documentation/)(中文版请点击[这里](https://www.taosdata.com/cn/documentation20/))
-for details. The documentation from our website can also be downloaded locally from _documentation/tdenginedocs-en_ or _documentation/tdenginedocs-cn_.
+For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) (中文版请点击[这里](https://docs.taosdata.com))

 # Building
@@ -205,8 +198,8 @@ apt install autoconf
 cmake .. -DJEMALLOC_ENABLED=true
 ```

-TDengine build script can detect the host machine's architecture on X86-64, X86, arm64, arm32 and mips64 platform.
-You can also specify CPUTYPE option like aarch64 or aarch32 too if the detection result is not correct:
+TDengine build script can detect the host machine's architecture on X86-64, X86, arm64 platform.
+You can also specify CPUTYPE option like aarch64 too if the detection result is not correct:

 aarch64:
@@ -214,18 +207,6 @@ aarch64:
 cmake .. -DCPUTYPE=aarch64 && cmake --build .
 ```

-aarch32:
-
-```bash
-cmake .. -DCPUTYPE=aarch32 && cmake --build .
-```
-
-mips64:
-
-```bash
-cmake .. -DCPUTYPE=mips64 && cmake --build .
-```
-
 ### On Windows platform

 If you use the Visual Studio 2013, please open a command window by executing "cmd.exe".
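Both READMEs now limit auto-detection to X86-64, X86 and arm64. The detection itself presumably keys off the machine string reported by the kernel; a hedged sketch of that idea (not the actual TDengine build script):

```shell
# Map `uname -m` output to a CPUTYPE-style value, defaulting to unknown
# for architectures (e.g. the dropped arm32/mips64) that are no longer handled.
machine=$(uname -m)
case "$machine" in
    x86_64)        cputype=x64 ;;
    i386|i686)     cputype=x86 ;;
    aarch64|arm64) cputype=aarch64 ;;
    *)             cputype=unknown ;;
esac
echo "detected CPUTYPE candidate: $cputype"
```

When detection yields `unknown`, the user would pass `-DCPUTYPE=...` explicitly, mirroring the manual override described in the README.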
@ -381,13 +362,14 @@ Query OK, 2 row(s) in set (0.001700s)
|
||||||
|
|
||||||
TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
|
TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
|
||||||
|
|
||||||
- [Java](https://www.taosdata.com/en/documentation/connector/java)
|
- [Java](https://docs.taosdata.com/reference/connector/java/)
|
||||||
- [C/C++](https://www.taosdata.com/en/documentation/connector#c-cpp)
|
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
|
||||||
- [Python](https://www.taosdata.com/en/documentation/connector#python)
|
- [Python](https://docs.taosdata.com/reference/connector/python/)
|
||||||
- [Go](https://www.taosdata.com/en/documentation/connector#go)
|
- [Go](https://docs.taosdata.com/reference/connector/go/)
|
||||||
- [RESTful API](https://www.taosdata.com/en/documentation/connector#restful)
|
- [Node.js](https://docs.taosdata.com/reference/connector/node/)
|
||||||
- [Node.js](https://www.taosdata.com/en/documentation/connector#nodejs)
|
- [Rust](https://docs.taosdata.com/reference/connector/rust/)
|
||||||
- [Rust](https://www.taosdata.com/en/documentation/connector/rust)
|
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
|
||||||
|
- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
|
||||||
|
|
||||||
## Third Party Connectors
|
## Third Party Connectors
|
||||||
|
|
||||||
|
@ -396,21 +378,13 @@ The TDengine community has also kindly built some of their own connectors! Follo
|
||||||
- [Rust Bindings](https://github.com/songtianyi/tdengine-rust-bindings/tree/master/examples)
|
- [Rust Bindings](https://github.com/songtianyi/tdengine-rust-bindings/tree/master/examples)
|
||||||
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
|
- [.Net Core Connector](https://github.com/maikebing/Maikebing.EntityFrameworkCore.Taos)
|
||||||
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
|
- [Lua Connector](https://github.com/taosdata/TDengine/tree/develop/tests/examples/lua)
|
||||||
|
- [PHP](https://www.taosdata.com/en/documentation/connector#c-cpp)
|
||||||
|
|
||||||
# How to run the test cases and how to add a new test case
|
# How to run the test cases and how to add a new test case
|
||||||
|
|
||||||
TDengine's test framework and all test cases are fully open source.
|
TDengine's test framework and all test cases are fully open source.
|
||||||
Please refer to [this document](https://github.com/taosdata/TDengine/blob/develop/tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run test and develop new test case.
|
Please refer to [this document](https://github.com/taosdata/TDengine/blob/develop/tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run test and develop new test case.
|
||||||
|
|
||||||
# TDengine Roadmap
|
|
||||||
|
|
||||||
- Support event-driven stream computing
|
|
||||||
- Support user defined functions
|
|
||||||
- Support MQTT connection
|
|
||||||
- Support OPC connection
|
|
||||||
- Support Hadoop, Spark connections
|
|
||||||
- Support Tableau and other BI tools
|
|
||||||
|
|
||||||
# Contribute to TDengine
|
# Contribute to TDengine
|
||||||
|
|
||||||
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project.
|
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project.
|
||||||
|
|
|
@@ -22,7 +22,9 @@ ELSEIF (TD_WINDOWS)
     INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taos.exe DESTINATION .)
     INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taosd.exe DESTINATION .)
     INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/udfd.exe DESTINATION .)
-    INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taosBenchmark.exe DESTINATION .)
+    IF (BUILD_TOOLS)
+        INSTALL(FILES ${EXECUTABLE_OUTPUT_PATH}/taosBenchmark.exe DESTINATION .)
+    ENDIF ()

     IF (TD_MVN_INSTALLED)
         INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.38-dist.jar DESTINATION connector/jdbc)
@@ -2,7 +2,7 @@
 # taos-tools
 ExternalProject_Add(taos-tools
     GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
-    GIT_TAG 57bdfbf
+    GIT_TAG 79bf23d
     SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
     BINARY_DIR ""
     #BUILD_IN_SOURCE TRUE
Binary image file not shown (Before: 8.8 KiB, After: 37 KiB).
@@ -17,7 +17,6 @@ Currently, TDengine's native interface connectors can support platforms such as
 | **X86 64bit** | **Win32** | ● | ● | ● | ● | ○ | ○ | ● |
 | **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
 | **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
-| **ARM32** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
 | **MIPS** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○ |

 Where ● means the official test verification passed, ○ means the unofficial test verification passed, -- means no assurance.
@@ -47,27 +47,28 @@ If the displayed content is followed by `...` you can use this command to change

 You can change the behavior of TDengine CLI by specifying command-line parameters. The following parameters are commonly used.

-- -h, --host=HOST: FQDN of the server where the TDengine server is to be connected. Default is to connect to the local service
-- -P, --port=PORT: Specify the port number to be used by the server. Default is `6030`
-- -u, --user=USER: the user name to use when connecting. Default is `root`
-- -p, --password=PASSWORD: the password to use when connecting to the server. Default is `taosdata`
+- -h HOST: FQDN of the server where the TDengine server is to be connected. Default is to connect to the local service
+- -P PORT: Specify the port number to be used by the server. Default is `6030`
+- -u USER: the user name to use when connecting. Default is `root`
+- -p PASSWORD: the password to use when connecting to the server. Default is `taosdata`
 - -?, --help: print out all command-line arguments

 And many more parameters.

-- -c, --config-dir: Specify the directory where configuration file exists. The default is `/etc/taos`, and the default name of the configuration file in this directory is `taos.cfg`
-- -C, --dump-config: Print the configuration parameters of `taos.cfg` in the default directory or specified by -c
-- -d, --database=DATABASE: Specify the database to use when connecting to the server
-- -D, --directory=DIRECTORY: Import the SQL script file in the specified path
-- -f, --file=FILE: Execute the SQL script file in non-interactive mode
-- -k, --check=CHECK: Specify the table to be checked
-- -l, --pktlen=PKTLEN: Test package size to be used for network testing
-- -n, --netrole=NETROLE: test scope for network connection test, default is `startup`. The value can be `client`, `server`, `rpc`, `startup`, `sync`, `speed`, or `fqdn`.
-- -r, --raw-time: output the timestamp format as unsigned 64-bits integer (uint64_t in C language)
-- -s, --commands=COMMAND: execute SQL commands in non-interactive mode
-- -S, --pkttype=PKTTYPE: Specify the packet type used for network testing. The default is TCP, can be specified as either TCP or UDP when `speed` is specified to `netrole` parameter
-- -T, --thread=THREADNUM: The number of threads to import data in multi-threaded mode
-- -s, --commands: Run TDengine CLI commands without entering the terminal
+- -a AUTHSTR: The auth string to use when connecting to the server
+- -A: Generate auth string from password
+- -c CONFIGDIR: Specify the directory where configuration file exists. The default is `/etc/taos`, and the default name of the configuration file in this directory is `taos.cfg`
+- -C: Print the configuration parameters of `taos.cfg` in the default directory or specified by -c
+- -d DATABASE: Specify the database to use when connecting to the server
+- -f FILE: Execute the SQL script file in non-interactive mode
+- -k: Check the service status. 0: unavailable, 1: network ok, 2: service ok, 3: service degraded, 4: exiting
+- -l PKTLEN: Test package length to be used for network testing
+- -n NETROLE: test scope for network connection test, default is `client`. The value can be `client` or `server`
+- -N PKTNUM: Test package numbers to be used for network testing
+- -r: output the timestamp format as unsigned 64-bits integer (uint64_t in C language)
+- -s COMMAND: execute SQL commands in non-interactive mode
+- -t: Check the details of the service status, same statuses as -k
+- -w DISPLAYWIDTH: Specify the client-side column display width
 - -z, --timezone=TIMEZONE: Specify time zone. Default is the value of current configuration file
 - -V, --version: Print out the current version number
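With `-r`, timestamps are printed as raw unsigned integers rather than formatted datetimes (milliseconds at the default precision). On a GNU system such a value can be converted back like this (the sample value is illustrative, not taken from a real query):

```shell
# Convert a raw millisecond timestamp, as printed under `taos -r`, back to UTC.
raw_ms=1661990400000
date -u -d "@$((raw_ms / 1000))" +%Y-%m-%dT%H:%M:%SZ   # prints: 2022-09-01T00:00:00Z
```

Note this drops the sub-second part; databases configured for microsecond or nanosecond precision would need the divisor adjusted accordingly.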
@@ -5,12 +5,11 @@ description: "List of platforms supported by TDengine server, client, and connec

 ## List of supported platforms for TDengine server

-| | **CentOS 7/8** | **Ubuntu 16/18/20** | **Other Linux** |
-| ------------ | -------------- | ------------------- | --------------- |
-| X64 | ● | ● | |
-| MIPS64 | | | ● |
-| ARM64 | | ○ | ○ |
-| Alpha64 | | | ○ |
+| | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **Other Linux** | **UOS** | **Kylin** | **Ningsi V60/V80** | **HUAWEI EulerOS** |
+| ------------------ | ----------------- | ---------------- | ---------------- | --------------- | ------- | --------- | ------------------ | ------------------ |
+| X64 | ● | ● | ● | | ● | ● | ● | |
+| Raspberry Pi ARM64 | | | | ● | | | | |
+| HUAWEI cloud ARM64 | | | | | | | | ● |

 Note: ● means officially tested and verified, ○ means unofficially tested and verified.

@@ -20,15 +19,15 @@ TDengine's connector can support a wide range of platforms, including X64/X86/AR

 The comparison matrix is as follows.

-| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **ARM32** | **MIPS** | **Alpha** |
-| ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | --------- | --------- |
-| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** | **Linux** |
-| **C/C++** | ● | ● | ● | ○ | ● | ● | ● | ● |
-| **JDBC** | ● | ● | ● | ○ | ● | ● | ● | ● |
-| **Python** | ● | ● | ● | ○ | ● | ● | ● | -- |
-| **Go** | ● | ● | ● | ○ | ● | ● | ○ | -- |
-| **NodeJs** | ● | ● | ○ | ○ | ● | ● | ○ | -- |
-| **C#** | ● | ● | ○ | ○ | ○ | ○ | ○ | -- |
-| **RESTful** | ● | ● | ● | ● | ● | ● | ● | ● |
+| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **MIPS** | **Alpha** |
+| ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | --------- |
+| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** |
+| **C/C++** | ● | ● | ● | ○ | ● | ● | ● |
+| **JDBC** | ● | ● | ● | ○ | ● | ● | ● |
+| **Python** | ● | ● | ● | ○ | ● | ● | -- |
+| **Go** | ● | ● | ● | ○ | ● | ○ | -- |
+| **NodeJs** | ● | ● | ○ | ○ | ● | ○ | -- |
+| **C#** | ● | ● | ○ | ○ | ○ | ○ | -- |
+| **RESTful** | ● | ● | ● | ● | ● | ● | ● |

 Note: ● means the official test is verified, ○ means the unofficial test is verified, -- means not verified.
@@ -25,7 +25,6 @@ All executable files of TDengine are in the _/usr/local/taos/bin_ directory by d
 - _taosBenchmark_: TDengine testing tool
 - _remove.sh_: script to uninstall TDengine, please execute it carefully, link to the **rmtaos** command in the /usr/bin directory. Will remove the TDengine installation directory `/usr/local/taos`, but will keep `/etc/taos`, `/var/lib/taos`, `/var/log/taos`
 - _taosadapter_: server-side executable that provides RESTful services and accepts writing requests from a variety of other softwares
-- _tarbitrator_: provides arbitration for two-node cluster deployments
 - _TDinsight.sh_: script to download TDinsight and install it
 - _set_core.sh_: script for setting up the system to generate core dump files for easy debugging
 - _taosd-dump-cfg.gdb_: script to facilitate debugging of taosd's gdb execution.
@@ -4,7 +4,7 @@ sidebar_label: 文档首页
 slug: /
 ---

-TDengine是一款开源、[高性能](https://www.taosdata.com/fast)、云原生的时序数据库(Time-Series Database, TSDB), 它专为物联网、工业互联网、金融等场景优化设计。同时它还带有内建的缓存、流式计算、数据订阅等系统功能,能大幅减少系统设计的复杂度,降低研发和运营成本,是一极简的时序数据处理平台。本文档是 TDengine 用户手册,主要是介绍 TDengine 的基本概念、安装、使用、功能、开发接口、运营维护、TDengine 内核设计等等,它主要是面向架构师、开发者与系统管理员的。
+TDengine是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的时序数据库(Time-Series Database, TSDB), 它专为物联网、工业互联网、金融等场景优化设计。同时它还带有内建的缓存、流式计算、数据订阅等系统功能,能大幅减少系统设计的复杂度,降低研发和运营成本,是一极简的时序数据处理平台。本文档是 TDengine 用户手册,主要是介绍 TDengine 的基本概念、安装、使用、功能、开发接口、运营维护、TDengine 内核设计等等,它主要是面向架构师、开发者与系统管理员的。

 TDengine 充分利用了时序数据的特点,提出了“一个数据采集点一张表”与“超级表”的概念,设计了创新的存储引擎,让数据的写入、查询和存储效率都得到极大的提升。为正确理解并使用TDengine, 无论如何,请您仔细阅读[基本概念](./concept)一章。
@ -3,7 +3,7 @@ title: 产品简介
|
||||||
toc_max_heading_level: 2
|
toc_max_heading_level: 2
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供[缓存](/develop/cache/)、[数据订阅](/develop/subscribe)、[流式计算](/develop/continuous-query)等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。
|
TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/tdengine/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的时序数据库 (Time-Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外,TDengine 还提供[缓存](/develop/cache/)、[数据订阅](/develop/subscribe)、[流式计算](/develop/continuous-query)等功能,是一款极简的时序数据处理平台,最大程度地减小系统设计的复杂度,降低研发和运营成本。
|
||||||
|
|
||||||
本章节介绍TDengine的主要功能、竞争优势、适用场景、与其他数据库的对比测试等等,让大家对TDengine有个整体的了解。
|
本章节介绍TDengine的主要功能、竞争优势、适用场景、与其他数据库的对比测试等等,让大家对TDengine有个整体的了解。
|
||||||
|
|
||||||
|
@ -33,17 +33,17 @@ TDengine的主要功能如下:
|
||||||
|
|
||||||
由于 TDengine 充分利用了[时序数据特点](https://www.taosdata.com/blog/2019/07/09/105.html),比如结构化、无需事务、很少删除或更新、写多读少等等,设计了全新的针对时序数据的存储引擎和计算引擎,因此与其他时序数据库相比,TDengine 有以下特点:
|
由于 TDengine 充分利用了[时序数据特点](https://www.taosdata.com/blog/2019/07/09/105.html),比如结构化、无需事务、很少删除或更新、写多读少等等,设计了全新的针对时序数据的存储引擎和计算引擎,因此与其他时序数据库相比,TDengine 有以下特点:
|
||||||
|
|
||||||
- **高性能**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,存储空间不及通用数据库的1/10。
|
- **[高性能](https://www.taosdata.com/tdengine/fast)**:通过创新的存储引擎设计,无论是数据写入还是查询,TDengine 的性能比通用数据库快 10 倍以上,也远超其他时序数据库,存储空间不及通用数据库的1/10。
|
||||||
|
|
||||||
- **云原生**:通过原生分布式的设计,充分利用云平台的优势,TDengine 提供了水平扩展能力,具备弹性、韧性和可观测性,支持k8s部署,可运行在公有云、私有云和混合云上。
|
- **[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)**:通过原生分布式的设计,充分利用云平台的优势,TDengine 提供了水平扩展能力,具备弹性、韧性和可观测性,支持k8s部署,可运行在公有云、私有云和混合云上。
|
||||||
|
|
||||||
- **极简时序数据平台**:TDengine 内建消息队列、缓存、流式计算等功能,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低系统的复杂度,降低应用开发和运营成本。
|
- **[极简时序数据平台](https://www.taosdata.com/tdengine/simplified_solution_for_time-series_data_processing)**:TDengine 内建消息队列、缓存、流式计算等功能,应用无需再集成 Kafka/Redis/HBase/Spark 等软件,大幅降低系统的复杂度,降低应用开发和运营成本。
|
||||||
|
|
||||||
- **分析能力**:支持 SQL,同时为时序数据特有的分析提供SQL扩展。通过超级表、存储计算分离、分区分片、预计算、自定义函数等技术,TDengine 具备强大的分析能力。
|
- **[分析能力](https://www.taosdata.com/tdengine/easy_data_analytics)**:支持 SQL,同时为时序数据特有的分析提供SQL扩展。通过超级表、存储计算分离、分区分片、预计算、自定义函数等技术,TDengine 具备强大的分析能力。
|
||||||
|
|
||||||
- **简单易用**:无任何依赖,安装、集群几秒搞定;提供REST以及各种语言连接器,与众多第三方工具无缝集成;提供命令行程序,便于管理和即席查询;提供各种运维工具。
|
- **[简单易用](https://www.taosdata.com/tdengine/ease_of_use)**:无任何依赖,安装、集群几秒搞定;提供REST以及各种语言连接器,与众多第三方工具无缝集成;提供命令行程序,便于管理和即席查询;提供各种运维工具。
|
||||||
|
|
||||||
- **核心开源**:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。
|
- **[核心开源](https://www.taosdata.com/tdengine/open_source_time-series_database)**:TDengine 的核心代码包括集群功能全部开源,截止到2022年8月1日,全球超过 135.9k 个运行实例,GitHub Star 18.7k,Fork 4.4k,社区活跃。
|
||||||
|
|
||||||
采用 TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。表现在几个方面:
|
采用 TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。表现在几个方面:
|
||||||
|
|
||||||
|
|
|
@ -47,9 +47,7 @@ Docker version 20.10.3, build 48d30b5
|
||||||
|
|
||||||
## 运行 TDengine CLI
|
## 运行 TDengine CLI
|
||||||
|
|
||||||
有两种方式在 Docker 环境下使用 TDengine CLI (taos) 访问 TDengine.
|
进入容器,执行 taos
|
||||||
- 进入容器后,执行 taos
|
|
||||||
- 在宿主机使用容器映射到主机的端口进行访问 `taos -h <hostname> -P <port>`
|
|
||||||
|
|
||||||
```
|
```
|
||||||
$ taos
|
$ taos
|
||||||
|
@ -62,47 +60,11 @@ taos>
|
||||||
|
|
||||||
```
|
```
|
||||||
|
|
||||||
## 访问 REST 接口
|
|
||||||
|
|
||||||
taosAdapter 是 TDengine 中提供 REST 服务的组件。默认 Docker 镜像会同时启动 TDengine 后台服务 taosd 和 taosAdapter。
|
|
||||||
|
|
||||||
可以在宿主机使用 curl 通过 RESTful 端口访问 Docker 容器内的 TDengine server。
|
|
||||||
|
|
||||||
```
|
|
||||||
curl -L -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
|
|
||||||
```
|
|
||||||
|
|
||||||
输出示例如下:
|
|
||||||
|
|
||||||
```
|
|
||||||
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["wal","TINYINT",1],["fsync","INT",4],["comp","TINYINT",1],["cacheModel","VARCHAR",11],["precision","VARCHAR",2],["single_stable","BOOL",1],["status","VARCHAR",10],["retention","VARCHAR",60]],"data":[["information_schema",null,null,14,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"],["performance_schema",null,null,3,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"ready"]],"rows":2}
|
|
||||||
```
|
|
||||||
|
|
||||||
这条命令通过 REST API 访问 TDengine server,连接的是容器映射到宿主机的 6041 端口。
|
|
||||||
|
|
||||||
TDengine REST API 详情请参考[官方文档](/reference/rest-api/)。
|
|
||||||
|
|
||||||
## 单独启动 REST 服务
|
|
||||||
|
|
||||||
如果想只启动 `taosadapter`:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker run -d --network=host --name tdengine-taosa -e TAOS_FIRST_EP=tdengine-taosd tdengine/tdengine:3.0.0.0 taosadapter
|
|
||||||
```
|
|
||||||
|
|
||||||
只启动 `taosd`:
|
|
||||||
|
|
||||||
```bash
|
|
||||||
docker run -d --network=host --name tdengine-taosd -e TAOS_DISABLE_ADAPTER=true tdengine/tdengine:3.0.0.0
|
|
||||||
```
|
|
||||||
|
|
||||||
注意:以上为容器使用 host 网络模式单独部署 taosAdapter 的命令行参数。如使用其他网络访问方式,请设置 hostname、DNS 等必要的网络配置。
|
|
||||||
|
|
||||||
## 写入数据
|
## 写入数据
|
||||||
|
|
||||||
可以使用 TDengine 的自带工具 taosBenchmark 快速体验 TDengine 的写入。
|
可以使用 TDengine 的自带工具 taosBenchmark 快速体验 TDengine 的写入。
|
||||||
|
|
||||||
假定启动容器时已经将容器的6030端口映射到了宿主机的6030端口,则可以直接在宿主机命令行启动 taosBenchmark,也可以进入容器后执行:
|
进入容器,启动 taosBenchmark:
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
$ taosBenchmark
|
$ taosBenchmark
|
||||||
|
@ -117,7 +79,7 @@ docker run -d --network=host --name tdengine-taosd -e TAOS_DISABLE_ADAPTER=true
|
||||||
|
|
||||||
## 体验查询
|
## 体验查询
|
||||||
|
|
||||||
使用上述 taosBenchmark 插入数据后,可以在 TDengine CLI 输入查询命令,体验查询速度。可以直接在宿主机上也可以进入容器后运行。
|
使用上述 taosBenchmark 插入数据后,可以在 TDengine CLI 输入查询命令,体验查询速度。
|
||||||
|
|
||||||
查询超级表下记录总条数:
|
查询超级表下记录总条数:
|
||||||
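例如,可以用如下 SQL 统计总条数(示意写法,`test.meters` 为 taosBenchmark 默认建立的超级表):

```sql
select count(*) from test.meters;
```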
|
|
||||||
|
@ -148,3 +110,7 @@ taos> select avg(current), max(voltage), min(phase) from test.meters where group
|
||||||
```sql
|
```sql
|
||||||
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
|
taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## 其它
|
||||||
|
|
||||||
|
更多关于在 Docker 环境下使用 TDengine 的细节,请参考 [在 Docker 下使用 TDengine](../../reference/docker)
|
|
@ -11,7 +11,7 @@ import TabItem from "@theme/TabItem";
|
||||||
|
|
||||||
:::
|
:::
|
||||||
|
|
||||||
TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 及衍生系统,rpm 支持 CentOS/RHEL/SUSE 及衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包。也支持通过 `apt-get` 工具从线上进行安装。
|
TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 及衍生系统,rpm 支持 CentOS/RHEL/SUSE 及衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。
|
||||||
|
|
||||||
## 安装
|
## 安装
|
||||||
|
|
||||||
|
|
|
@ -1,84 +1,128 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 连续查询
|
sidebar_label: 流式计算
|
||||||
description: "连续查询是一个按照预设频率自动执行的查询功能,提供按照时间窗口的聚合查询能力,是一种简化的时间驱动流式计算。"
|
description: "TDengine 流式计算将数据的写入、预处理、复杂分析、实时计算、报警触发等功能融为一体,是一个能够降低用户部署成本、存储成本和运维成本的计算引擎。"
|
||||||
title: "连续查询(Continuous Query)"
|
title: 流式计算
|
||||||
---
|
---
|
||||||
|
|
||||||
连续查询是 TDengine 定期自动执行的查询,采用滑动窗口的方式进行计算,是一种简化的时间驱动的流式计算。针对库中的表或超级表,TDengine 可提供定期自动执行的连续查询,用户可让 TDengine 推送查询的结果,也可以将结果再写回到 TDengine 中。每次执行的查询是一个时间窗口,时间窗口随着时间流动向前滑动。在定义连续查询的时候需要指定时间窗口(time window, 参数 interval)大小和每次前向增量时间(forward sliding times, 参数 sliding)。
|
在时序数据的处理中,经常要对原始数据进行清洗、预处理,再使用时序数据库进行长久的储存。用户通常需要在时序数据库之外再搭建 Kafka、Flink、Spark 等流计算处理引擎,增加了用户的开发成本和维护成本。
|
||||||
|
使用 TDengine 3.0 的流式计算引擎能够最大限度地减少对这些额外中间件的依赖,真正将数据的写入、预处理、长期存储、复杂分析、实时计算、实时报警触发等功能融为一体,并且所有这些任务只需使用 SQL 即可完成,极大降低了用户的学习成本和使用成本。
|
||||||
|
|
||||||
TDengine 的连续查询采用时间驱动模式,可以直接使用 TAOS SQL 进行定义,不需要额外的操作。使用连续查询,可以方便快捷地按照时间窗口生成结果,从而对原始采集数据进行降采样(down sampling)。用户通过 TAOS SQL 定义连续查询以后,TDengine 自动在最后的一个完整的时间周期末端拉起查询,并将计算获得的结果推送给用户或者写回 TDengine。
|
## 流式计算的创建
|
||||||
|
|
||||||
TDengine 提供的连续查询与普通流计算中的时间窗口计算具有以下区别:
|
|
||||||
|
|
||||||
- 不同于流计算的实时反馈计算结果,连续查询只在时间窗口关闭以后才开始计算。例如时间周期是 1 天,那么当天的结果只会在 23:59:59 以后才会生成。
|
|
||||||
- 如果有历史记录写入到已经计算完成的时间区间,连续查询并不会重新进行计算,也不会重新将结果推送给用户。对于写回 TDengine 的模式,也不会更新已经存在的计算结果。
|
|
||||||
- 使用连续查询推送结果的模式,服务端并不缓存客户端计算状态,也不提供 Exactly-Once 的语义保证。如果用户的应用端崩溃,再次拉起的连续查询将只会从再次拉起的时间开始重新计算最近的一个完整的时间窗口。如果使用写回模式,TDengine 可确保数据写回的有效性和连续性。
|
|
||||||
|
|
||||||
## 连续查询语法
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
[CREATE TABLE AS] SELECT select_expr [, select_expr ...]
|
CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subquery
|
||||||
FROM {tb_name_list}
|
stream_options: {
|
||||||
[WHERE where_condition]
|
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
|
||||||
[INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
|
WATERMARK time
|
||||||
|
IGNORE EXPIRED
|
||||||
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
INTERVAL: 连续查询作用的时间窗口
|
详细的语法规则参考 [流式计算](../../taos-sql/stream)
|
||||||
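上述 stream_options 可以组合使用。下面给出一个同时指定触发模式与 watermark 的示意写法(其中 stream_demo、demo_output 等名称均为假设的示例名):

```sql
create stream if not exists stream_demo
  trigger window_close
  watermark 10s
  into stream_db.demo_output
  as select _wstart, avg(voltage) as avg_voltage
     from stream_db.meters
     interval (5m);
```

其中 WINDOW_CLOSE 表示窗口关闭时才计算并推送结果,WATERMARK 用于容忍一定时间范围内的乱序数据。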
|
|
||||||
SLIDING: 连续查询的时间窗口向前滑动的时间间隔
|
## 示例一
|
||||||
|
|
||||||
## 使用连续查询
|
企业电表的数据经常多达数百亿乃至上千亿条,要将这些分散、凌乱的数据清洗或转换往往需要较长时间,很难做到高效性和实时性。以下例子通过流计算将过去 12 小时电表电压大于 220V 的数据清洗掉,然后以小时为窗口整合并计算出每个窗口中电流的最大值,并将结果输出到指定的数据表中。
|
||||||
|
|
||||||
下面以智能电表场景为例介绍连续查询的具体使用方法。假设我们通过下列 SQL 语句创建了超级表和子表:
|
### 创建 DB 和原始数据表
|
||||||
|
|
||||||
|
首先准备数据,完成建库、建一张超级表和多张子表的操作。
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
|
drop database if exists stream_db;
|
||||||
create table D1001 using meters tags ("California.SanFrancisco", 2);
|
create database stream_db;
|
||||||
create table D1002 using meters tags ("California.LosAngeles", 2);
|
|
||||||
...
|
create stable stream_db.meters (ts timestamp, current float, voltage int) TAGS (location varchar(64), groupId int);
|
||||||
|
|
||||||
|
create table stream_db.d1001 using stream_db.meters tags("beijing", 1);
|
||||||
|
create table stream_db.d1002 using stream_db.meters tags("guangzhou", 2);
|
||||||
|
create table stream_db.d1003 using stream_db.meters tags("shanghai", 3);
|
||||||
```
|
```
|
||||||
|
|
||||||
可以通过下面这条 SQL 语句以一分钟为时间窗口、30 秒为前向增量统计这些电表的平均电压。
|
### 创建流
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
select avg(voltage) from meters interval(1m) sliding(30s);
|
create stream stream1 into stream_db.stream1_output_stb as select _wstart as start, _wend as end, max(current) as max_current from stream_db.meters where voltage <= 220 and ts > now - 12h interval (1h);
|
||||||
```
|
```
|
||||||
|
|
||||||
每次执行这条语句,都会重新计算所有数据。 如果需要每隔 30 秒执行一次来增量计算最近一分钟的数据,可以把上面的语句改进成下面的样子,每次使用不同的 `startTime` 并定期执行:
|
### 写入数据
|
||||||
|
```sql
|
||||||
|
insert into stream_db.d1001 values(now-14h, 10.3, 210);
|
||||||
|
insert into stream_db.d1001 values(now-13h, 13.5, 216);
|
||||||
|
insert into stream_db.d1001 values(now-12h, 12.5, 219);
|
||||||
|
insert into stream_db.d1002 values(now-11h, 14.7, 221);
|
||||||
|
insert into stream_db.d1002 values(now-10h, 10.5, 218);
|
||||||
|
insert into stream_db.d1002 values(now-9h, 11.2, 220);
|
||||||
|
insert into stream_db.d1003 values(now-8h, 11.5, 217);
|
||||||
|
insert into stream_db.d1003 values(now-7h, 12.3, 227);
|
||||||
|
insert into stream_db.d1003 values(now-6h, 12.3, 215);
|
||||||
|
```
|
||||||
|
|
||||||
|
### 查询以观察结果
|
||||||
|
```sql
|
||||||
|
taos> select * from stream_db.stream1_output_stb;
|
||||||
|
start | end | max_current | group_id |
|
||||||
|
===================================================================================================
|
||||||
|
2022-08-09 14:00:00.000 | 2022-08-09 15:00:00.000 | 10.50000 | 0 |
|
||||||
|
2022-08-09 15:00:00.000 | 2022-08-09 16:00:00.000 | 11.20000 | 0 |
|
||||||
|
2022-08-09 16:00:00.000 | 2022-08-09 17:00:00.000 | 11.50000 | 0 |
|
||||||
|
2022-08-09 18:00:00.000 | 2022-08-09 19:00:00.000 | 12.30000 | 0 |
|
||||||
|
Query OK, 4 rows in database (0.012033s)
|
||||||
|
```
|
||||||
|
|
||||||
|
## 示例二
|
||||||
|
某运营商平台要采集机房所有服务器的系统资源指标,包含 CPU、内存、网络延迟等,采集后需要对数据进行四舍五入运算,将地域和服务器名以下划线拼接,然后将结果按时间排序并以服务器名分组输出到新的数据表中。
|
||||||
|
|
||||||
|
### 创建 DB 和原始数据表
|
||||||
|
首先准备数据,完成建库、建一张超级表和多张子表的操作。
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
|
drop database if exists stream_db;
|
||||||
|
create database stream_db;
|
||||||
|
|
||||||
|
create stable stream_db.idc (ts timestamp, cpu float, mem float, latency float) TAGS (location varchar(64), groupId int);
|
||||||
|
|
||||||
|
create table stream_db.server01 using stream_db.idc tags("beijing", 1);
|
||||||
|
create table stream_db.server02 using stream_db.idc tags("shanghai", 2);
|
||||||
|
create table stream_db.server03 using stream_db.idc tags("beijing", 2);
|
||||||
|
create table stream_db.server04 using stream_db.idc tags("tianjin", 3);
|
||||||
|
create table stream_db.server05 using stream_db.idc tags("shanghai", 1);
|
||||||
```
|
```
|
||||||
|
|
||||||
这样做没有问题,但 TDengine 提供了更简单的方法,只要在最初的查询语句前面加上 `create table {tableName} as` 就可以了,例如:
|
### 创建流
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
|
create stream stream2 into stream_db.stream2_output_stb as select ts, concat_ws("_", location, tbname) as server_location, round(cpu) as cpu, round(mem) as mem, round(latency) as latency from stream_db.idc partition by tbname order by ts;
|
||||||
```
|
```
|
||||||
|
|
||||||
会自动创建一个名为 `avg_vol` 的新表,然后每隔 30 秒,TDengine 会增量执行 `as` 后面的 SQL 语句,并将查询结果写入这个表中,用户程序后续只要从 `avg_vol` 中查询数据即可。例如:
|
### 写入数据
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
taos> select * from avg_vol;
|
insert into stream_db.server01 values(now-14h, 50.9, 654.8, 23.11);
|
||||||
ts | avg_voltage_ |
|
insert into stream_db.server01 values(now-13h, 13.5, 221.2, 11.22);
|
||||||
===================================================
|
insert into stream_db.server02 values(now-12h, 154.7, 218.3, 22.33);
|
||||||
2020-07-29 13:37:30.000 | 222.0000000 |
|
insert into stream_db.server02 values(now-11h, 120.5, 111.5, 5.55);
|
||||||
2020-07-29 13:38:00.000 | 221.3500000 |
|
insert into stream_db.server03 values(now-10h, 101.5, 125.6, 5.99);
|
||||||
2020-07-29 13:38:30.000 | 220.1700000 |
|
insert into stream_db.server03 values(now-9h, 12.3, 165.6, 6.02);
|
||||||
2020-07-29 13:39:00.000 | 223.0800000 |
|
insert into stream_db.server04 values(now-8h, 160.9, 120.7, 43.51);
|
||||||
|
insert into stream_db.server04 values(now-7h, 240.9, 520.7, 54.55);
|
||||||
|
insert into stream_db.server05 values(now-6h, 190.9, 320.7, 55.43);
|
||||||
|
insert into stream_db.server05 values(now-5h, 110.9, 600.7, 35.54);
|
||||||
```
|
```
|
||||||
|
### 查询以观察结果
|
||||||
需要注意,查询时间窗口的最小值是 10 毫秒,没有时间窗口范围的上限。
|
|
||||||
|
|
||||||
此外,TDengine 还支持用户指定连续查询的起止时间。如果不输入开始时间,连续查询将从第一条原始数据所在的时间窗口开始;如果没有输入结束时间,连续查询将永久运行;如果用户指定了结束时间,连续查询在系统时间达到指定的时间以后停止运行。比如使用下面的 SQL 创建的连续查询将运行一小时,之后会自动停止。
|
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
|
taos> select ts, server_location, cpu, mem, latency from stream_db.stream2_output_stb;
|
||||||
|
ts | server_location | cpu | mem | latency |
|
||||||
|
================================================================================================================================
|
||||||
|
2022-08-09 21:24:56.785 | beijing_server01 | 51.00000 | 655.00000 | 23.00000 |
|
||||||
|
2022-08-09 22:24:56.795 | beijing_server01 | 14.00000 | 221.00000 | 11.00000 |
|
||||||
|
2022-08-09 23:24:56.806 | shanghai_server02 | 155.00000 | 218.00000 | 22.00000 |
|
||||||
|
2022-08-10 00:24:56.815 | shanghai_server02 | 121.00000 | 112.00000 | 6.00000 |
|
||||||
|
2022-08-10 01:24:56.826 | beijing_server03 | 102.00000 | 126.00000 | 6.00000 |
|
||||||
|
2022-08-10 02:24:56.838 | beijing_server03 | 12.00000 | 166.00000 | 6.00000 |
|
||||||
|
2022-08-10 03:24:56.846 | tianjin_server04 | 161.00000 | 121.00000 | 44.00000 |
|
||||||
|
2022-08-10 04:24:56.853 | tianjin_server04 | 241.00000 | 521.00000 | 55.00000 |
|
||||||
|
2022-08-10 05:24:56.866 | shanghai_server05 | 191.00000 | 321.00000 | 55.00000 |
|
||||||
|
2022-08-10 06:24:57.301 | shanghai_server05 | 111.00000 | 601.00000 | 36.00000 |
|
||||||
|
Query OK, 10 rows in database (0.022950s)
|
||||||
```
|
```
|
||||||
|
|
||||||
需要说明的是,上面例子中的 `now` 是指创建连续查询的时间,而不是查询执行的时间,否则,查询就无法自动停止了。另外,为了尽量避免原始数据延迟写入导致的问题,TDengine 中连续查询的计算有一定的延迟。也就是说,一个时间窗口过去后,TDengine 并不会立即计算这个窗口的数据,所以要稍等一会(一般不会超过 1 分钟)才能查到计算结果。
|
|
||||||
|
|
||||||
## 管理连续查询
|
|
||||||
|
|
||||||
用户可在控制台中通过 `show streams` 命令来查看系统中全部运行的连续查询,并可以通过 `kill stream` 命令杀掉对应的连续查询。后续版本会提供更细粒度和便捷的连续查询管理命令。
|
|
||||||
|
|
|
@ -112,9 +112,9 @@ alter_database_options:
|
||||||
alter_database_option: {
|
alter_database_option: {
|
||||||
CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
|
CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
|
||||||
| CACHESIZE value
|
| CACHESIZE value
|
||||||
| FSYNC value
|
| WAL_LEVEL value
|
||||||
|
| WAL_FSYNC_PERIOD value
|
||||||
| KEEP value
|
| KEEP value
|
||||||
| WAL value
|
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
|
@ -140,10 +140,6 @@ taos> SELECT ts, ts AS primary_key_ts FROM d1001;
|
||||||
|
|
||||||
但是针对`first(*)`、`last(*)`、`last_row(*)`不支持针对单列的重命名。
|
但是针对`first(*)`、`last(*)`、`last_row(*)`不支持针对单列的重命名。
|
||||||
|
|
||||||
### 隐式结果列
|
|
||||||
|
|
||||||
`Select_exprs`可以是表所属列的列名,也可以是基于列的函数表达式或计算式,数量的上限 256 个。当用户使用了`interval`或`group by tags`的子句以后,在最后返回结果中会强制返回时间戳列(第一列)和 group by 子句中的标签列。后续的版本中可以支持关闭 group by 子句中隐式列的输出,列输出完全由 select 子句控制。
|
|
||||||
|
|
||||||
### 伪列
|
### 伪列
|
||||||
|
|
||||||
**TBNAME**
|
**TBNAME**
|
||||||
|
@ -152,7 +148,13 @@ taos> SELECT ts, ts AS primary_key_ts FROM d1001;
|
||||||
获取一个超级表所有的子表名及相关的标签信息:
|
获取一个超级表所有的子表名及相关的标签信息:
|
||||||
|
|
||||||
```mysql
|
```mysql
|
||||||
SELECT TBNAME, location FROM meters;
|
SELECT DISTINCT TBNAME, location FROM meters;
|
||||||
|
```
|
||||||
|
|
||||||
|
建议用户使用 INFORMATION_SCHEMA 下的 INS_TAGS 系统表来查询超级表的子表标签信息,例如获取超级表 meters 所有的子表名和标签值:
|
||||||
|
|
||||||
|
```mysql
|
||||||
|
SELECT table_name, tag_name, tag_type, tag_value FROM information_schema.ins_tags WHERE stable_name='meters';
|
||||||
```
|
```
|
||||||
|
|
||||||
统计超级表下辖子表数量:
|
统计超级表下辖子表数量:
|
||||||
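上述统计可以借助 TBNAME 伪列实现,示意写法如下:

```sql
SELECT COUNT(*) FROM (SELECT DISTINCT TBNAME FROM meters);
```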
|
|
|
@ -1,6 +1,6 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 元数据库
|
sidebar_label: 元数据
|
||||||
title: 元数据库
|
title: 存储元数据的 Information_Schema 数据库
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数据库元数据、数据库系统信息和状态的访问,例如数据库或表的名称,当前执行的 SQL 语句等。该数据库存储有关 TDengine 维护的所有其他数据库的信息。它包含多个只读表。实际上,这些表都是视图,而不是基表,因此没有与它们关联的文件。所以对这些表只能查询,不能进行 INSERT 等写入操作。`INFORMATION_SCHEMA` 数据库旨在以一种更一致的方式来提供对 TDengine 支持的各种 SHOW 语句(如 SHOW TABLES、SHOW DATABASES)所提供的信息的访问。与 SHOW 语句相比,使用 SELECT ... FROM INFORMATION_SCHEMA.tablename 具有以下优点:
|
TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数据库元数据、数据库系统信息和状态的访问,例如数据库或表的名称,当前执行的 SQL 语句等。该数据库存储有关 TDengine 维护的所有其他数据库的信息。它包含多个只读表。实际上,这些表都是视图,而不是基表,因此没有与它们关联的文件。所以对这些表只能查询,不能进行 INSERT 等写入操作。`INFORMATION_SCHEMA` 数据库旨在以一种更一致的方式来提供对 TDengine 支持的各种 SHOW 语句(如 SHOW TABLES、SHOW DATABASES)所提供的信息的访问。与 SHOW 语句相比,使用 SELECT ... FROM INFORMATION_SCHEMA.tablename 具有以下优点:
|
||||||
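例如,可以用 SELECT 语句代替 SHOW DATABASES,并按需筛选列(示意写法,列名来自上文 REST 接口示例返回的 schema):

```sql
SELECT name, ntables, status FROM information_schema.ins_databases;
```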
|
|
|
@ -1,9 +1,9 @@
|
||||||
---
|
---
|
||||||
sidebar_label: 性能数据库
|
sidebar_label: 统计数据
|
||||||
title: 性能数据库
|
title: 存储统计数据的 Performance_Schema 数据库
|
||||||
---
|
---
|
||||||
|
|
||||||
TDengine 3.0 版本开始提供一个内置数据库 `performance_schema`,其中存储了与性能有关的统计数据。本节详细介绍其中的表和详细的表结构。
|
TDengine 3.0 版本开始提供一个内置数据库 `performance_schema`,其中存储了与性能有关的统计数据。本节详细介绍其中的表和表结构。
|
||||||
|
|
||||||
## PERF_APP
|
## PERF_APP
|
||||||
|
|
||||||
|
@ -94,16 +94,16 @@ TDengine 3.0 版本开始提供一个内置数据库 `performance_schema`,其
|
||||||
|
|
||||||
## PERF_TRANS
|
## PERF_TRANS
|
||||||
|
|
||||||
| # | **列名** | **数据类型** | **说明** |
|
| # | **列名** | **数据类型** | **说明** |
|
||||||
| --- | :--------------: | ------------ | -------- |
|
| --- | :--------------: | ------------ | -------------------------------------------------------------- |
|
||||||
| 1 | id | INT | |
|
| 1 | id | INT | 正在进行的事务的编号 |
|
||||||
| 2 | create_time | TIMESTAMP | |
|
| 2 | create_time | TIMESTAMP | 事务的创建时间 |
|
||||||
| 3 | stage | BINARY(12) | |
|
| 3 | stage | BINARY(12) | 事务的当前阶段,通常为 redoAction、undoAction、commit 三个阶段 |
|
||||||
| 4 | db1 | BINARY(64) | |
|
| 4 | db1 | BINARY(64) | 与此事务存在冲突的数据库一的名称 |
|
||||||
| 5 | db2 | BINARY(64) | |
|
| 5 | db2 | BINARY(64) | 与此事务存在冲突的数据库二的名称 |
|
||||||
| 6 | failed_times | INT | |
|
| 6 | failed_times | INT | 事务执行失败的总次数 |
|
||||||
| 7 | last_exec_time | TIMESTAMP | |
|
| 7 | last_exec_time | TIMESTAMP | 事务上次执行的时间 |
|
||||||
| 8 | last_action_info | BINARY(511) | |
|
| 8 | last_action_info | BINARY(511) | 事务上次执行失败的明细信息 |
|
||||||
|
|
||||||
## PERF_SMAS
|
## PERF_SMAS
|
||||||
|
|
||||||
|
|
|
@ -17,7 +17,6 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速
|
||||||
| **X86 64bit** | **Win32** | ● | ● | ● | ● | ○ | ○ | ● |
|
| **X86 64bit** | **Win32** | ● | ● | ● | ● | ○ | ○ | ● |
|
||||||
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
|
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
|
||||||
| **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
|
| **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
|
||||||
| **ARM32** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
|
|
||||||
| **MIPS 龙芯** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
|
| **MIPS 龙芯** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
|
||||||
| **Alpha 申威** | **Linux** | ○ | ○ | -- | -- | -- | -- | ○ |
|
| **Alpha 申威** | **Linux** | ○ | ○ | -- | -- | -- | -- | ○ |
|
||||||
| **X86 海光** | **Linux** | ○ | ○ | ○ | -- | -- | -- | ○ |
|
| **X86 海光** | **Linux** | ○ | ○ | ○ | -- | -- | -- | ○ |
|
||||||
|
@ -49,7 +48,6 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器
|
||||||
| -------------- | -------- | ---------- | ------ | ------ | ----------- | -------- |
|
| -------------- | -------- | ---------- | ------ | ------ | ----------- | -------- |
|
||||||
| **连接管理** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
| **连接管理** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
||||||
| **普通查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
| **普通查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
||||||
| **连续查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
|
||||||
| **参数绑定** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
| **参数绑定** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
||||||
| ** TMQ ** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
| ** TMQ ** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
||||||
| **Schemaless** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
| **Schemaless** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 |
|
||||||
|
|
|
@ -48,29 +48,30 @@ taos> SET MAX_BINARY_DISPLAY_WIDTH <nn>;
|
||||||
|
|
||||||
您可通过配置命令行参数来改变 TDengine CLI 的行为。以下为常用的几个命令行参数:
|
您可通过配置命令行参数来改变 TDengine CLI 的行为。以下为常用的几个命令行参数:
|
||||||
|
|
||||||
- -h, --host=HOST: 要连接的 TDengine 服务端所在服务器的 FQDN, 默认为连接本地服务
|
- -h HOST: 要连接的 TDengine 服务端所在服务器的 FQDN, 默认为连接本地服务
|
||||||
- -P, --port=PORT: 指定服务端所用端口号
|
- -P PORT: 指定服务端所用端口号
|
||||||
- -u, --user=USER: 连接时使用的用户名
|
- -u USER: 连接时使用的用户名
|
||||||
- -p, --password=PASSWORD: 连接服务端时使用的密码
|
- -p PASSWORD: 连接服务端时使用的密码
|
||||||
- -?, --help: 打印出所有命令行参数
|
- -?, --help: 打印出所有命令行参数
|
||||||
|
|
||||||
还有更多其他参数:
|
还有更多其他参数:
|
||||||
|
|
||||||
- -c, --config-dir: 指定配置文件目录,Linux 环境下默认为 `/etc/taos`,该目录下的配置文件默认名称为 `taos.cfg`
|
- -a AUTHSTR: 连接服务端的授权信息
|
||||||
- -C, --dump-config: 打印 -c 指定的目录中 `taos.cfg` 的配置参数
|
- -A: 通过用户名和密码计算授权信息
|
||||||
- -d, --database=DATABASE: 指定连接到服务端时使用的数据库
|
- -c CONFIGDIR: 指定配置文件目录,Linux 环境下默认为 `/etc/taos`,该目录下的配置文件默认名称为 `taos.cfg`
|
||||||
- -D, --directory=DIRECTORY: 导入指定路径中的 SQL 脚本文件
|
- -C: 打印 -c 指定的目录中 `taos.cfg` 的配置参数
|
||||||
- -f, --file=FILE: 以非交互模式执行 SQL 脚本文件。文件中一个 SQL 语句只能占一行
|
- -d DATABASE: 指定连接到服务端时使用的数据库
|
||||||
- -k, --check=CHECK: 指定要检查的表
|
- -f FILE: 以非交互模式执行 SQL 脚本文件。文件中一个 SQL 语句只能占一行
|
||||||
- -l, --pktlen=PKTLEN: 网络测试时使用的测试包大小
|
- -k: 测试服务端运行状态,0: unavailable,1: network ok,2: service ok,3: service degraded,4: exiting
|
||||||
- -n, --netrole=NETROLE: 网络连接测试时的测试范围,默认为 `startup`, 可选值为 `client`、`server`、`rpc`、`startup`、`sync`、`speed` 和 `fqdn` 之一
|
- -l PKTLEN: 网络测试时使用的测试包大小
|
||||||
- -r, --raw-time: 将时间输出为无符号 64 位整数类型(即 C 语言中的 uint64_t)
|
- -n NETROLE: 网络连接测试时的测试范围,默认为 `client`, 可选值为 `client`、`server`
|
||||||
- -s, --commands=COMMAND: 以非交互模式执行的 SQL 命令
|
- -N PKTNUM: 网络测试时使用的测试包数量
|
||||||
- -S, --pkttype=PKTTYPE: 指定网络测试所用的包类型,默认为 TCP。只有 netrole 为 `speed` 时既可以指定为 TCP 也可以指定为 UDP
|
- -r: 将时间输出为无符号 64 位整数类型(即 C 语言中的 uint64_t)
|
||||||
- -T, --thread=THREADNUM: 以多线程模式导入数据时的线程数
|
- -s COMMAND: 以非交互模式执行的 SQL 命令
|
||||||
- -s, --commands: 在不进入终端的情况下运行 TDengine 命令
|
- -t: 测试服务端启动状态,状态同-k
|
||||||
- -z, --timezone=TIMEZONE: 指定时区,默认为本地时区
|
- -w DISPLAYWIDTH: 客户端列显示宽度
|
||||||
- -V, --version: 打印出当前版本号
|
- -z TIMEZONE: 指定时区,默认为本地时区
|
||||||
|
- -V: 打印出当前版本号
|
||||||
|
|
||||||
示例:
|
示例:
|
||||||
|
|
||||||
|
|
|
@ -5,18 +5,11 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
|
||||||
|
|
||||||
## TDengine 服务端支持的平台列表
|
## TDengine 服务端支持的平台列表
|
||||||
|
|
||||||
| | **CentOS 7/8** | **Ubuntu 16/18/20** | **Other Linux** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **华为 EulerOS** |
|
| | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **Other Linux** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **华为 EulerOS** |
|
||||||
| ------------ | -------------- | ------------------- | --------------- | ------------ | ----------------- | ---------------- | ---------------- |
|
| ------------ | ----------------- | ---------------- | ---------------- | --------------- | ------------ | ----------------- | ---------------- | ---------------- |
|
||||||
| X64 | ● | ● | | ○ | ● | ● | ● |
|
| X64 | ● | ● | ● | | ● | ● | ● | |
|
||||||
| 龙芯 MIPS64 | | | ● | | | | |
|
| 树莓派 ARM64 | | | | ● | | | | |
|
||||||
| 鲲鹏 ARM64 | | ○ | ○ | | ● | | |
|
| 华为云 ARM64 | | | | | | | | ● |
|
||||||
| 申威 Alpha64 | | | ○ | ● | | | |
|
|
||||||
| 飞腾 ARM64 | | ○ 优麒麟 | | | | | |
|
|
||||||
| 海光 X64 | ● | ● | ● | ○ | ● | ● | |
|
|
||||||
| 瑞芯微 ARM64 | | | ○ | | | | |
|
|
||||||
| 全志 ARM64 | | | ○ | | | | |
|
|
||||||
| 炬力 ARM64 | | | ○ | | | | |
|
|
||||||
| 华为云 ARM64 | | | | | | | ● |
|
|
||||||
|
|
||||||
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
|
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
|
||||||
|
|
||||||
|
@ -26,15 +19,15 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
|
||||||
|
|
||||||
对照矩阵如下:
|
对照矩阵如下:
|
||||||
|
|
||||||
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **ARM32** | **MIPS 龙芯** | **Alpha 申威** | **X64 海光** |
|
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **MIPS 龙芯** | **Alpha 申威** | **X64 海光** |
|
||||||
| ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | ------------- | -------------- | ------------ |
|
| ----------- | ------------- | --------- | --------- | ------------- | --------- | ------------- | -------------- | ------------ |
|
||||||
| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** | **Linux** | **Linux** |
|
| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** | **Linux** |
|
||||||
| **C/C++** | ● | ● | ● | ○ | ● | ● | ● | ● | ● |
|
| **C/C++** | ● | ● | ● | ○ | ● | ● | ● | ● |
|
||||||
| **JDBC** | ● | ● | ● | ○ | ● | ● | ● | ● | ● |
|
| **JDBC** | ● | ● | ● | ○ | ● | ● | ● | ● |
|
||||||
| **Python** | ● | ● | ● | ○ | ● | ● | ● | -- | ● |
|
| **Python** | ● | ● | ● | ○ | ● | ● | -- | ● |
|
||||||
| **Go** | ● | ● | ● | ○ | ● | ● | ○ | -- | -- |
|
| **Go** | ● | ● | ● | ○ | ● | ○ | -- | -- |
|
||||||
| **NodeJs** | ● | ● | ○ | ○ | ● | ● | ○ | -- | -- |
|
| **NodeJs** | ● | ● | ○ | ○ | ● | ○ | -- | -- |
|
||||||
| **C#** | ● | ● | ○ | ○ | ○ | ○ | ○ | -- | -- |
|
| **C#** | ● | ● | ○ | ○ | ○ | ○ | -- | -- |
|
||||||
| **RESTful** | ● | ● | ● | ● | ● | ● | ● | ● | ● |
|
| **RESTful** | ● | ● | ● | ● | ● | ● | ● | ● |
|
||||||
|
|
||||||
注:● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。
|
注:● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。
|
||||||
|
|
|
@ -25,7 +25,6 @@ TDengine 的所有可执行文件默认存放在 _/usr/local/taos/bin_ 目录下
|
||||||
- _taosBenchmark_:TDengine 测试工具
|
- _taosBenchmark_:TDengine 测试工具
|
||||||
- _remove.sh_:卸载 TDengine 的脚本,请谨慎执行,链接到/usr/bin 目录下的**rmtaos**命令。会删除 TDengine 的安装目录/usr/local/taos,但会保留/etc/taos、/var/lib/taos、/var/log/taos
|
- _remove.sh_:卸载 TDengine 的脚本,请谨慎执行,链接到/usr/bin 目录下的**rmtaos**命令。会删除 TDengine 的安装目录/usr/local/taos,但会保留/etc/taos、/var/lib/taos、/var/log/taos
|
||||||
- _taosadapter_: 提供 RESTful 服务和接受其他多种软件写入请求的服务端可执行文件
|
- _taosadapter_: 提供 RESTful 服务和接受其他多种软件写入请求的服务端可执行文件
|
||||||
- _tarbitrator_: 提供双节点集群部署的仲裁功能
|
|
||||||
- _TDinsight.sh_:用于下载 TDinsight 并安装的脚本
|
- _TDinsight.sh_:用于下载 TDinsight 并安装的脚本
|
||||||
- _set_core.sh_:用于方便调试设置系统生成 core dump 文件的脚本
|
- _set_core.sh_:用于方便调试设置系统生成 core dump 文件的脚本
|
||||||
- _taosd-dump-cfg.gdb_:用于方便调试 taosd 的 gdb 执行脚本。
|
- _taosd-dump-cfg.gdb_:用于方便调试 taosd 的 gdb 执行脚本。
|
||||||
|
|
Binary file not shown.
Before Width: | Height: | Size: 8.8 KiB After Width: | Height: | Size: 37 KiB |
|
@ -103,12 +103,12 @@ typedef struct SDataBlockInfo {
|
||||||
int16_t hasVarCol;
|
int16_t hasVarCol;
|
||||||
uint32_t capacity;
|
uint32_t capacity;
|
||||||
// TODO: optimize and remove following
|
// TODO: optimize and remove following
|
||||||
int64_t version; // used for stream, and need serialization
|
int64_t version; // used for stream, and need serialization
|
||||||
int64_t ts; // used for stream, and need serialization
|
int64_t ts; // used for stream, and need serialization
|
||||||
int32_t childId; // used for stream, do not serialize
|
int32_t childId; // used for stream, do not serialize
|
||||||
EStreamType type; // used for stream, do not serialize
|
EStreamType type; // used for stream, do not serialize
|
||||||
STimeWindow calWin; // used for stream, do not serialize
|
STimeWindow calWin; // used for stream, do not serialize
|
||||||
TSKEY watermark;// used for stream
|
TSKEY watermark; // used for stream
|
||||||
} SDataBlockInfo;
|
} SDataBlockInfo;
|
||||||
|
|
||||||
typedef struct SSDataBlock {
|
typedef struct SSDataBlock {
|
||||||
|
@ -268,6 +268,15 @@ typedef struct SSortExecInfo {
|
||||||
int32_t readBytes; // read io bytes
|
int32_t readBytes; // read io bytes
|
||||||
} SSortExecInfo;
|
} SSortExecInfo;
|
||||||
|
|
||||||
|
// stream special block column
|
||||||
|
|
||||||
|
#define START_TS_COLUMN_INDEX 0
|
||||||
|
#define END_TS_COLUMN_INDEX 1
|
||||||
|
#define UID_COLUMN_INDEX 2
|
||||||
|
#define GROUPID_COLUMN_INDEX 3
|
||||||
|
#define CALCULATE_START_TS_COLUMN_INDEX 4
|
||||||
|
#define CALCULATE_END_TS_COLUMN_INDEX 5
|
||||||
|
|
||||||
#ifdef __cplusplus
|
#ifdef __cplusplus
|
||||||
}
|
}
|
||||||
#endif
|
#endif
|
||||||
|
|
|
@@ -200,8 +200,6 @@ struct STag {
 #if 1 //================================================================================================================================================
 // Imported since 3.0 and use bitmap to demonstrate None/Null/Norm, while use Null/Norm below 3.0 without of bitmap.
 #define TD_SUPPORT_BITMAP
-#define TD_SUPPORT_READ2
-#define TD_SUPPORT_BACK2 // suppport back compatibility of 2.0
 
 #define TASSERT(x) ASSERT(x)
 
@@ -319,17 +319,13 @@ typedef struct {
   col_id_t kvIdx; // [0, nKvCols)
 } STSRowIter;
 
-void tdSTSRowIterReset(STSRowIter *pIter, STSRow *pRow);
-void tdSTSRowIterInit(STSRowIter *pIter, STSchema *pSchema);
+void tdSTSRowIterInit(STSRowIter *pIter, STSchema *pSchema);
+void tdSTSRowIterReset(STSRowIter *pIter, STSRow *pRow);
+bool tdSTSRowIterFetch(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCellVal *pVal);
+bool tdSTSRowIterNext(STSRowIter *pIter, SCellVal *pVal);
 
 int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow);
 bool tdSTSRowGetVal(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCellVal *pVal);
-bool tdGetTpRowDataOfCol(STSRowIter *pIter, col_type_t colType, int32_t offset, SCellVal *pVal);
-bool tdGetKvRowValOfColEx(STSRowIter *pIter, col_id_t colId, col_type_t colType, col_id_t *nIdx, SCellVal *pVal);
-bool tdSTSRowIterNext(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCellVal *pVal);
-bool tdSTpRowGetVal(STSRow *pRow, col_id_t colId, col_type_t colType, int32_t flen, uint32_t offset, col_id_t colIdx,
-                    SCellVal *pVal);
-bool tdSKvRowGetVal(STSRow *pRow, col_id_t colId, col_id_t colIdx, SCellVal *pVal);
-void tdSCellValPrint(SCellVal *pVal, int8_t colType);
 void tdSRowPrint(STSRow *row, STSchema *pSchema, const char *tag);
 
 #ifdef __cplusplus
@@ -199,6 +199,7 @@ bool fmIsUserDefinedFunc(int32_t funcId);
 bool fmIsDistExecFunc(int32_t funcId);
 bool fmIsForbidFillFunc(int32_t funcId);
 bool fmIsForbidStreamFunc(int32_t funcId);
+bool fmIsForbidSuperTableFunc(int32_t funcId);
 bool fmIsIntervalInterpoFunc(int32_t funcId);
 bool fmIsInterpFunc(int32_t funcId);
 bool fmIsLastRowFunc(int32_t funcId);
@@ -34,6 +34,8 @@ typedef struct SStreamTask SStreamTask;
 
 enum {
   STREAM_STATUS__NORMAL = 0,
+  STREAM_STATUS__STOP,
+  STREAM_STATUS__FAILED,
   STREAM_STATUS__RECOVER,
 };
 
@@ -194,6 +194,9 @@ function install_bin() {
   ${csudo}rm -f ${bin_link_dir}/${serverName} || :
   ${csudo}rm -f ${bin_link_dir}/${adapterName} || :
   ${csudo}rm -f ${bin_link_dir}/${uninstallScript} || :
+  ${csudo}rm -f ${bin_link_dir}/${demoName} || :
+  ${csudo}rm -f ${bin_link_dir}/${benchmarkName} || :
+  ${csudo}rm -f ${bin_link_dir}/${dumpName} || :
   ${csudo}rm -f ${bin_link_dir}/set_core || :
   ${csudo}rm -f ${bin_link_dir}/TDinsight.sh || :
 
@@ -205,7 +208,6 @@ function install_bin() {
   [ -x ${install_main_dir}/bin/${adapterName} ] && ${csudo}ln -s ${install_main_dir}/bin/${adapterName} ${bin_link_dir}/${adapterName} || :
   [ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${demoName} || :
   [ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${benchmarkName} || :
-  [ -x ${install_main_dir}/bin/${tmqName} ] && ${csudo}ln -s ${install_main_dir}/bin/${tmqName} ${bin_link_dir}/${tmqName} || :
   [ -x ${install_main_dir}/bin/${dumpName} ] && ${csudo}ln -s ${install_main_dir}/bin/${dumpName} ${bin_link_dir}/${dumpName} || :
   [ -x ${install_main_dir}/bin/TDinsight.sh ] && ${csudo}ln -s ${install_main_dir}/bin/TDinsight.sh ${bin_link_dir}/TDinsight.sh || :
   [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || :
@@ -964,12 +966,17 @@ function installProduct() {
 ## ==============================Main program starts from here============================
 serverFqdn=$(hostname)
 if [ "$verType" == "server" ]; then
-  # Install server and client
-  if [ -x ${bin_dir}/${serverName} ]; then
-    update_flag=1
-    updateProduct
+  # Check default 2.x data file.
+  if [ -x ${data_dir}/dnode/dnodeCfg.json ]; then
+    echo -e "\033[44;31;5mThe default data directory ${data_dir} contains old data of tdengine 2.x, please clear it before installing!\033[0m"
   else
-    installProduct
+    # Install server and client
+    if [ -x ${bin_dir}/${serverName} ]; then
+      update_flag=1
+      updateProduct
+    else
+      installProduct
+    fi
   fi
 elif [ "$verType" == "client" ]; then
   interactiveFqdn=no
@@ -135,12 +135,12 @@ static const SSysDbTableSchema streamSchema[] = {
     {.name = "stream_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
     {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP},
     {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
-    {.name = "status", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY},
+    {.name = "status", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
     {.name = "source_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
     {.name = "target_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
     {.name = "target_table", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
     {.name = "watermark", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT},
-    {.name = "trigger", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
+    {.name = "trigger", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
 };
 
 static const SSysDbTableSchema userTblsSchema[] = {
@@ -405,9 +405,8 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
   tsNumOfVnodeWriteThreads = TMAX(tsNumOfVnodeWriteThreads, 1);
   if (cfgAddInt32(pCfg, "numOfVnodeWriteThreads", tsNumOfVnodeWriteThreads, 1, 1024, 0) != 0) return -1;
 
-  // tsNumOfVnodeSyncThreads = tsNumOfCores;
-  tsNumOfVnodeSyncThreads = 32;
-  tsNumOfVnodeSyncThreads = TMAX(tsNumOfVnodeSyncThreads, 1);
+  tsNumOfVnodeSyncThreads = tsNumOfCores * 2;
+  tsNumOfVnodeSyncThreads = TMAX(tsNumOfVnodeSyncThreads, 16);
   if (cfgAddInt32(pCfg, "numOfVnodeSyncThreads", tsNumOfVnodeSyncThreads, 1, 1024, 0) != 0) return -1;
 
   tsNumOfQnodeQueryThreads = tsNumOfCores * 2;
@@ -32,9 +32,13 @@ const uint8_t tdVTypeByte[2][3] = {{
 };
 
 // declaration
 static uint8_t tdGetBitmapByte(uint8_t byte);
-static int32_t tdCompareColId(const void *arg1, const void *arg2);
-static FORCE_INLINE int32_t compareKvRowColId(const void *key1, const void *key2);
+static bool tdSTSRowIterGetTpVal(STSRowIter *pIter, col_type_t colType, int32_t offset, SCellVal *pVal);
+static bool tdSTSRowIterGetKvVal(STSRowIter *pIter, col_id_t colId, col_id_t *nIdx, SCellVal *pVal);
+static bool tdSTpRowGetVal(STSRow *pRow, col_id_t colId, col_type_t colType, int32_t flen, uint32_t offset,
+                           col_id_t colIdx, SCellVal *pVal);
+static bool tdSKvRowGetVal(STSRow *pRow, col_id_t colId, col_id_t colIdx, SCellVal *pVal);
+static void tdSCellValPrint(SCellVal *pVal, int8_t colType);
 
 // implementation
 /**
@@ -330,14 +334,14 @@ void tdSRowPrint(STSRow *row, STSchema *pSchema, const char *tag) {
   tdSTSRowIterInit(&iter, pSchema);
   tdSTSRowIterReset(&iter, row);
   printf("%s >>>type:%d,sver:%d ", tag, (int32_t)TD_ROW_TYPE(row), (int32_t)TD_ROW_SVER(row));
-  for (int i = 0; i < pSchema->numOfCols; ++i) {
-    STColumn *stCol = pSchema->columns + i;
-    SCellVal sVal = {255, NULL};
-    if (!tdSTSRowIterNext(&iter, stCol->colId, stCol->type, &sVal)) {
+  STColumn *cols = (STColumn *)&iter.pSchema->columns;
+  while (true) {
+    SCellVal sVal = {.valType = 255, NULL};
+    if (!tdSTSRowIterNext(&iter, &sVal)) {
       break;
     }
     ASSERT(sVal.valType == 0 || sVal.valType == 1 || sVal.valType == 2);
-    tdSCellValPrint(&sVal, stCol->type);
+    tdSCellValPrint(&sVal, cols[iter.colIdx - 1].type);
   }
   printf("\n");
 }
@@ -420,6 +424,16 @@ void tdSCellValPrint(SCellVal *pVal, int8_t colType) {
   }
 }
 
+static FORCE_INLINE int32_t compareKvRowColId(const void *key1, const void *key2) {
+  if (*(col_id_t *)key1 > ((SKvRowIdx *)key2)->colId) {
+    return 1;
+  } else if (*(col_id_t *)key1 < ((SKvRowIdx *)key2)->colId) {
+    return -1;
+  } else {
+    return 0;
+  }
+}
+
 bool tdSKvRowGetVal(STSRow *pRow, col_id_t colId, col_id_t colIdx, SCellVal *pVal) {
   if (colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
     tdRowSetVal(pVal, TD_VTYPE_NORM, TD_ROW_KEY_ADDR(pRow));
@@ -456,7 +470,7 @@ bool tdSTpRowGetVal(STSRow *pRow, col_id_t colId, col_type_t colType, int32_t fl
   return true;
 }
 
-bool tdSTSRowIterNext(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCellVal *pVal) {
+bool tdSTSRowIterFetch(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCellVal *pVal) {
   if (colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
     pVal->val = &pIter->pRow->ts;
     pVal->valType = TD_VTYPE_NORM;
@@ -477,10 +491,10 @@ bool tdSTSRowIterNext(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCe
         return false;
       }
     }
-    tdGetTpRowDataOfCol(pIter, pCol->type, pCol->offset - sizeof(TSKEY), pVal);
+    tdSTSRowIterGetTpVal(pIter, pCol->type, pCol->offset - sizeof(TSKEY), pVal);
    ++pIter->colIdx;
   } else if (TD_IS_KV_ROW(pIter->pRow)) {
-    return tdGetKvRowValOfColEx(pIter, colId, colType, &pIter->kvIdx, pVal);
+    return tdSTSRowIterGetKvVal(pIter, colId, &pIter->kvIdx, pVal);
   } else {
     pVal->valType = TD_VTYPE_NONE;
     terrno = TSDB_CODE_INVALID_PARA;
@@ -489,13 +503,68 @@ bool tdSTSRowIterNext(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCe
   return true;
 }
 
-bool tdGetKvRowValOfColEx(STSRowIter *pIter, col_id_t colId, col_type_t colType, col_id_t *nIdx, SCellVal *pVal) {
+bool tdSTSRowIterNext(STSRowIter *pIter, SCellVal *pVal) {
+  if (pIter->colIdx >= pIter->pSchema->numOfCols) {
+    return false;
+  }
+
+  STColumn *pCol = &pIter->pSchema->columns[pIter->colIdx];
+
+  if (pCol->colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
+    pVal->val = &pIter->pRow->ts;
+    pVal->valType = TD_VTYPE_NORM;
+    ++pIter->colIdx;
+    return true;
+  }
+
+  if (TD_IS_TP_ROW(pIter->pRow)) {
+    tdSTSRowIterGetTpVal(pIter, pCol->type, pCol->offset - sizeof(TSKEY), pVal);
+  } else if (TD_IS_KV_ROW(pIter->pRow)) {
+    tdSTSRowIterGetKvVal(pIter, pCol->colId, &pIter->kvIdx, pVal);
+  } else {
+    ASSERT(0);
+  }
+  ++pIter->colIdx;
+
+  return true;
+}
+
+bool tdSTSRowIterGetTpVal(STSRowIter *pIter, col_type_t colType, int32_t offset, SCellVal *pVal) {
+  STSRow *pRow = pIter->pRow;
+  if (pRow->statis == 0) {
+    pVal->valType = TD_VTYPE_NORM;
+    if (IS_VAR_DATA_TYPE(colType)) {
+      pVal->val = POINTER_SHIFT(pRow, *(VarDataOffsetT *)POINTER_SHIFT(TD_ROW_DATA(pRow), offset));
+    } else {
+      pVal->val = POINTER_SHIFT(TD_ROW_DATA(pRow), offset);
+    }
+    return TSDB_CODE_SUCCESS;
+  }
+
+  if (tdGetBitmapValType(pIter->pBitmap, pIter->colIdx - 1, &pVal->valType, 0) != TSDB_CODE_SUCCESS) {
+    pVal->valType = TD_VTYPE_NONE;
+    return terrno;
+  }
+
+  if (pVal->valType == TD_VTYPE_NORM) {
+    if (IS_VAR_DATA_TYPE(colType)) {
+      pVal->val = POINTER_SHIFT(pRow, *(VarDataOffsetT *)POINTER_SHIFT(TD_ROW_DATA(pRow), offset));
+    } else {
+      pVal->val = POINTER_SHIFT(TD_ROW_DATA(pRow), offset);
+    }
+  }
+
+  return true;
+}
+
+bool tdSTSRowIterGetKvVal(STSRowIter *pIter, col_id_t colId, col_id_t *nIdx, SCellVal *pVal) {
   STSRow *pRow = pIter->pRow;
   SKvRowIdx *pKvIdx = NULL;
   bool colFound = false;
   col_id_t kvNCols = tdRowGetNCols(pRow) - 1;
+  void *pColIdx = TD_ROW_COL_IDX(pRow);
   while (*nIdx < kvNCols) {
-    pKvIdx = (SKvRowIdx *)POINTER_SHIFT(TD_ROW_COL_IDX(pRow), *nIdx * sizeof(SKvRowIdx));
+    pKvIdx = (SKvRowIdx *)POINTER_SHIFT(pColIdx, *nIdx * sizeof(SKvRowIdx));
     if (pKvIdx->colId == colId) {
       ++(*nIdx);
       pVal->val = POINTER_SHIFT(pRow, pKvIdx->offset);
@@ -518,48 +587,13 @@ bool tdGetKvRowValOfColEx(STSRowIter *pIter, col_id_t colId, col_type_t colType,
     }
   }
 
-#ifdef TD_SUPPORT_BITMAP
-  int16_t colIdx = -1;
-  if (pKvIdx) colIdx = POINTER_DISTANCE(pKvIdx, TD_ROW_COL_IDX(pRow)) / sizeof(SKvRowIdx);
-  if (tdGetBitmapValType(pIter->pBitmap, colIdx, &pVal->valType, 0) != TSDB_CODE_SUCCESS) {
+  if (tdGetBitmapValType(pIter->pBitmap, pIter->kvIdx - 1, &pVal->valType, 0) != TSDB_CODE_SUCCESS) {
     pVal->valType = TD_VTYPE_NONE;
   }
-#else
-  pVal->valType = isNull(pVal->val, colType) ? TD_VTYPE_NULL : TD_VTYPE_NORM;
-#endif
 
   return true;
 }
 
-bool tdGetTpRowDataOfCol(STSRowIter *pIter, col_type_t colType, int32_t offset, SCellVal *pVal) {
-  STSRow *pRow = pIter->pRow;
-  if (IS_VAR_DATA_TYPE(colType)) {
-    pVal->val = POINTER_SHIFT(pRow, *(VarDataOffsetT *)POINTER_SHIFT(TD_ROW_DATA(pRow), offset));
-  } else {
-    pVal->val = POINTER_SHIFT(TD_ROW_DATA(pRow), offset);
-  }
-
-#ifdef TD_SUPPORT_BITMAP
-  if (tdGetBitmapValType(pIter->pBitmap, pIter->colIdx - 1, &pVal->valType, 0) != TSDB_CODE_SUCCESS) {
-    pVal->valType = TD_VTYPE_NONE;
-  }
-#else
-  pVal->valType = isNull(pVal->val, colType) ? TD_VTYPE_NULL : TD_VTYPE_NORM;
-#endif
-
-  return true;
-}
-
-static FORCE_INLINE int32_t compareKvRowColId(const void *key1, const void *key2) {
-  if (*(col_id_t *)key1 > ((SKvRowIdx *)key2)->colId) {
-    return 1;
-  } else if (*(col_id_t *)key1 < ((SKvRowIdx *)key2)->colId) {
-    return -1;
-  } else {
-    return 0;
-  }
-}
-
 int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
   STColumn *pTColumn;
   SColVal *pColVal;
@@ -625,7 +659,7 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
   if (maxVarDataLen > 0) {
     varBuf = taosMemoryMalloc(maxVarDataLen);
     if (!varBuf) {
-      if(isAlloc) {
+      if (isAlloc) {
        taosMemoryFreeClear(*ppRow);
      }
      terrno = TSDB_CODE_OUT_OF_MEMORY;
@@ -673,6 +707,19 @@ int32_t tdSTSRowNew(SArray *pArray, STSchema *pTSchema, STSRow **ppRow) {
   return 0;
 }
 
+static FORCE_INLINE int32_t tdCompareColId(const void *arg1, const void *arg2) {
+  int32_t colId = *(int32_t *)arg1;
+  STColumn *pCol = (STColumn *)arg2;
+
+  if (colId < pCol->colId) {
+    return -1;
+  } else if (colId == pCol->colId) {
+    return 0;
+  } else {
+    return 1;
+  }
+}
+
 bool tdSTSRowGetVal(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCellVal *pVal) {
   if (colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
     pVal->val = &pIter->pRow->ts;
@@ -712,19 +759,6 @@ bool tdSTSRowGetVal(STSRowIter *pIter, col_id_t colId, col_type_t colType, SCell
   return true;
 }
 
-static int32_t tdCompareColId(const void *arg1, const void *arg2) {
-  int32_t colId = *(int32_t *)arg1;
-  STColumn *pCol = (STColumn *)arg2;
-
-  if (colId < pCol->colId) {
-    return -1;
-  } else if (colId == pCol->colId) {
-    return 0;
-  } else {
-    return 1;
-  }
-}
-
 int32_t tdGetBitmapValTypeII(const void *pBitmap, int16_t colIdx, TDRowValT *pValType) {
   if (!pBitmap || colIdx < 0) {
     TASSERT(0);
@@ -938,7 +972,7 @@ int32_t tdAppendColValToRow(SRowBuilder *pBuilder, col_id_t colId, int8_t colTyp
       break;
     case TD_VTYPE_NONE:
       if (!pBuilder->hasNone) pBuilder->hasNone = true;
-      break;
+      return TSDB_CODE_SUCCESS;
     default:
       ASSERT(0);
       break;
@@ -970,13 +1004,11 @@ int32_t tdAppendColValToKvRow(SRowBuilder *pBuilder, TDRowValT valType, const vo
 
   STSRow *row = pBuilder->pBuf;
   // No need to store None/Null values.
+  SKvRowIdx *pColIdx = (SKvRowIdx *)POINTER_SHIFT(TD_ROW_COL_IDX(row), offset);
+  pColIdx->colId = colId;
+  pColIdx->offset = TD_ROW_LEN(row); // the offset include the TD_ROW_HEAD_LEN
   if (valType == TD_VTYPE_NORM) {
-    // ts key stored in STSRow.ts
-    SKvRowIdx *pColIdx = (SKvRowIdx *)POINTER_SHIFT(TD_ROW_COL_IDX(row), offset);
-    char *ptr = (char *)POINTER_SHIFT(row, TD_ROW_LEN(row));
-    pColIdx->colId = colId;
-    pColIdx->offset = TD_ROW_LEN(row); // the offset include the TD_ROW_HEAD_LEN
-
+    char *ptr = (char *)POINTER_SHIFT(row, TD_ROW_LEN(row));
     if (IS_VAR_DATA_TYPE(colType)) {
       if (isCopyVarData) {
         memcpy(ptr, val, varDataTLen(val));
@@ -987,26 +1019,6 @@ int32_t tdAppendColValToKvRow(SRowBuilder *pBuilder, TDRowValT valType, const vo
       TD_ROW_LEN(row) += TYPE_BYTES[colType];
     }
   }
-#ifdef TD_SUPPORT_BACK2
-  // NULL/None value
-  else {
-    SKvRowIdx *pColIdx = (SKvRowIdx *)POINTER_SHIFT(TD_ROW_COL_IDX(row), offset);
-    char *ptr = (char *)POINTER_SHIFT(row, TD_ROW_LEN(row));
-    pColIdx->colId = colId;
-    pColIdx->offset = TD_ROW_LEN(row); // the offset include the TD_ROW_HEAD_LEN
-    const void *nullVal = getNullValue(colType);
-
-    if (IS_VAR_DATA_TYPE(colType)) {
-      if (isCopyVarData) {
-        memcpy(ptr, nullVal, varDataTLen(nullVal));
-      }
-      TD_ROW_LEN(row) += varDataTLen(nullVal);
-    } else {
-      memcpy(ptr, nullVal, TYPE_BYTES[colType]);
-      TD_ROW_LEN(row) += TYPE_BYTES[colType];
-    }
-  }
-#endif
 
   return 0;
 }
@@ -1044,24 +1056,6 @@ int32_t tdAppendColValToTpRow(SRowBuilder *pBuilder, TDRowValT valType, const vo
       memcpy(POINTER_SHIFT(TD_ROW_DATA(row), offset), val, TYPE_BYTES[colType]);
     }
   }
-#ifdef TD_SUPPORT_BACK2
-  // NULL/None value
-  else {
-    // TODO: Null value for new data types imported since 3.0 need to be defined.
-    const void *nullVal = getNullValue(colType);
-    if (IS_VAR_DATA_TYPE(colType)) {
-      // ts key stored in STSRow.ts
-      *(VarDataOffsetT *)POINTER_SHIFT(TD_ROW_DATA(row), offset) = TD_ROW_LEN(row);
-
-      if (isCopyVarData) {
-        memcpy(POINTER_SHIFT(row, TD_ROW_LEN(row)), nullVal, varDataTLen(nullVal));
-      }
-      TD_ROW_LEN(row) += varDataTLen(nullVal);
-    } else {
-      memcpy(POINTER_SHIFT(TD_ROW_DATA(row), offset), nullVal, TYPE_BYTES[colType]);
-    }
-  }
-#endif
 
   return 0;
 }
@@ -1329,7 +1323,7 @@ void tdSTSRowIterReset(STSRowIter *pIter, STSRow *pRow) {
   pIter->pRow = pRow;
   pIter->pBitmap = tdGetBitmapAddr(pRow, pRow->type, pIter->pSchema->flen, tdRowGetNCols(pRow));
   pIter->offset = 0;
-  pIter->colIdx = PRIMARYKEY_TIMESTAMP_COL_ID;
+  pIter->colIdx = 0; // PRIMARYKEY_TIMESTAMP_COL_ID;
   pIter->kvIdx = 0;
 }
 
@@ -53,14 +53,27 @@ static bool dmCheckDiskSpace() {
   osUpdate();
   if (!osDataSpaceAvailable()) {
     dError("free disk size: %f GB, too little, require %f GB at least at least , quit", (double)tsDataSpace.size.avail / 1024.0 / 1024.0 / 1024.0, (double)tsDataSpace.reserved / 1024.0 / 1024.0 / 1024.0);
+    terrno = TSDB_CODE_NO_AVAIL_DISK;
     return false;
   }
   if (!osLogSpaceAvailable()) {
     dError("free disk size: %f GB, too little, require %f GB at least at least, quit", (double)tsLogSpace.size.avail / 1024.0 / 1024.0 / 1024.0, (double)tsLogSpace.reserved / 1024.0 / 1024.0 / 1024.0);
+    terrno = TSDB_CODE_NO_AVAIL_DISK;
     return false;
   }
   if (!osTempSpaceAvailable()) {
     dError("free disk size: %f GB, too little, require %f GB at least at least, quit", (double)tsTempSpace.size.avail / 1024.0 / 1024.0 / 1024.0, (double)tsTempSpace.reserved / 1024.0 / 1024.0 / 1024.0);
+    terrno = TSDB_CODE_NO_AVAIL_DISK;
+    return false;
+  }
+  return true;
+}
+
+static bool dmCheckDataDirVersion() {
+  char checkDataDirJsonFileName[PATH_MAX];
+  snprintf(checkDataDirJsonFileName, PATH_MAX, "%s/dnode/dnodeCfg.json", tsDataDir);
+  if (taosCheckExistFile(checkDataDirJsonFileName)) {
+    dError("The default data directory %s contains old data of tdengine 2.x, please clear it before running!", tsDataDir);
     return false;
   }
   return true;
@@ -68,6 +81,7 @@ static bool dmCheckDiskSpace() {
 
 int32_t dmInit(int8_t rtype) {
   dInfo("start to init dnode env");
+  if (!dmCheckDataDirVersion()) return -1;
   if (!dmCheckDiskSpace()) return -1;
   if (dmCheckRepeatInit(dmInstance()) != 0) return -1;
   if (dmInitSystem() != 0) return -1;
@ -197,6 +197,30 @@ void mndReleaseStream(SMnode *pMnode, SStreamObj *pStream) {
|
||||||
sdbRelease(pSdb, pStream);
|
sdbRelease(pSdb, pStream);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static void mndShowStreamStatus(char *dst, SStreamObj *pStream) {
|
||||||
|
int8_t status = atomic_load_8(&pStream->status);
|
||||||
|
if (status == STREAM_STATUS__NORMAL) {
|
||||||
|
strcpy(dst, "normal");
|
||||||
|
} else if (status == STREAM_STATUS__STOP) {
|
||||||
|
strcpy(dst, "stop");
|
||||||
|
} else if (status == STREAM_STATUS__FAILED) {
|
||||||
|
strcpy(dst, "failed");
|
||||||
|
} else if (status == STREAM_STATUS__RECOVER) {
|
||||||
|
strcpy(dst, "recover");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
static void mndShowStreamTrigger(char *dst, SStreamObj *pStream) {
|
||||||
|
int8_t trigger = pStream->trigger;
|
||||||
|
if (trigger == STREAM_TRIGGER_AT_ONCE) {
|
||||||
|
     strcpy(dst, "at once");
   } else if (trigger == STREAM_TRIGGER_WINDOW_CLOSE) {
     strcpy(dst, "window close");
   } else if (trigger == STREAM_TRIGGER_MAX_DELAY) {
     strcpy(dst, "max delay");
   }
 }
 
 static int32_t mndCheckCreateStreamReq(SCMCreateStreamReq *pCreate) {
   if (pCreate->name[0] == 0 || pCreate->sql == NULL || pCreate->sql[0] == 0 || pCreate->sourceDB[0] == 0 ||
       pCreate->targetStbFullName[0] == 0) {
@@ -926,8 +950,11 @@ static int32_t mndRetrieveStream(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
     colDataAppend(pColInfo, numOfRows, (const char *)sql, false);
 
+    char status[20 + VARSTR_HEADER_SIZE] = {0};
+    mndShowStreamStatus(&status[VARSTR_HEADER_SIZE], pStream);
+    varDataSetLen(status, strlen(varDataVal(status)));
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
-    colDataAppend(pColInfo, numOfRows, (const char *)&pStream->status, true);
+    colDataAppend(pColInfo, numOfRows, (const char *)&status, false);
 
     char sourceDB[TSDB_DB_NAME_LEN + VARSTR_HEADER_SIZE] = {0};
     tNameFromString(&n, pStream->sourceDb, T_NAME_ACCT | T_NAME_DB);
@@ -958,8 +985,11 @@ static int32_t mndRetrieveStream(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
     colDataAppend(pColInfo, numOfRows, (const char *)&pStream->watermark, false);
 
+    char trigger[20 + VARSTR_HEADER_SIZE] = {0};
+    mndShowStreamTrigger(&trigger[VARSTR_HEADER_SIZE], pStream);
+    varDataSetLen(trigger, strlen(varDataVal(trigger)));
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
-    colDataAppend(pColInfo, numOfRows, (const char *)&pStream->trigger, false);
+    colDataAppend(pColInfo, numOfRows, (const char *)&trigger, false);
 
     numOfRows++;
     sdbRelease(pSdb, pStream);
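The two hunks above switch the `status` and `trigger` columns from raw integers to human-readable var-strings: the text is rendered after a length header, then `varDataSetLen` stamps the header. A minimal sketch of that layout, assuming the 2-byte length prefix behind TDengine's `varDataSetLen`/`varDataVal` macros (the macro definitions below are hypothetical re-creations, not the real headers):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical re-creation of the varstr helpers: a uint16_t length
 * header followed by the payload bytes (no NUL terminator required). */
#define VARSTR_HEADER_SIZE sizeof(uint16_t)
#define varDataLen(v) (*(uint16_t *)(v))
#define varDataVal(v) ((char *)(v) + VARSTR_HEADER_SIZE)
#define varDataSetLen(v, len) (varDataLen(v) = (uint16_t)(len))

/* Render a trigger mode into a varstr buffer, as mndRetrieveStream now does. */
static void showTrigger(char *dst /* >= 20 + VARSTR_HEADER_SIZE bytes */, int trigger) {
  const char *txt = (trigger == 1) ? "at once" : (trigger == 2) ? "window close" : "max delay";
  strcpy(varDataVal(dst), txt);
  varDataSetLen(dst, strlen(varDataVal(dst)));
}
```

Passing `&status`/`&trigger` (the whole varstr buffer) with `isNull = false` is what lets the retrieve path copy the header and payload in one append.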
@@ -171,7 +171,7 @@ int32_t tqProcessTaskRetrieveReq(STQ* pTq, SRpcMsg* pMsg);
 int32_t tqProcessTaskRetrieveRsp(STQ* pTq, SRpcMsg* pMsg);
 int32_t tsdbGetStbIdList(SMeta* pMeta, int64_t suid, SArray* list);
 
-SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pSchema, bool createTb, int64_t suid,
+SSubmitReq* tdBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchema* pSchema, bool createTb, int64_t suid,
                             const char* stbFullName, int32_t vgId, SBatchDeleteReq* pDeleteReq);
 
 // sma
@@ -201,8 +201,9 @@ int32_t tdProcessTSmaInsertImpl(SSma *pSma, int64_t indexUid, const char *msg) {
   }
 
   SBatchDeleteReq deleteReq;
-  SSubmitReq *pSubmitReq = tdBlockToSubmit((const SArray *)msg, pTsmaStat->pTSchema, true, pTsmaStat->pTSma->dstTbUid,
-                                           pTsmaStat->pTSma->dstTbName, pTsmaStat->pTSma->dstVgId, &deleteReq);
+  SSubmitReq *pSubmitReq =
+      tdBlockToSubmit(pSma->pVnode, (const SArray *)msg, pTsmaStat->pTSchema, true, pTsmaStat->pTSma->dstTbUid,
+                      pTsmaStat->pTSma->dstTbName, pTsmaStat->pTSma->dstVgId, &deleteReq);
 
   if (!pSubmitReq) {
     smaError("vgId:%d, failed to gen submit blk while tsma insert for smaIndex %" PRIi64 " since %s", SMA_VID(pSma),
@@ -325,7 +325,7 @@ int32_t tqRetrieveDataBlock(SSDataBlock* pBlock, STqReader* pReader) {
   for (int32_t i = 0; i < colActual; i++) {
     SColumnInfoData* pColData = taosArrayGet(pBlock->pDataBlock, i);
     SCellVal sVal = {0};
-    if (!tdSTSRowIterNext(&iter, pColData->info.colId, pColData->info.type, &sVal)) {
+    if (!tdSTSRowIterFetch(&iter, pColData->info.colId, pColData->info.type, &sVal)) {
       break;
     }
     if (colDataAppend(pColData, curRow, sVal.val, sVal.valType != TD_VTYPE_NORM) < 0) {
@@ -13,10 +13,44 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include "tcommon.h"
+#include "tmsg.h"
 #include "tq.h"
 
-SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, bool createTb, int64_t suid,
-                            const char* stbFullName, int32_t vgId, SBatchDeleteReq* deleteReq) {
+int32_t tdBuildDeleteReq(SVnode* pVnode, const char* stbFullName, const SSDataBlock* pDataBlock,
+                         SBatchDeleteReq* deleteReq) {
+  ASSERT(pDataBlock->info.type == STREAM_DELETE_RESULT);
+  int32_t          totRow = pDataBlock->info.rows;
+  SColumnInfoData* pTsCol = taosArrayGet(pDataBlock->pDataBlock, START_TS_COLUMN_INDEX);
+  SColumnInfoData* pGidCol = taosArrayGet(pDataBlock->pDataBlock, GROUPID_COLUMN_INDEX);
+  for (int32_t row = 0; row < totRow; row++) {
+    int64_t ts = *(int64_t*)colDataGetData(pTsCol, row);
+    /*int64_t groupId = *(int64_t*)colDataGetData(pGidCol, row);*/
+    int64_t groupId = 0;
+    char*   name = buildCtbNameByGroupId(stbFullName, groupId);
+    tqDebug("stream delete msg: groupId :%ld, name: %s", groupId, name);
+    SMetaReader mr = {0};
+    metaReaderInit(&mr, pVnode->pMeta, 0);
+    if (metaGetTableEntryByName(&mr, name) < 0) {
+      metaReaderClear(&mr);
+      taosMemoryFree(name);
+      return -1;
+    }
+    int64_t uid = mr.me.uid;
+    metaReaderClear(&mr);
+    taosMemoryFree(name);
+    SSingleDeleteReq req = {
+        .ts = ts,
+        .uid = uid,
+    };
+    taosArrayPush(deleteReq->deleteReqs, &req);
+  }
+  return 0;
+}
+
+SSubmitReq* tdBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchema* pTSchema, bool createTb,
+                            int64_t suid, const char* stbFullName, int32_t vgId, SBatchDeleteReq* pDeleteReq) {
   SSubmitReq* ret = NULL;
   SArray*     schemaReqs = NULL;
   SArray*     schemaReqSz = NULL;
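The new `tdBuildDeleteReq` above walks the timestamp column of a `STREAM_DELETE_RESULT` block, derives a child-table name from the group id, resolves it to a table uid through the metadata reader, and appends one `(ts, uid)` pair per row. A simplified stand-in for that loop, with plain arrays in place of the vnode metadata store (all names here are hypothetical, not the real vnode API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* One entry of the batch-delete request: delete this uid's row at ts. */
typedef struct { int64_t ts; int64_t uid; } SingleDeleteReq;

/* Toy name -> uid table standing in for the vnode meta store. */
typedef struct { const char *name; int64_t uid; } MetaEntry;

/* Stand-in for metaGetTableEntryByName(): -1 means "no such table". */
static int64_t lookupUid(const MetaEntry *meta, int nMeta, const char *name) {
  for (int i = 0; i < nMeta; i++)
    if (strcmp(meta[i].name, name) == 0) return meta[i].uid;
  return -1;
}

/* Stand-in for tdBuildDeleteReq(): returns number of reqs built, or -1 when a
 * table is missing (the real code also frees the name and clears the reader). */
static int buildDeleteReq(const int64_t *tsCol, const char **nameCol, int rows,
                          const MetaEntry *meta, int nMeta, SingleDeleteReq *out) {
  for (int r = 0; r < rows; r++) {
    int64_t uid = lookupUid(meta, nMeta, nameCol[r]);
    if (uid < 0) return -1;
    out[r] = (SingleDeleteReq){.ts = tsCol[r], .uid = uid};
  }
  return rows;
}
```

Note that the hunk hard-codes `groupId = 0` and leaves the per-row group-id read commented out, so at this point every row resolves through the same generated child-table name.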
@@ -33,9 +67,13 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
     schemaReqSz = taosArrayInit(sz, sizeof(int32_t));
     for (int32_t i = 0; i < sz; i++) {
       SSDataBlock* pDataBlock = taosArrayGet(pBlocks, i);
-      if (pDataBlock->info.type == STREAM_DELETE_DATA) {
-        //
+      if (pDataBlock->info.type == STREAM_DELETE_RESULT) {
+        int32_t padding1 = 0;
+        void*   padding2 = taosMemoryMalloc(1);
+        taosArrayPush(schemaReqSz, &padding1);
+        taosArrayPush(schemaReqs, &padding2);
       }
 
       STagVal tagVal = {
           .cid = taosArrayGetSize(pDataBlock->pDataBlock) + 1,
           .type = TSDB_DATA_TYPE_UBIGINT,
@@ -97,7 +135,10 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
   int32_t cap = sizeof(SSubmitReq);
   for (int32_t i = 0; i < sz; i++) {
     SSDataBlock* pDataBlock = taosArrayGet(pBlocks, i);
-    int32_t rows = pDataBlock->info.rows;
+    if (pDataBlock->info.type == STREAM_DELETE_RESULT) {
+      continue;
+    }
+    int32_t rows = pDataBlock->info.rows;
     // TODO min
     int32_t rowSize = pDataBlock->info.rowSize;
     int32_t maxLen = TD_ROW_MAX_BYTES_FROM_SCHEMA(pTSchema);
@@ -119,6 +160,11 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
   SSubmitBlk* blkHead = POINTER_SHIFT(ret, sizeof(SSubmitReq));
   for (int32_t i = 0; i < sz; i++) {
     SSDataBlock* pDataBlock = taosArrayGet(pBlocks, i);
+    if (pDataBlock->info.type == STREAM_DELETE_RESULT) {
+      pDeleteReq->suid = suid;
+      tdBuildDeleteReq(pVnode, stbFullName, pDataBlock, pDeleteReq);
+      continue;
+    }
 
     blkHead->numOfRows = htonl(pDataBlock->info.rows);
     blkHead->sversion = htonl(pTSchema->version);
@@ -188,7 +234,7 @@ void tqTableSink(SStreamTask* pTask, void* vnode, int64_t ver, void* data) {
 
   ASSERT(pTask->tbSink.pTSchema);
   deleteReq.deleteReqs = taosArrayInit(0, sizeof(SSingleDeleteReq));
-  SSubmitReq* pReq = tdBlockToSubmit(pRes, pTask->tbSink.pTSchema, true, pTask->tbSink.stbUid,
+  SSubmitReq* pReq = tdBlockToSubmit(pVnode, pRes, pTask->tbSink.pTSchema, true, pTask->tbSink.stbUid,
                                      pTask->tbSink.stbFullName, pVnode->config.vgId, &deleteReq);
 
   tqDebug("vgId:%d, task %d convert blocks over, put into write-queue", TD_VID(pVnode), pTask->taskId);
@@ -201,12 +247,14 @@ void tqTableSink(SStreamTask* pTask, void* vnode, int64_t ver, void* data) {
     ASSERT(0);
   }
   SEncoder encoder;
-  void*    buf = taosMemoryCalloc(1, len + sizeof(SMsgHead));
+  void*    buf = rpcMallocCont(len + sizeof(SMsgHead));
   void*    abuf = POINTER_SHIFT(buf, sizeof(SMsgHead));
   tEncoderInit(&encoder, abuf, len);
   tEncodeSBatchDeleteReq(&encoder, &deleteReq);
   tEncoderClear(&encoder);
 
+  ((SMsgHead*)buf)->vgId = pVnode->config.vgId;
+
   if (taosArrayGetSize(deleteReq.deleteReqs) != 0) {
     SRpcMsg msg = {
         .msgType = TDMT_VND_BATCH_DEL,
@@ -182,7 +182,7 @@ static int32_t setColumnIdSlotList(STsdbReader* pReader, SSDataBlock* pBlock) {
 
     if (IS_VAR_DATA_TYPE(pCol->info.type)) {
       pSupInfo->buildBuf[i] = taosMemoryMalloc(pCol->info.bytes);
-      tsdbInfo("-------------------%d\n", pCol->info.bytes);
+      // tsdbInfo("-------------------%d\n", pCol->info.bytes);
     }
   }
 
@@ -145,7 +145,7 @@ int32_t vnodeProcessWriteMsg(SVnode *pVnode, SRpcMsg *pMsg, int64_t version, SRp
   int32_t len;
   int32_t ret;
 
-  vTrace("vgId:%d, start to process write request %s, index:%" PRId64, TD_VID(pVnode), TMSG_INFO(pMsg->msgType),
+  vDebug("vgId:%d, start to process write request %s, index:%" PRId64, TD_VID(pVnode), TMSG_INFO(pMsg->msgType),
          version);
 
   pVnode->state.applied = version;
@@ -1071,6 +1071,7 @@ static int32_t vnodeProcessBatchDeleteReq(SVnode *pVnode, int64_t version, void
       // TODO
     }
   }
+  taosArrayDestroy(deleteReq.deleteReqs);
   return 0;
 }
@@ -883,6 +883,32 @@ int32_t ctgGetVgInfoFromHashValue(SCatalog *pCtg, SDBVgInfo *dbInfo, const SName
   CTG_RET(code);
 }
 
+int32_t ctgHashValueComp(void const *lp, void const *rp) {
+  uint32_t    *key = (uint32_t *)lp;
+  SVgroupInfo *pVg = *(SVgroupInfo **)rp;
+
+  if (*key < pVg->hashBegin) {
+    return -1;
+  } else if (*key > pVg->hashEnd) {
+    return 1;
+  }
+
+  return 0;
+}
+
+int ctgVgInfoComp(const void* lp, const void* rp) {
+  SVgroupInfo *pLeft = *(SVgroupInfo **)lp;
+  SVgroupInfo *pRight = *(SVgroupInfo **)rp;
+  if (pLeft->hashBegin < pRight->hashBegin) {
+    return -1;
+  } else if (pLeft->hashBegin > pRight->hashBegin) {
+    return 1;
+  }
+
+  return 0;
+}
+
 int32_t ctgGetVgInfosFromHashValue(SCatalog *pCtg, SCtgTaskReq* tReq, SDBVgInfo *dbInfo, SCtgTbHashsCtx *pCtx, char* dbFName, SArray* pNames, bool update) {
   int32_t code = 0;
   SCtgTask* pTask = tReq->pTask;
@@ -923,9 +949,19 @@ int32_t ctgGetVgInfosFromHashValue(SCatalog *pCtg, SCtgTaskReq* tReq, SDBVgInfo
       }
     }
 
+    taosHashCancelIterate(dbInfo->vgHash, pIter);
     return TSDB_CODE_SUCCESS;
   }
 
+  SArray* pVgList = taosArrayInit(vgNum, POINTER_BYTES);
+  void *pIter = taosHashIterate(dbInfo->vgHash, NULL);
+  while (pIter) {
+    taosArrayPush(pVgList, &pIter);
+    pIter = taosHashIterate(dbInfo->vgHash, pIter);
+  }
+
+  taosArraySort(pVgList, ctgVgInfoComp);
+
   char tbFullName[TSDB_TABLE_FNAME_LEN];
   sprintf(tbFullName, "%s.", dbFName);
   int32_t offset = strlen(tbFullName);
@@ -940,25 +976,20 @@ int32_t ctgGetVgInfosFromHashValue(SCatalog *pCtg, SCtgTaskReq* tReq, SDBVgInfo
 
     uint32_t hashValue = (*fp)(tbFullName, (uint32_t)tbNameLen);
 
-    void *pIter = taosHashIterate(dbInfo->vgHash, NULL);
-    while (pIter) {
-      vgInfo = pIter;
-      if (hashValue >= vgInfo->hashBegin && hashValue <= vgInfo->hashEnd) {
-        taosHashCancelIterate(dbInfo->vgHash, pIter);
-        break;
-      }
-
-      pIter = taosHashIterate(dbInfo->vgHash, pIter);
-      vgInfo = NULL;
-    }
+    SVgroupInfo **p = taosArraySearch(pVgList, &hashValue, ctgHashValueComp, TD_EQ);
 
-    if (NULL == vgInfo) {
+    if (NULL == p) {
       ctgError("no hash range found for hash value [%u], db:%s, numOfVgId:%d", hashValue, dbFName, taosHashGetSize(dbInfo->vgHash));
+      ASSERT(0);
+      taosArrayDestroy(pVgList);
       CTG_ERR_RET(TSDB_CODE_CTG_INTERNAL_ERROR);
     }
 
+    vgInfo = *p;
+
     SVgroupInfo* pNewVg = taosMemoryMalloc(sizeof(SVgroupInfo));
     if (NULL == pNewVg) {
+      taosArrayDestroy(pVgList);
       CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
     }
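The catalog change above replaces a per-table linear scan of the vgroup hash with a one-time snapshot: collect the vgroups into an array, sort once by `hashBegin` (`ctgVgInfoComp`), then binary-search each table's hash value against the `[hashBegin, hashEnd]` ranges (`ctgHashValueComp` with `taosArraySearch`). The same idea with libc `qsort`/`bsearch` and a flattened struct (the real code searches an array of pointers):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified SVgroupInfo: contiguous, non-overlapping hash ranges. */
typedef struct { uint32_t hashBegin, hashEnd; int32_t vgId; } VgInfo;

/* Order vgroups by range start (counterpart of ctgVgInfoComp). */
static int vgComp(const void *l, const void *r) {
  const VgInfo *a = l, *b = r;
  return (a->hashBegin > b->hashBegin) - (a->hashBegin < b->hashBegin);
}

/* Range comparator (counterpart of ctgHashValueComp): the key is a hash
 * value, the element a range; "equal" means the key falls inside the range. */
static int hashComp(const void *key, const void *elem) {
  uint32_t      k = *(const uint32_t *)key;
  const VgInfo *v = elem;
  if (k < v->hashBegin) return -1;
  if (k > v->hashEnd) return 1;
  return 0;
}

/* Find the vgroup owning hashValue, or NULL when no range covers it
 * (counterpart of taosArraySearch with TD_EQ). Array must be sorted. */
static const VgInfo *findVg(const VgInfo *vgs, size_t n, uint32_t hashValue) {
  return bsearch(&hashValue, vgs, n, sizeof(VgInfo), hashComp);
}
```

This turns an O(tables × vgroups) lookup into O(vgroups log vgroups) for the sort plus O(log vgroups) per table, which is the point of the rewrite when many table names are resolved against one database.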
@@ -977,6 +1008,8 @@ int32_t ctgGetVgInfosFromHashValue(SCatalog *pCtg, SCtgTaskReq* tReq, SDBVgInfo
     }
   }
 
+  taosArrayDestroy(pVgList);
+
   CTG_RET(code);
 }
 
@@ -52,13 +52,6 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int
 
 #define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData ? 1 : 0)
 
-#define START_TS_COLUMN_INDEX 0
-#define END_TS_COLUMN_INDEX 1
-#define UID_COLUMN_INDEX 2
-#define GROUPID_COLUMN_INDEX 3
-#define CALCULATE_START_TS_COLUMN_INDEX 4
-#define CALCULATE_END_TS_COLUMN_INDEX 5
-
 enum {
   // when this task starts to execute, this status will set
   TASK_NOT_COMPLETED = 0x1u,
@@ -682,6 +675,7 @@ typedef struct SWindowRowsSup {
   TSKEY prevTs;
   int32_t startRowIndex;
   int32_t numOfRows;
+  uint64_t groupId;
 } SWindowRowsSup;
 
 typedef struct SSessionAggOperatorInfo {
@@ -701,6 +695,7 @@ typedef struct SSessionAggOperatorInfo {
 typedef struct SResultWindowInfo {
   SResultRowPosition pos;
   STimeWindow win;
+  uint64_t groupId;
   bool isOutput;
   bool isClosed;
 } SResultWindowInfo;
@@ -741,6 +736,7 @@ typedef struct STimeSliceOperatorInfo {
   SArray* pPrevRow;     // SArray<SGroupValue>
   SArray* pNextRow;     // SArray<SGroupValue>
   SArray* pLinearInfo;  // SArray<SFillLinearInfo>
+  bool fillLastPoint;
   bool isPrevRowSet;
   bool isNextRowSet;
   int32_t fillType;  // fill type
@@ -1014,9 +1010,8 @@ SResultWindowInfo* getSessionTimeWindow(SStreamAggSupporter* pAggSup, TSKEY star
 SResultWindowInfo* getCurSessionWindow(SStreamAggSupporter* pAggSup, TSKEY startTs,
                                        TSKEY endTs, uint64_t groupId, int64_t gap, int32_t* pIndex);
 bool isInTimeWindow(STimeWindow* pWin, TSKEY ts, int64_t gap);
-int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs,
-                                TSKEY* pEndTs, int32_t rows, int32_t start, int64_t gap, SHashObj* pStDeleted);
 bool functionNeedToExecute(SqlFunctionCtx* pCtx);
+bool isOverdue(TSKEY ts, STimeWindowAggSupp* pSup);
 bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup);
 bool isDeletedWindow(STimeWindow* pWin, uint64_t groupId, SAggSupporter* pSup);
 void appendOneRow(SSDataBlock* pBlock, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t* pUid);
@@ -37,7 +37,6 @@ typedef struct SFillLinearInfo {
   SPoint start;
   SPoint end;
   bool hasNull;
-  bool fillLastPoint;
   int16_t type;
   int32_t bytes;
 } SFillLinearInfo;
@@ -1277,8 +1277,12 @@ void destroyTableQueryInfoImpl(STableQueryInfo* pTableQueryInfo) {
 }
 
 void setResultRowInitCtx(SResultRow* pResult, SqlFunctionCtx* pCtx, int32_t numOfOutput, int32_t* rowEntryInfoOffset) {
+  bool init = false;
   for (int32_t i = 0; i < numOfOutput; ++i) {
     pCtx[i].resultInfo = getResultEntryInfo(pResult, i, rowEntryInfoOffset);
+    if (init) {
+      continue;
+    }
 
     struct SResultRowEntryInfo* pResInfo = pCtx[i].resultInfo;
     if (isRowEntryCompleted(pResInfo) && isRowEntryInitialized(pResInfo)) {
@@ -1295,6 +1299,8 @@ void setResultRowInitCtx(SResultRow* pResult, SqlFunctionCtx* pCtx, int32_t numO
       } else {
         pResInfo->initialized = true;
       }
+    } else {
+      init = true;
     }
   }
 }
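The `init` flag added to `setResultRowInitCtx` above exploits the fact that all entries of one result row share their lifecycle: once the first entry shows the row is freshly allocated, the remaining entries skip the completed/initialized inspection and only get their `resultInfo` pointer wired up. A toy model of that control flow (structure and names are illustrative, not the real executor types):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a result-row entry; `inspected` counts how often the
 * completed/initialized check actually ran. */
typedef struct { bool completed, initialized; int inspected; } Entry;

static void initRow(Entry *e, int n) {
  bool init = false; /* set once we know the whole row is fresh */
  for (int i = 0; i < n; i++) {
    if (init) continue; /* skip the redundant per-entry inspection */
    e[i].inspected++;
    if (e[i].completed && e[i].initialized) {
      /* reopen a finished row (elided in this sketch) */
    } else {
      init = true;
    }
  }
}
```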
@@ -1943,6 +1949,7 @@ int32_t loadRemoteDataCallback(void* param, SDataBuf* pMsg, int32_t code) {
   SExchangeInfo* pExchangeInfo = taosAcquireRef(exchangeObjRefPool, pWrapper->exchangeId);
   if (pExchangeInfo == NULL) {
     qWarn("failed to acquire exchange operator, since it may have been released");
+    taosMemoryFree(pMsg->pData);
     return TSDB_CODE_SUCCESS;
   }
 
@@ -1963,6 +1970,7 @@ int32_t loadRemoteDataCallback(void* param, SDataBuf* pMsg, int32_t code) {
     qDebug("%s fetch rsp received, index:%d, blocks:%d, rows:%d", pSourceDataInfo->taskId, index, pRsp->numOfBlocks,
            pRsp->numOfRows);
   } else {
+    taosMemoryFree(pMsg->pData);
     pSourceDataInfo->code = code;
     qDebug("%s fetch rsp received, index:%d, error:%d", pSourceDataInfo->taskId, index, tstrerror(code));
   }
 
@@ -1174,10 +1174,15 @@ static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock
   for (int32_t rowId = 0; rowId < pBlock->info.rows; rowId++) {
     SResultRowInfo dumyInfo;
     dumyInfo.cur.pageId = -1;
-    STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, tsCol[rowId], &pInfo->interval, TSDB_ORDER_ASC);
+    bool isClosed = false;
+    STimeWindow win = {.skey = INT64_MIN, .ekey = INT64_MAX};
+    if (isOverdue(tsCol[rowId], &pInfo->twAggSup)) {
+      win = getActiveTimeWindow(NULL, &dumyInfo, tsCol[rowId], &pInfo->interval, TSDB_ORDER_ASC);
+      isClosed = isCloseWindow(&win, &pInfo->twAggSup);
+    }
     // must check update info first.
     bool update = updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.uid, tsCol[rowId]);
-    if ((update || (isSignleIntervalWindow(pInfo) && isCloseWindow(&win, &pInfo->twAggSup) &&
+    if ((update || (isSignleIntervalWindow(pInfo) && isClosed &&
                     isDeletedWindow(&win, pBlock->info.groupId, pInfo->sessionSup.pIntervalAggSup))) && out) {
       appendOneRow(pInfo->pUpdateDataRes, tsCol + rowId, tsCol + rowId, &pBlock->info.uid);
     }
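The `checkUpdateData` hunk above now computes the active time window (and its closed flag) only for timestamps that are already overdue relative to the watermark, instead of on every row. A sketch of the two helpers involved; the overdue rule (`ts < maxTs - watermark`) is an assumption modelled on `isOverdue`, and `windowStart` is a stand-in for `getActiveTimeWindow` with a plain interval:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Reduced STimeWindowAggSupp: largest timestamp seen so far plus watermark. */
typedef struct { int64_t maxTs, watermark; } TimeWindowAggSupp;

/* Assumed overdue rule: the row is behind the watermark frontier. */
static bool isOverdueTs(int64_t ts, const TimeWindowAggSupp *sup) {
  return sup->maxTs != INT64_MIN && ts < sup->maxTs - sup->watermark;
}

/* Align a timestamp down to its interval window start (simplified stand-in
 * for getActiveTimeWindow in ascending order). */
static int64_t windowStart(int64_t ts, int64_t interval) {
  int64_t start = ts - (ts % interval);
  if (ts < 0 && ts % interval != 0) start -= interval; /* floor for negatives */
  return start;
}
```

Only rows that pass `isOverdueTs` pay for the window computation and the closed-window check; everything else keeps the sentinel `{INT64_MIN, INT64_MAX}` window and `isClosed == false`.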
@@ -90,16 +90,18 @@ static void updateTimeWindowInfo(SColumnInfoData* pColData, STimeWindow* pWin, b
   ts[4] = pWin->ekey + delta;  // window end key
 }
 
-static void doKeepTuple(SWindowRowsSup* pRowSup, int64_t ts) {
+static void doKeepTuple(SWindowRowsSup* pRowSup, int64_t ts, uint64_t groupId) {
   pRowSup->win.ekey = ts;
   pRowSup->prevTs = ts;
   pRowSup->numOfRows += 1;
+  pRowSup->groupId = groupId;
 }
 
-static void doKeepNewWindowStartInfo(SWindowRowsSup* pRowSup, const int64_t* tsList, int32_t rowIndex) {
+static void doKeepNewWindowStartInfo(SWindowRowsSup* pRowSup, const int64_t* tsList, int32_t rowIndex, uint64_t groupId) {
   pRowSup->startRowIndex = rowIndex;
   pRowSup->numOfRows = 0;
   pRowSup->win.skey = tsList[rowIndex];
+  pRowSup->groupId = groupId;
 }
 
 static FORCE_INLINE int32_t getForwardStepsInBlock(int32_t numOfRows, __block_search_fn_t searchFn, TSKEY ekey,
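`doKeepTuple` and `doKeepNewWindowStartInfo` now record the group id alongside the window bounds, matching the `groupId` field added to `SWindowRowsSup`, so downstream code can detect when a block switches to a different group. A reduced sketch of the two helpers (field set trimmed for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Reduced SWindowRowsSup carrying the new groupId field. */
typedef struct {
  int64_t  skey, ekey;    /* current window bounds */
  int32_t  startRowIndex;
  int32_t  numOfRows;
  uint64_t groupId;       /* owner of the rows being accumulated */
} WindowRowsSup;

/* Counterpart of doKeepNewWindowStartInfo(): open a window at tsList[row]. */
static void keepNewWindowStart(WindowRowsSup *s, const int64_t *ts, int32_t row, uint64_t gid) {
  s->startRowIndex = row;
  s->numOfRows = 0;
  s->skey = ts[row];
  s->groupId = gid;
}

/* Counterpart of doKeepTuple(): extend the window by one row. */
static void keepTuple(WindowRowsSup *s, int64_t ts, uint64_t gid) {
  s->ekey = ts;
  s->numOfRows += 1;
  s->groupId = gid;
}
```

The later `doStateWindowAggImpl` hunk uses exactly this field, closing the current window whenever `gid != pRowSup->groupId`.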
@@ -851,23 +853,34 @@ static int32_t saveResult(int64_t ts, int32_t pageId, int32_t offset, uint64_t g
   return TSDB_CODE_SUCCESS;
 }
 
+static int32_t saveWinResult(int64_t ts, int32_t pageId, int32_t offset, uint64_t groupId, SHashObj* pUpdatedMap) {
+  SResKeyPos* newPos = taosMemoryMalloc(sizeof(SResKeyPos) + sizeof(uint64_t));
+  if (newPos == NULL) {
+    return TSDB_CODE_OUT_OF_MEMORY;
+  }
+  newPos->groupId = groupId;
+  newPos->pos = (SResultRowPosition){.pageId = pageId, .offset = offset};
+  *(int64_t*)newPos->key = ts;
+  SWinRes key = {.ts = ts, .groupId = groupId};
+  if (taosHashPut(pUpdatedMap, &key, sizeof(SWinRes), &newPos, sizeof(void*)) != TSDB_CODE_SUCCESS) {
+    taosMemoryFree(newPos);
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
+static int32_t saveWinResultRow(SResultRow* result, uint64_t groupId, SHashObj* pUpdatedMap) {
+  return saveWinResult(result->win.skey, result->pageId, result->offset, groupId, pUpdatedMap);
+}
+
 static int32_t saveResultRow(SResultRow* result, uint64_t groupId, SArray* pUpdated) {
   return saveResult(result->win.skey, result->pageId, result->offset, groupId, pUpdated);
 }
 
-static void removeResult(SArray* pUpdated, SWinRes* pKey) {
-  int32_t size = taosArrayGetSize(pUpdated);
-  int32_t index = binarySearchCom(pUpdated, size, pKey, TSDB_ORDER_DESC, compareResKey);
-  if (index >= 0 && 0 == compareResKey(pKey, pUpdated, index)) {
-    taosArrayRemove(pUpdated, index);
-  }
-}
-
-static void removeResults(SArray* pWins, SArray* pUpdated) {
+static void removeResults(SArray* pWins, SHashObj* pUpdatedMap) {
   int32_t size = taosArrayGetSize(pWins);
   for (int32_t i = 0; i < size; i++) {
     SWinRes* pW = taosArrayGet(pWins, i);
-    removeResult(pUpdated, pW);
+    taosHashRemove(pUpdatedMap, pW, sizeof(SWinRes));
   }
 }
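`saveWinResult` above switches the pending-result bookkeeping from a sorted array (binary search on insert and remove) to a hash map keyed by `(window start ts, group id)`: re-saving the same window is a no-op and `removeResults` becomes a plain key delete. The same idea with a tiny fixed-capacity linear-probing set standing in for `taosHashPut`/`taosHashRemove` (this map is illustrative, not the TDengine hash implementation):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Composite key used by the new pUpdatedMap (counterpart of SWinRes). */
typedef struct { int64_t ts; uint64_t groupId; } WinRes;

#define CAP 64
/* used[i]: 0 = empty, 1 = occupied, 2 = tombstone (keeps probe chains intact). */
typedef struct { WinRes keys[CAP]; uint8_t used[CAP]; int count; } WinSet;

static size_t slotOf(const WinRes *k) {
  return ((uint64_t)k->ts * 1315423911u ^ k->groupId) % CAP;
}

/* Counterpart of taosHashPut: inserting an already-present window is a no-op. */
static void winSetPut(WinSet *s, WinRes k) {
  size_t i = slotOf(&k), free_slot = CAP;
  while (s->used[i]) {
    if (s->used[i] == 1 && memcmp(&s->keys[i], &k, sizeof k) == 0) return;
    if (s->used[i] == 2 && free_slot == CAP) free_slot = i;
    i = (i + 1) % CAP;
  }
  if (free_slot != CAP) i = free_slot;
  s->keys[i] = k;
  s->used[i] = 1;
  s->count++;
}

/* Counterpart of taosHashRemove. */
static void winSetRemove(WinSet *s, WinRes k) {
  size_t i = slotOf(&k);
  while (s->used[i]) {
    if (s->used[i] == 1 && memcmp(&s->keys[i], &k, sizeof k) == 0) {
      s->used[i] = 2;
      s->count--;
      return;
    }
    i = (i + 1) % CAP;
  }
}
```

With the old sorted-array scheme, deleting one window cost a binary search plus an array shift; keyed deletion is what lets `removeResults` drop the `binarySearchCom`/`taosArrayRemove` pair.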
@@ -894,11 +907,14 @@ int32_t compareWinRes(void* pKey, void* data, int32_t index) {
   return -1;
 }
 
-static void removeDeleteResults(SArray* pUpdated, SArray* pDelWins) {
-  int32_t upSize = taosArrayGetSize(pUpdated);
+static void removeDeleteResults(SHashObj* pUpdatedMap, SArray* pDelWins) {
+  if (!pUpdatedMap || taosHashGetSize(pUpdatedMap) == 0) {
+    return;
+  }
   int32_t delSize = taosArrayGetSize(pDelWins);
-  for (int32_t i = 0; i < upSize; i++) {
-    SResKeyPos* pResKey = taosArrayGetP(pUpdated, i);
+  void* pIte = NULL;
+  while ((pIte = taosHashIterate(pUpdatedMap, pIte)) != NULL) {
+    SResKeyPos* pResKey = (SResKeyPos*)pIte;
     int32_t index = binarySearchCom(pDelWins, delSize, pResKey, TSDB_ORDER_DESC, compareWinRes);
     if (index >= 0 && 0 == compareWinRes(pResKey, pDelWins, index)) {
       taosArrayRemove(pDelWins, index);
@@ -914,7 +930,7 @@ bool isOverdue(TSKEY ts, STimeWindowAggSupp* pSup) {
 
 bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup) { return isOverdue(pWin->ekey, pSup); }
 
 static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo, SSDataBlock* pBlock,
-                            int32_t scanFlag, SArray* pUpdated) {
+                            int32_t scanFlag, SHashObj* pUpdatedMap) {
   SIntervalAggOperatorInfo* pInfo = (SIntervalAggOperatorInfo*)pOperatorInfo->info;
 
   SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo;
@@ -940,7 +956,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
   }
 
   if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM && pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) {
-    saveResultRow(pResult, tableGroupId, pUpdated);
+    saveWinResultRow(pResult, tableGroupId, pUpdatedMap);
     setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
   }
 }
@@ -997,7 +1013,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
   }
 
   if (pInfo->execModel == OPTR_EXEC_MODEL_STREAM && pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) {
-    saveResultRow(pResult, tableGroupId, pUpdated);
+    saveWinResultRow(pResult, tableGroupId, pUpdatedMap);
     setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
   }
 
@@ -1142,7 +1158,7 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
 
     char* val = colDataGetData(pStateColInfoData, j);
 
-    if (!pInfo->hasKey) {
+    if (gid != pRowSup->groupId || !pInfo->hasKey) {
       // todo extract method
       if (IS_VAR_DATA_TYPE(pInfo->stateKey.type)) {
        varDataCopy(pInfo->stateKey.pData, val);
@@ -1152,10 +1168,10 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
 
       pInfo->hasKey = true;
 
-      doKeepNewWindowStartInfo(pRowSup, tsList, j);
-      doKeepTuple(pRowSup, tsList[j]);
+      doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
+      doKeepTuple(pRowSup, tsList[j], gid);
     } else if (compareVal(val, &pInfo->stateKey)) {
-      doKeepTuple(pRowSup, tsList[j]);
+      doKeepTuple(pRowSup, tsList[j], gid);
       if (j == 0 && pRowSup->startRowIndex != 0) {
         pRowSup->startRowIndex = 0;
       }
@@ -1177,8 +1193,8 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI
                        pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
 
       // here we start a new session window
-      doKeepNewWindowStartInfo(pRowSup, tsList, j);
-      doKeepTuple(pRowSup, tsList[j]);
+      doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
+      doKeepTuple(pRowSup, tsList[j], gid);
 
       // todo extract method
       if (IS_VAR_DATA_TYPE(pInfo->stateKey.type)) {
@@ -1437,7 +1453,7 @@ static void doClearWindows(SAggSupporter* pAggSup, SExprSupp* pSup1, SInterval*
   }
 }
 
-static int32_t getAllIntervalWindow(SHashObj* pHashMap, SArray* resWins) {
+static int32_t getAllIntervalWindow(SHashObj* pHashMap, SHashObj* resWins) {
   void* pIte = NULL;
   size_t keyLen = 0;
   while ((pIte = taosHashIterate(pHashMap, pIte)) != NULL) {
|
||||||
|
@ -1446,7 +1462,7 @@ static int32_t getAllIntervalWindow(SHashObj* pHashMap, SArray* resWins) {
|
||||||
ASSERT(keyLen == GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY)));
|
ASSERT(keyLen == GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY)));
|
||||||
TSKEY ts = *(int64_t*)((char*)key + sizeof(uint64_t));
|
TSKEY ts = *(int64_t*)((char*)key + sizeof(uint64_t));
|
||||||
SResultRowPosition* pPos = (SResultRowPosition*)pIte;
|
SResultRowPosition* pPos = (SResultRowPosition*)pIte;
|
||||||
int32_t code = saveResult(ts, pPos->pageId, pPos->offset, groupId, resWins);
|
int32_t code = saveWinResult(ts, pPos->pageId, pPos->offset, groupId, resWins);
|
||||||
if (code != TSDB_CODE_SUCCESS) {
|
if (code != TSDB_CODE_SUCCESS) {
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
@ -1455,7 +1471,7 @@ static int32_t getAllIntervalWindow(SHashObj* pHashMap, SArray* resWins) {
|
||||||
}
|
}
|
||||||
|
|
||||||
static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup, SInterval* pInterval,
|
static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup, SInterval* pInterval,
|
||||||
SHashObj* pPullDataMap, SArray* closeWins, SArray* pRecyPages,
|
SHashObj* pPullDataMap, SHashObj* closeWins, SArray* pRecyPages,
|
||||||
SDiskbasedBuf* pDiscBuf) {
|
SDiskbasedBuf* pDiscBuf) {
|
||||||
qDebug("===stream===close interval window");
|
qDebug("===stream===close interval window");
|
||||||
void* pIte = NULL;
|
void* pIte = NULL;
|
||||||
|
@ -1487,7 +1503,7 @@ static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup,
|
||||||
}
|
}
|
||||||
SResultRowPosition* pPos = (SResultRowPosition*)pIte;
|
SResultRowPosition* pPos = (SResultRowPosition*)pIte;
|
||||||
if (pSup->calTrigger == STREAM_TRIGGER_WINDOW_CLOSE) {
|
if (pSup->calTrigger == STREAM_TRIGGER_WINDOW_CLOSE) {
|
||||||
int32_t code = saveResult(ts, pPos->pageId, pPos->offset, groupId, closeWins);
|
int32_t code = saveWinResult(ts, pPos->pageId, pPos->offset, groupId, closeWins);
|
||||||
if (code != TSDB_CODE_SUCCESS) {
|
if (code != TSDB_CODE_SUCCESS) {
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
@ -1577,11 +1593,14 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
SOperatorInfo* downstream = pOperator->pDownstream[0];
|
SOperatorInfo* downstream = pOperator->pDownstream[0];
|
||||||
|
|
||||||
SArray* pUpdated = taosArrayInit(4, POINTER_BYTES); // SResKeyPos
|
SArray* pUpdated = taosArrayInit(4, POINTER_BYTES); // SResKeyPos
|
||||||
|
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_TIMESTAMP);
|
||||||
|
SHashObj* pUpdatedMap = taosHashInit(1024, hashFn, false, HASH_NO_LOCK);
|
||||||
while (1) {
|
while (1) {
|
||||||
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
||||||
if (pBlock == NULL) {
|
if (pBlock == NULL) {
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
// qInfo("===stream===%ld", pBlock->info.version);
|
||||||
printDataBlock(pBlock, "single interval recv");
|
printDataBlock(pBlock, "single interval recv");
|
||||||
|
|
||||||
if (pBlock->info.type == STREAM_CLEAR) {
|
if (pBlock->info.type == STREAM_CLEAR) {
|
||||||
|
@ -1594,7 +1613,7 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
doDeleteSpecifyIntervalWindow(&pInfo->aggSup, pBlock, pInfo->pDelWins, &pInfo->interval);
|
doDeleteSpecifyIntervalWindow(&pInfo->aggSup, pBlock, pInfo->pDelWins, &pInfo->interval);
|
||||||
continue;
|
continue;
|
||||||
} else if (pBlock->info.type == STREAM_GET_ALL) {
|
} else if (pBlock->info.type == STREAM_GET_ALL) {
|
||||||
getAllIntervalWindow(pInfo->aggSup.pResultRowHashTable, pUpdated);
|
getAllIntervalWindow(pInfo->aggSup.pResultRowHashTable, pUpdatedMap);
|
||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -1617,17 +1636,24 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
}
|
}
|
||||||
|
|
||||||
pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, pBlock->info.window.ekey);
|
pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, pBlock->info.window.ekey);
|
||||||
hashIntervalAgg(pOperator, &pInfo->binfo.resultRowInfo, pBlock, MAIN_SCAN, pUpdated);
|
hashIntervalAgg(pOperator, &pInfo->binfo.resultRowInfo, pBlock, MAIN_SCAN, pUpdatedMap);
|
||||||
}
|
}
|
||||||
|
|
||||||
pOperator->status = OP_RES_TO_RETURN;
|
pOperator->status = OP_RES_TO_RETURN;
|
||||||
closeIntervalWindow(pInfo->aggSup.pResultRowHashTable, &pInfo->twAggSup, &pInfo->interval, NULL, pUpdated,
|
closeIntervalWindow(pInfo->aggSup.pResultRowHashTable, &pInfo->twAggSup, &pInfo->interval, NULL, pUpdatedMap,
|
||||||
pInfo->pRecycledPages, pInfo->aggSup.pResultBuf);
|
pInfo->pRecycledPages, pInfo->aggSup.pResultBuf);
|
||||||
|
|
||||||
|
void* pIte = NULL;
|
||||||
|
while ((pIte = taosHashIterate(pUpdatedMap, pIte)) != NULL) {
|
||||||
|
taosArrayPush(pUpdated, pIte);
|
||||||
|
}
|
||||||
|
taosArraySort(pUpdated, resultrowComparAsc);
|
||||||
|
|
||||||
finalizeUpdatedResult(pOperator->exprSupp.numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pSup->rowEntryInfoOffset);
|
finalizeUpdatedResult(pOperator->exprSupp.numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pSup->rowEntryInfoOffset);
|
||||||
initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated);
|
initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated);
|
||||||
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
||||||
removeDeleteResults(pUpdated, pInfo->pDelWins);
|
removeDeleteResults(pUpdatedMap, pInfo->pDelWins);
|
||||||
|
taosHashCleanup(pUpdatedMap);
|
||||||
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
|
doBuildDeleteResult(pInfo->pDelWins, &pInfo->delIndex, pInfo->pDelRes);
|
||||||
if (pInfo->pDelRes->info.rows > 0) {
|
if (pInfo->pDelRes->info.rows > 0) {
|
||||||
return pInfo->pDelRes;
|
return pInfo->pDelRes;
|
||||||
|
@ -1911,7 +1937,7 @@ _error:
|
||||||
return NULL;
|
return NULL;
|
||||||
}
|
}
|
||||||
|
|
||||||
// todo handle multiple tables cases.
|
// todo handle multiple timeline cases. assume no timeline interweaving
|
||||||
static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperatorInfo* pInfo, SSDataBlock* pBlock) {
|
static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperatorInfo* pInfo, SSDataBlock* pBlock) {
|
||||||
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
|
||||||
SExprSupp* pSup = &pOperator->exprSupp;
|
SExprSupp* pSup = &pOperator->exprSupp;
|
||||||
|
@ -1935,12 +1961,13 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
|
||||||
// In case of ascending or descending order scan data, only one time window needs to be kepted for each table.
|
// In case of ascending or descending order scan data, only one time window needs to be kepted for each table.
|
||||||
TSKEY* tsList = (TSKEY*)pColInfoData->pData;
|
TSKEY* tsList = (TSKEY*)pColInfoData->pData;
|
||||||
for (int32_t j = 0; j < pBlock->info.rows; ++j) {
|
for (int32_t j = 0; j < pBlock->info.rows; ++j) {
|
||||||
if (pInfo->winSup.prevTs == INT64_MIN) {
|
if (gid != pRowSup->groupId || pInfo->winSup.prevTs == INT64_MIN) {
|
||||||
doKeepNewWindowStartInfo(pRowSup, tsList, j);
|
doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
|
||||||
doKeepTuple(pRowSup, tsList[j]);
|
doKeepTuple(pRowSup, tsList[j], gid);
|
||||||
} else if (tsList[j] - pRowSup->prevTs <= gap && (tsList[j] - pRowSup->prevTs) >= 0) {
|
} else if ((tsList[j] - pRowSup->prevTs >= 0) && tsList[j] - pRowSup->prevTs <= gap ||
|
||||||
|
(pRowSup->prevTs - tsList[j] >= 0 ) && (pRowSup->prevTs - tsList[j] <= gap)) {
|
||||||
// The gap is less than the threshold, so it belongs to current session window that has been opened already.
|
// The gap is less than the threshold, so it belongs to current session window that has been opened already.
|
||||||
doKeepTuple(pRowSup, tsList[j]);
|
doKeepTuple(pRowSup, tsList[j], gid);
|
||||||
if (j == 0 && pRowSup->startRowIndex != 0) {
|
if (j == 0 && pRowSup->startRowIndex != 0) {
|
||||||
pRowSup->startRowIndex = 0;
|
pRowSup->startRowIndex = 0;
|
||||||
}
|
}
|
||||||
|
@ -1963,8 +1990,8 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSessionAggOperator
|
||||||
pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
|
pRowSup->numOfRows, NULL, pBlock->info.rows, numOfOutput, TSDB_ORDER_ASC);
|
||||||
|
|
||||||
// here we start a new session window
|
// here we start a new session window
|
||||||
doKeepNewWindowStartInfo(pRowSup, tsList, j);
|
doKeepNewWindowStartInfo(pRowSup, tsList, j, gid);
|
||||||
doKeepTuple(pRowSup, tsList[j]);
|
doKeepTuple(pRowSup, tsList[j], gid);
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -2088,8 +2115,10 @@ static void doKeepNextRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock
|
||||||
pSliceInfo->isNextRowSet = true;
|
pSliceInfo->isNextRowSet = true;
|
||||||
}
|
}
|
||||||
|
|
||||||
static void doKeepLinearInfo(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock* pBlock, int32_t rowIndex) {
|
static void doKeepLinearInfo(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock* pBlock, int32_t rowIndex,
|
||||||
|
bool isLastRow) {
|
||||||
int32_t numOfCols = taosArrayGetSize(pBlock->pDataBlock);
|
int32_t numOfCols = taosArrayGetSize(pBlock->pDataBlock);
|
||||||
|
bool fillLastPoint = pSliceInfo->fillLastPoint;
|
||||||
for (int32_t i = 0; i < numOfCols; ++i) {
|
for (int32_t i = 0; i < numOfCols; ++i) {
|
||||||
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, i);
|
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, i);
|
||||||
SColumnInfoData* pTsCol = taosArrayGet(pBlock->pDataBlock, pSliceInfo->tsCol.slotId);
|
SColumnInfoData* pTsCol = taosArrayGet(pBlock->pDataBlock, pSliceInfo->tsCol.slotId);
|
||||||
|
@ -2097,16 +2126,22 @@ static void doKeepLinearInfo(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlo
|
||||||
|
|
||||||
// null data should not be kept since it can not be used to perform interpolation
|
// null data should not be kept since it can not be used to perform interpolation
|
||||||
if (!colDataIsNull_s(pColInfoData, i)) {
|
if (!colDataIsNull_s(pColInfoData, i)) {
|
||||||
int64_t startKey = *(int64_t*)colDataGetData(pTsCol, rowIndex);
|
if (isLastRow) {
|
||||||
int64_t endKey = *(int64_t*)colDataGetData(pTsCol, rowIndex + 1);
|
pLinearInfo->start.key = *(int64_t*)colDataGetData(pTsCol, rowIndex);
|
||||||
pLinearInfo->start.key = startKey;
|
memcpy(pLinearInfo->start.val, colDataGetData(pColInfoData, rowIndex), pLinearInfo->bytes);
|
||||||
pLinearInfo->end.key = endKey;
|
} else if (fillLastPoint) {
|
||||||
|
pLinearInfo->end.key = *(int64_t*)colDataGetData(pTsCol, rowIndex);
|
||||||
|
memcpy(pLinearInfo->end.val, colDataGetData(pColInfoData, rowIndex), pLinearInfo->bytes);
|
||||||
|
} else {
|
||||||
|
pLinearInfo->start.key = *(int64_t*)colDataGetData(pTsCol, rowIndex);
|
||||||
|
pLinearInfo->end.key = *(int64_t*)colDataGetData(pTsCol, rowIndex + 1);
|
||||||
|
|
||||||
char* val;
|
char* val;
|
||||||
val = colDataGetData(pColInfoData, rowIndex);
|
val = colDataGetData(pColInfoData, rowIndex);
|
||||||
memcpy(pLinearInfo->start.val, val, pLinearInfo->bytes);
|
memcpy(pLinearInfo->start.val, val, pLinearInfo->bytes);
|
||||||
val = colDataGetData(pColInfoData, rowIndex + 1);
|
val = colDataGetData(pColInfoData, rowIndex + 1);
|
||||||
memcpy(pLinearInfo->end.val, val, pLinearInfo->bytes);
|
memcpy(pLinearInfo->end.val, val, pLinearInfo->bytes);
|
||||||
|
}
|
||||||
|
|
||||||
pLinearInfo->hasNull = false;
|
pLinearInfo->hasNull = false;
|
||||||
} else {
|
} else {
|
||||||
|
@ -2114,6 +2149,8 @@ static void doKeepLinearInfo(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlo
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pSliceInfo->fillLastPoint = isLastRow ? true : false;
|
||||||
|
|
||||||
}
|
}
|
||||||
|
|
||||||
static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp* pExprSup,
|
static void genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp* pExprSup,
|
||||||
|
@ -2270,7 +2307,7 @@ static int32_t initFillLinearInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pB
|
||||||
}
|
}
|
||||||
|
|
||||||
pInfo->pLinearInfo = taosArrayInit(4, sizeof(SFillLinearInfo));
|
pInfo->pLinearInfo = taosArrayInit(4, sizeof(SFillLinearInfo));
|
||||||
if (pInfo->pNextRow == NULL) {
|
if (pInfo->pLinearInfo == NULL) {
|
||||||
return TSDB_CODE_OUT_OF_MEMORY;
|
return TSDB_CODE_OUT_OF_MEMORY;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@ -2284,15 +2321,20 @@ static int32_t initFillLinearInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pB
|
||||||
linearInfo.start.val = taosMemoryCalloc(1, pColInfo->info.bytes);
|
linearInfo.start.val = taosMemoryCalloc(1, pColInfo->info.bytes);
|
||||||
linearInfo.end.val = taosMemoryCalloc(1, pColInfo->info.bytes);
|
linearInfo.end.val = taosMemoryCalloc(1, pColInfo->info.bytes);
|
||||||
linearInfo.hasNull = false;
|
linearInfo.hasNull = false;
|
||||||
linearInfo.fillLastPoint = false;
|
|
||||||
linearInfo.type = pColInfo->info.type;
|
linearInfo.type = pColInfo->info.type;
|
||||||
linearInfo.bytes = pColInfo->info.bytes;
|
linearInfo.bytes = pColInfo->info.bytes;
|
||||||
taosArrayPush(pInfo->pLinearInfo, &linearInfo);
|
taosArrayPush(pInfo->pLinearInfo, &linearInfo);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
pInfo->fillLastPoint = false;
|
||||||
|
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
static bool needToFillLastPoint(STimeSliceOperatorInfo* pSliceInfo) {
|
||||||
|
return (pSliceInfo->fillLastPoint == true && pSliceInfo->fillType == TSDB_FILL_LINEAR);
|
||||||
|
}
|
||||||
|
|
||||||
static int32_t initKeeperInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock) {
|
static int32_t initKeeperInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock) {
|
||||||
int32_t code;
|
int32_t code;
|
||||||
code = initPrevRowsKeeper(pInfo, pBlock);
|
code = initPrevRowsKeeper(pInfo, pBlock);
|
||||||
|
@ -2357,6 +2399,23 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
|
||||||
for (int32_t i = 0; i < pBlock->info.rows; ++i) {
|
for (int32_t i = 0; i < pBlock->info.rows; ++i) {
|
||||||
int64_t ts = *(int64_t*)colDataGetData(pTsCol, i);
|
int64_t ts = *(int64_t*)colDataGetData(pTsCol, i);
|
||||||
|
|
||||||
|
if (i == 0 && needToFillLastPoint(pSliceInfo)) { // first row in current block
|
||||||
|
doKeepLinearInfo(pSliceInfo, pBlock, i, false);
|
||||||
|
while (pSliceInfo->current < ts && pSliceInfo->current <= pSliceInfo->win.ekey) {
|
||||||
|
genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pResBlock);
|
||||||
|
pSliceInfo->current =
|
||||||
|
taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
|
||||||
|
if (pResBlock->info.rows >= pResBlock->info.capacity) {
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (pSliceInfo->current > pSliceInfo->win.ekey) {
|
||||||
|
doSetOperatorCompleted(pOperator);
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
if (ts == pSliceInfo->current) {
|
if (ts == pSliceInfo->current) {
|
||||||
for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) {
|
for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) {
|
||||||
SExprInfo* pExprInfo = &pOperator->exprSupp.pExprInfo[j];
|
SExprInfo* pExprInfo = &pOperator->exprSupp.pExprInfo[j];
|
||||||
|
@ -2375,9 +2434,10 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
|
||||||
|
|
||||||
// for linear interpolation, always fill value between this and next points;
|
// for linear interpolation, always fill value between this and next points;
|
||||||
// if its the first point in data block, also fill values between previous(if there's any) and this point;
|
// if its the first point in data block, also fill values between previous(if there's any) and this point;
|
||||||
// if its the last point in data block, no need to fill, but reserve this point as the start value for next data block.
|
// if its the last point in data block, no need to fill, but reserve this point as the start value and do
|
||||||
|
// the interpolation when processing next data block.
|
||||||
if (pSliceInfo->fillType == TSDB_FILL_LINEAR) {
|
if (pSliceInfo->fillType == TSDB_FILL_LINEAR) {
|
||||||
doKeepLinearInfo(pSliceInfo, pBlock, i);
|
doKeepLinearInfo(pSliceInfo, pBlock, i, false);
|
||||||
pSliceInfo->current =
|
pSliceInfo->current =
|
||||||
taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
|
taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
|
||||||
if (i < pBlock->info.rows - 1) {
|
if (i < pBlock->info.rows - 1) {
|
||||||
|
@ -2397,6 +2457,9 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
} else {// it is the last row of current block
|
||||||
|
//store ts value as start, and calculate interp value when processing next block
|
||||||
|
doKeepLinearInfo(pSliceInfo, pBlock, i, true);
|
||||||
}
|
}
|
||||||
} else { // non-linear interpolation
|
} else { // non-linear interpolation
|
||||||
pSliceInfo->current =
|
pSliceInfo->current =
|
||||||
|
@ -2415,7 +2478,7 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
|
||||||
doKeepPrevRows(pSliceInfo, pBlock, i);
|
doKeepPrevRows(pSliceInfo, pBlock, i);
|
||||||
|
|
||||||
if (pSliceInfo->fillType == TSDB_FILL_LINEAR) {
|
if (pSliceInfo->fillType == TSDB_FILL_LINEAR) {
|
||||||
doKeepLinearInfo(pSliceInfo, pBlock, i);
|
doKeepLinearInfo(pSliceInfo, pBlock, i, false);
|
||||||
// no need to increate pSliceInfo->current here
|
// no need to increate pSliceInfo->current here
|
||||||
//pSliceInfo->current =
|
//pSliceInfo->current =
|
||||||
// taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
|
// taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
|
||||||
|
@ -2495,7 +2558,7 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
|
||||||
|
|
||||||
|
|
||||||
if (pSliceInfo->fillType == TSDB_FILL_LINEAR) {
|
if (pSliceInfo->fillType == TSDB_FILL_LINEAR) {
|
||||||
doKeepLinearInfo(pSliceInfo, pBlock, i);
|
doKeepLinearInfo(pSliceInfo, pBlock, i, false);
|
||||||
pSliceInfo->current =
|
pSliceInfo->current =
|
||||||
taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
|
taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
|
||||||
if (i < pBlock->info.rows - 1) {
|
if (i < pBlock->info.rows - 1) {
|
||||||
|
@ -2831,7 +2894,7 @@ STimeWindow getFinalTimeWindow(int64_t ts, SInterval* pInterval) {
|
||||||
}
|
}
|
||||||
|
|
||||||
static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBlock, uint64_t tableGroupId,
|
static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBlock, uint64_t tableGroupId,
|
||||||
SArray* pUpdated) {
|
SHashObj* pUpdatedMap) {
|
||||||
SStreamFinalIntervalOperatorInfo* pInfo = (SStreamFinalIntervalOperatorInfo*)pOperatorInfo->info;
|
SStreamFinalIntervalOperatorInfo* pInfo = (SStreamFinalIntervalOperatorInfo*)pOperatorInfo->info;
|
||||||
SResultRowInfo* pResultRowInfo = &(pInfo->binfo.resultRowInfo);
|
SResultRowInfo* pResultRowInfo = &(pInfo->binfo.resultRowInfo);
|
||||||
SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo;
|
SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo;
|
||||||
|
@ -2913,8 +2976,8 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc
|
||||||
forwardRows = getNumOfRowsInTimeWindow(&pSDataBlock->info, tsCols, startPos, nextWin.ekey, binarySearchForKey,
|
forwardRows = getNumOfRowsInTimeWindow(&pSDataBlock->info, tsCols, startPos, nextWin.ekey, binarySearchForKey,
|
||||||
NULL, TSDB_ORDER_ASC);
|
NULL, TSDB_ORDER_ASC);
|
||||||
}
|
}
|
||||||
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE && pUpdated) {
|
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE && pUpdatedMap) {
|
||||||
saveResultRow(pResult, tableGroupId, pUpdated);
|
saveWinResultRow(pResult, tableGroupId, pUpdatedMap);
|
||||||
setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
|
setResultBufPageDirty(pInfo->aggSup.pResultBuf, &pResultRowInfo->cur);
|
||||||
}
|
}
|
||||||
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
|
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
|
||||||
|
@ -3020,6 +3083,8 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
SStreamFinalIntervalOperatorInfo* pInfo = pOperator->info;
|
SStreamFinalIntervalOperatorInfo* pInfo = pOperator->info;
|
||||||
SOperatorInfo* downstream = pOperator->pDownstream[0];
|
SOperatorInfo* downstream = pOperator->pDownstream[0];
|
||||||
SArray* pUpdated = taosArrayInit(4, POINTER_BYTES);
|
SArray* pUpdated = taosArrayInit(4, POINTER_BYTES);
|
||||||
|
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_TIMESTAMP);
|
||||||
|
SHashObj* pUpdatedMap = taosHashInit(1024, hashFn, false, HASH_NO_LOCK);
|
||||||
TSKEY maxTs = INT64_MIN;
|
TSKEY maxTs = INT64_MIN;
|
||||||
|
|
||||||
SExprSupp* pSup = &pOperator->exprSupp;
|
SExprSupp* pSup = &pOperator->exprSupp;
|
||||||
|
@ -3077,7 +3142,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
|
||||||
if (pBlock == NULL) {
|
if (pBlock == NULL) {
|
||||||
clearSpecialDataBlock(pInfo->pUpdateRes);
|
clearSpecialDataBlock(pInfo->pUpdateRes);
|
||||||
removeDeleteResults(pUpdated, pInfo->pDelWins);
|
removeDeleteResults(pUpdatedMap, pInfo->pDelWins);
|
||||||
pOperator->status = OP_RES_TO_RETURN;
|
pOperator->status = OP_RES_TO_RETURN;
|
||||||
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
qDebug("%s return data", IS_FINAL_OP(pInfo) ? "interval final" : "interval semi");
|
||||||
break;
|
break;
|
||||||
|
@ -3104,7 +3169,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
taosArrayDestroy(pUpWins);
|
taosArrayDestroy(pUpWins);
|
||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
removeResults(pUpWins, pUpdated);
|
removeResults(pUpWins, pUpdatedMap);
|
||||||
copyDataBlock(pInfo->pUpdateRes, pBlock);
|
copyDataBlock(pInfo->pUpdateRes, pBlock);
|
||||||
// copyUpdateDataBlock(pInfo->pUpdateRes, pBlock, pInfo->primaryTsIndex);
|
// copyUpdateDataBlock(pInfo->pUpdateRes, pBlock, pInfo->primaryTsIndex);
|
||||||
pInfo->returnUpdate = true;
|
pInfo->returnUpdate = true;
|
||||||
|
@ -3122,15 +3187,15 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
pOperator->exprSupp.numOfExprs, pOperator->pTaskInfo, pUpdated);
|
pOperator->exprSupp.numOfExprs, pOperator->pTaskInfo, pUpdated);
|
||||||
continue;
|
continue;
|
||||||
}
|
}
|
||||||
removeResults(pInfo->pDelWins, pUpdated);
|
removeResults(pInfo->pDelWins, pUpdatedMap);
|
||||||
break;
|
break;
|
||||||
} else if (pBlock->info.type == STREAM_GET_ALL && IS_FINAL_OP(pInfo)) {
|
} else if (pBlock->info.type == STREAM_GET_ALL && IS_FINAL_OP(pInfo)) {
|
||||||
getAllIntervalWindow(pInfo->aggSup.pResultRowHashTable, pUpdated);
|
getAllIntervalWindow(pInfo->aggSup.pResultRowHashTable, pUpdatedMap);
|
||||||
continue;
|
continue;
|
||||||
} else if (pBlock->info.type == STREAM_RETRIEVE && !IS_FINAL_OP(pInfo)) {
|
} else if (pBlock->info.type == STREAM_RETRIEVE && !IS_FINAL_OP(pInfo)) {
|
||||||
SArray* pUpWins = taosArrayInit(8, sizeof(SWinRes));
|
SArray* pUpWins = taosArrayInit(8, sizeof(SWinRes));
|
||||||
doClearWindows(&pInfo->aggSup, pSup, &pInfo->interval, pOperator->exprSupp.numOfExprs, pBlock, pUpWins);
|
doClearWindows(&pInfo->aggSup, pSup, &pInfo->interval, pOperator->exprSupp.numOfExprs, pBlock, pUpWins);
|
||||||
removeResults(pUpWins, pUpdated);
|
removeResults(pUpWins, pUpdatedMap);
|
||||||
taosArrayDestroy(pUpWins);
|
taosArrayDestroy(pUpWins);
|
||||||
if (taosArrayGetSize(pUpdated) > 0) {
|
if (taosArrayGetSize(pUpdated) > 0) {
|
||||||
break;
|
break;
|
||||||
|
@ -3146,7 +3211,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
projectApplyFunctions(pExprSup->pExprInfo, pBlock, pBlock, pExprSup->pCtx, pExprSup->numOfExprs, NULL);
|
projectApplyFunctions(pExprSup->pExprInfo, pBlock, pBlock, pExprSup->pCtx, pExprSup->numOfExprs, NULL);
|
||||||
}
|
}
|
||||||
setInputDataBlock(pOperator, pSup->pCtx, pBlock, pInfo->order, MAIN_SCAN, true);
|
setInputDataBlock(pOperator, pSup->pCtx, pBlock, pInfo->order, MAIN_SCAN, true);
|
||||||
doHashInterval(pOperator, pBlock, pBlock->info.groupId, pUpdated);
|
doHashInterval(pOperator, pBlock, pBlock->info.groupId, pUpdatedMap);
|
||||||
if (IS_FINAL_OP(pInfo)) {
|
if (IS_FINAL_OP(pInfo)) {
|
||||||
int32_t chIndex = getChildIndex(pBlock);
|
int32_t chIndex = getChildIndex(pBlock);
|
||||||
int32_t size = taosArrayGetSize(pInfo->pChildren);
|
int32_t size = taosArrayGetSize(pInfo->pChildren);
|
||||||
|
@ -3171,12 +3236,19 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
|
||||||
pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, maxTs);
|
pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, maxTs);
|
||||||
if (IS_FINAL_OP(pInfo)) {
|
if (IS_FINAL_OP(pInfo)) {
|
||||||
closeIntervalWindow(pInfo->aggSup.pResultRowHashTable, &pInfo->twAggSup, &pInfo->interval, pInfo->pPullDataMap,
|
closeIntervalWindow(pInfo->aggSup.pResultRowHashTable, &pInfo->twAggSup, &pInfo->interval, pInfo->pPullDataMap,
|
||||||
pUpdated, pInfo->pRecycledPages, pInfo->aggSup.pResultBuf);
|
pUpdatedMap, pInfo->pRecycledPages, pInfo->aggSup.pResultBuf);
|
||||||
closeChildIntervalWindow(pInfo->pChildren, pInfo->twAggSup.maxTs);
|
closeChildIntervalWindow(pInfo->pChildren, pInfo->twAggSup.maxTs);
|
||||||
} else {
|
} else {
|
||||||
pInfo->binfo.pRes->info.watermark = pInfo->twAggSup.maxTs;
|
pInfo->binfo.pRes->info.watermark = pInfo->twAggSup.maxTs;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
void* pIte = NULL;
|
||||||
|
while ((pIte = taosHashIterate(pUpdatedMap, pIte)) != NULL) {
|
||||||
|
taosArrayPush(pUpdated, pIte);
|
||||||
|
}
|
||||||
|
taosHashCleanup(pUpdatedMap);
|
||||||
|
taosArraySort(pUpdated, resultrowComparAsc);
|
||||||
|
|
||||||
finalizeUpdatedResult(pOperator->exprSupp.numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pSup->rowEntryInfoOffset);
|
finalizeUpdatedResult(pOperator->exprSupp.numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pSup->rowEntryInfoOffset);
|
||||||
initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated);
|
initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated);
|
||||||
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
|
||||||
|
@ -3517,9 +3589,7 @@ SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream, SPh
|
||||||
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
|
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
|
||||||
pInfo->pStDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
|
pInfo->pStDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
|
||||||
pInfo->pDelIterator = NULL;
|
pInfo->pDelIterator = NULL;
|
||||||
// pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
|
pInfo->pDelRes = createSpecialDataBlock(STREAM_DELETE_RESULT);
|
||||||
-  pInfo->pDelRes = createOneDataBlock(pInfo->binfo.pRes, false);  // todo(liuyao) for delete
-  pInfo->pDelRes->info.type = STREAM_DELETE_RESULT;  // todo(liuyao) for delete
   pInfo->pChildren = NULL;
   pInfo->isFinal = false;
   pInfo->pPhyNode = pPhyNode;
@@ -3665,7 +3735,7 @@ SResultWindowInfo* getSessionTimeWindow(SStreamAggSupporter* pAggSup, TSKEY star
   return insertNewSessionWindow(pWinInfos, startTs, index + 1);
 }
 
-int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs, TSKEY* pEndTs, int32_t rows,
+int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs, TSKEY* pEndTs, uint64_t groupId,int32_t rows,
                                 int32_t start, int64_t gap, SHashObj* pStDeleted) {
   for (int32_t i = start; i < rows; ++i) {
     if (!isInWindow(pWinInfo, pStartTs[i], gap) && (!pEndTs || !isInWindow(pWinInfo, pEndTs[i], gap))) {
@@ -3673,7 +3743,8 @@ int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs, TS
     }
     if (pWinInfo->win.skey > pStartTs[i]) {
       if (pStDeleted && pWinInfo->isOutput) {
-        taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY));
+        SWinRes res = {.ts = pWinInfo->win.skey, .groupId = groupId};
+        taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinRes));
         pWinInfo->isOutput = false;
       }
       pWinInfo->win.skey = pStartTs[i];
@@ -3792,7 +3863,8 @@ void compactTimeWindow(SStreamSessionAggOperatorInfo* pInfo, int32_t startIndex,
     compactFunctions(pSup->pCtx, pInfo->pDummyCtx, numOfOutput, pTaskInfo);
     taosHashRemove(pStUpdated, &pWinInfo->pos, sizeof(SResultRowPosition));
     if (pWinInfo->isOutput) {
-      taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY));
+      SWinRes res = {.ts = pWinInfo->win.skey, .groupId = groupId};
+      taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinRes));
       pWinInfo->isOutput = false;
     }
     taosArrayRemove(pInfo->streamAggSup.pCurWins, i);
@@ -3842,7 +3914,7 @@ static void doStreamSessionAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSData
     int32_t winIndex = 0;
     SResultWindowInfo* pCurWin = getSessionTimeWindow(pAggSup, startTsCols[i], endTsCols[i], groupId, gap, &winIndex);
     winRows =
-        updateSessionWindowInfo(pCurWin, startTsCols, endTsCols, pSDataBlock->info.rows, i, pInfo->gap, pStDeleted);
+        updateSessionWindowInfo(pCurWin, startTsCols, endTsCols, groupId, pSDataBlock->info.rows, i, pInfo->gap, pStDeleted);
     code = doOneWindowAgg(pInfo, pSDataBlock, pCurWin, &pResult, i, winRows, numOfOutput, pOperator);
     if (code != TSDB_CODE_SUCCESS || pResult == NULL) {
       longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
@@ -3891,6 +3963,7 @@ static void doDeleteTimeWindows(SStreamAggSupporter* pAggSup, SSDataBlock* pBloc
     }
     deleteWindow(pAggSup->pCurWins, winIndex, fp);
     if (result) {
+      pCurWin->groupId = gpDatas[i];
       taosArrayPush(result, pCurWin);
     }
   }
@@ -3911,7 +3984,7 @@ static void doClearSessionWindows(SStreamAggSupporter* pAggSup, SExprSupp* pSup,
       step = 1;
       continue;
     }
-    step = updateSessionWindowInfo(pCurWin, tsCols, NULL, pBlock->info.rows, i, gap, NULL);
+    step = updateSessionWindowInfo(pCurWin, tsCols, NULL, 0, pBlock->info.rows, i, gap, NULL);
     ASSERT(isInWindow(pCurWin, tsCols[i], gap));
     doClearWindowImpl(&pCurWin->pos, pAggSup->pResultBuf, pSup, numOfOutput);
     if (result) {
@@ -3948,12 +4021,11 @@ void doBuildDeleteDataBlock(SHashObj* pStDeleted, SSDataBlock* pBlock, void** It
   blockDataEnsureCapacity(pBlock, size);
   size_t keyLen = 0;
   while (((*Ite) = taosHashIterate(pStDeleted, *Ite)) != NULL) {
-    SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX);
-    colDataAppend(pColInfoData, pBlock->info.rows, *Ite, false);
-    for (int32_t i = 1; i < taosArrayGetSize(pBlock->pDataBlock); i++) {
-      pColInfoData = taosArrayGet(pBlock->pDataBlock, i);
-      colDataAppendNULL(pColInfoData, pBlock->info.rows);
-    }
+    SWinRes* res = *Ite;
+    SColumnInfoData* pTsCol = taosArrayGet(pBlock->pDataBlock, START_TS_COLUMN_INDEX);
+    colDataAppend(pTsCol, pBlock->info.rows, (const char*)&res->ts, false);
+    SColumnInfoData* pGpCol = taosArrayGet(pBlock->pDataBlock, GROUPID_COLUMN_INDEX);
+    colDataAppend(pGpCol, pBlock->info.rows, (const char*)&res->groupId, false);
     pBlock->info.rows += 1;
     if (pBlock->info.rows + 1 >= pBlock->info.capacity) {
       break;
@@ -4080,7 +4152,8 @@ static void copyDeleteWindowInfo(SArray* pResWins, SHashObj* pStDeleted) {
   int32_t size = taosArrayGetSize(pResWins);
   for (int32_t i = 0; i < size; i++) {
     SResultWindowInfo* pWinInfo = taosArrayGet(pResWins, i);
-    taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY));
+    SWinRes res = {.ts = pWinInfo->win.skey, .groupId = pWinInfo->groupId};
+    taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &res, sizeof(SWinRes));
   }
 }
 
@@ -13,38 +13,37 @@
  * along with this program. If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include "os.h"
 #include "tsimplehash.h"
+#include "os.h"
 #include "taoserror.h"
 
 #define SHASH_DEFAULT_LOAD_FACTOR 0.75
-#define HASH_MAX_CAPACITY (1024*1024*16)
+#define HASH_MAX_CAPACITY (1024 * 1024 * 16)
 #define SHASH_NEED_RESIZE(_h) ((_h)->size >= (_h)->capacity * SHASH_DEFAULT_LOAD_FACTOR)
 
-#define GET_SHASH_NODE_KEY(_n, _dl) ((char*)(_n) + sizeof(SHNode) + (_dl))
-#define GET_SHASH_NODE_DATA(_n) ((char*)(_n) + sizeof(SHNode))
+#define GET_SHASH_NODE_KEY(_n, _dl) ((char *)(_n) + sizeof(SHNode) + (_dl))
+#define GET_SHASH_NODE_DATA(_n) ((char *)(_n) + sizeof(SHNode))
 
 #define HASH_INDEX(v, c) ((v) & ((c)-1))
-#define HASH_NEED_RESIZE(_h) ((_h)->size >= (_h)->capacity * SHASH_DEFAULT_LOAD_FACTOR)
 
 #define FREE_HASH_NODE(_n)   \
   do {                       \
     taosMemoryFreeClear(_n); \
   } while (0);
 
 typedef struct SHNode {
   struct SHNode *next;
   char data[];
 } SHNode;
 
 struct SSHashObj {
   SHNode **hashList;
   size_t capacity;     // number of slots
   int64_t size;        // number of elements in hash table
   _hash_fn_t hashFp;   // hash function
   _equal_fn_t equalFp; // equal function
   int32_t keyLen;
   int32_t dataLen;
 };
 
 static FORCE_INLINE int32_t taosHashCapacity(int32_t length) {
@@ -62,7 +61,7 @@ SSHashObj *tSimpleHashInit(size_t capacity, _hash_fn_t fn, size_t keyLen, size_t
     capacity = 4;
   }
 
-  SSHashObj* pHashObj = (SSHashObj*) taosMemoryCalloc(1, sizeof(SSHashObj));
+  SSHashObj *pHashObj = (SSHashObj *)taosMemoryCalloc(1, sizeof(SSHashObj));
   if (pHashObj == NULL) {
     terrno = TSDB_CODE_OUT_OF_MEMORY;
     return NULL;
@@ -72,7 +71,7 @@ SSHashObj *tSimpleHashInit(size_t capacity, _hash_fn_t fn, size_t keyLen, size_t
   pHashObj->capacity = taosHashCapacity((int32_t)capacity);
 
   pHashObj->equalFp = memcmp;
   pHashObj->hashFp = fn;
   ASSERT((pHashObj->capacity & (pHashObj->capacity - 1)) == 0);
 
   pHashObj->keyLen = keyLen;
@@ -91,7 +90,7 @@ int32_t tSimpleHashGetSize(const SSHashObj *pHashObj) {
   if (pHashObj == NULL) {
     return 0;
   }
-  return (int32_t)atomic_load_64((int64_t*)&pHashObj->size);
+  return (int32_t)atomic_load_64((int64_t *)&pHashObj->size);
 }
 
 static SHNode *doCreateHashNode(const void *key, size_t keyLen, const void *pData, size_t dsize, uint32_t hashVal) {
@@ -108,41 +107,42 @@ static SHNode *doCreateHashNode(const void *key, size_t keyLen, const void *pDat
 }
 
 static void taosHashTableResize(SSHashObj *pHashObj) {
-  if (!HASH_NEED_RESIZE(pHashObj)) {
+  if (!SHASH_NEED_RESIZE(pHashObj)) {
     return;
   }
 
   int32_t newCapacity = (int32_t)(pHashObj->capacity << 1u);
   if (newCapacity > HASH_MAX_CAPACITY) {
     // uDebug("current capacity:%zu, maximum capacity:%d, no resize applied due to limitation is reached",
     //        pHashObj->capacity, HASH_MAX_CAPACITY);
     return;
   }
 
   int64_t st = taosGetTimestampUs();
   void *pNewEntryList = taosMemoryRealloc(pHashObj->hashList, sizeof(void *) * newCapacity);
   if (pNewEntryList == NULL) {
     // qWarn("hash resize failed due to out of memory, capacity remain:%zu", pHashObj->capacity);
     return;
   }
 
   size_t inc = newCapacity - pHashObj->capacity;
-  memset((char*)pNewEntryList + pHashObj->capacity * sizeof(void*), 0, inc);
+  memset((char *)pNewEntryList + pHashObj->capacity * sizeof(void *), 0, inc);
 
   pHashObj->hashList = pNewEntryList;
   pHashObj->capacity = newCapacity;
 
   for (int32_t idx = 0; idx < pHashObj->capacity; ++idx) {
-    SHNode* pNode = pHashObj->hashList[idx];
-    SHNode *pNext;
-    SHNode *pPrev = NULL;
-
     if (pNode == NULL) {
       continue;
     }
 
+    SHNode *pNode = pHashObj->hashList[idx];
+    SHNode *pNext;
+    SHNode *pPrev = NULL;
 
     while (pNode != NULL) {
-      void* key = GET_SHASH_NODE_KEY(pNode, pHashObj->dataLen);
+      void *key = GET_SHASH_NODE_KEY(pNode, pHashObj->dataLen);
       uint32_t hashVal = (*pHashObj->hashFp)(key, (uint32_t)pHashObj->dataLen);
 
       int32_t newIdx = HASH_INDEX(hashVal, pHashObj->capacity);
@@ -166,8 +166,9 @@ static void taosHashTableResize(SSHashObj *pHashObj) {
 
   int64_t et = taosGetTimestampUs();
 
-  // uDebug("hash table resize completed, new capacity:%d, load factor:%f, elapsed time:%fms", (int32_t)pHashObj->capacity,
-  //        ((double)pHashObj->size) / pHashObj->capacity, (et - st) / 1000.0);
+  // uDebug("hash table resize completed, new capacity:%d, load factor:%f, elapsed time:%fms",
+  //        (int32_t)pHashObj->capacity,
+  //        ((double)pHashObj->size) / pHashObj->capacity, (et - st) / 1000.0);
 }
 
 int32_t tSimpleHashPut(SSHashObj *pHashObj, const void *key, const void *data) {
@@ -210,7 +211,7 @@ int32_t tSimpleHashPut(SSHashObj *pHashObj, const void *key, const void *data) {
     pNewNode->next = pHashObj->hashList[slot];
     pHashObj->hashList[slot] = pNewNode;
     atomic_add_fetch_64(&pHashObj->size, 1);
-  } else { //update data
+  } else {  // update data
     memcpy(GET_SHASH_NODE_DATA(pNode), data, pHashObj->dataLen);
   }
 
@@ -230,9 +231,7 @@ static FORCE_INLINE SHNode *doSearchInEntryList(SSHashObj *pHashObj, const void
   return pNode;
 }
 
-static FORCE_INLINE bool taosHashTableEmpty(const SSHashObj *pHashObj) {
-  return tSimpleHashGetSize(pHashObj) == 0;
-}
+static FORCE_INLINE bool taosHashTableEmpty(const SSHashObj *pHashObj) { return tSimpleHashGetSize(pHashObj) == 0; }
 
 void *tSimpleHashGet(SSHashObj *pHashObj, const void *key) {
   if (pHashObj == NULL || taosHashTableEmpty(pHashObj) || key == NULL) {
@@ -299,9 +298,9 @@ size_t tSimpleHashGetMemSize(const SSHashObj *pHashObj) {
   return (pHashObj->capacity * sizeof(void *)) + sizeof(SHNode) * tSimpleHashGetSize(pHashObj) + sizeof(SSHashObj);
 }
 
-void *tSimpleHashGetKey(const SSHashObj* pHashObj, void *data, size_t* keyLen) {
+void *tSimpleHashGetKey(const SSHashObj *pHashObj, void *data, size_t *keyLen) {
   int32_t offset = offsetof(SHNode, data);
-  SHNode *node = ((SHNode*)(char*)data - offset);
+  SHNode *node = ((SHNode *)(char *)data - offset);
   if (keyLen != NULL) {
     *keyLen = pHashObj->keyLen;
   }
 
@@ -49,6 +49,7 @@ extern "C" {
 #define FUNC_MGT_MULTI_ROWS_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(20)
 #define FUNC_MGT_KEEP_ORDER_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(21)
 #define FUNC_MGT_CUMULATIVE_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(22)
+#define FUNC_MGT_FORBID_STABLE_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(23)
 
 #define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)
 
@@ -2287,7 +2287,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
   {
     .name = "interp",
     .type = FUNCTION_TYPE_INTERP,
-    .classification = FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_INTERVAL_INTERPO_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
+    .classification = FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_INTERVAL_INTERPO_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC |
+                      FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_STABLE_FUNC,
     .translateFunc = translateInterp,
     .getEnvFunc = getSelectivityFuncEnv,
     .initFunc = functionSetup,
 
@@ -498,8 +498,7 @@ int32_t functionFinalizeWithResultBuf(SqlFunctionCtx* pCtx, SSDataBlock* pBlock,
   SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
 
   SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
-  pResInfo->isNullRes = (pResInfo->numOfRes == 0) ? 1 : 0;
-  cleanupResultRowEntry(pResInfo);
+  pResInfo->isNullRes = (pResInfo->isNullRes == 1) ? 1 : (pResInfo->numOfRes == 0);
 
   char* in = finalResult;
   colDataAppend(pCol, pBlock->info.rows, in, pResInfo->isNullRes);
@@ -749,6 +748,7 @@ int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
     pDBuf->dsum += pSBuf->dsum;
   }
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -1747,6 +1747,7 @@ int32_t minMaxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int3
     }
   }
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -2122,6 +2123,7 @@ int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
   }
   pDBuf->count += pSBuf->count;
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -2312,6 +2314,7 @@ int32_t leastSQRCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
   pDparam[1][2] += pSparam[1][2];
   pDBuf->num += pSBuf->num;
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -2708,6 +2711,7 @@ int32_t apercentileCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx)
 
   apercentileTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -3891,6 +3895,7 @@ int32_t spreadCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
   SSpreadInfo* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
   spreadTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -4063,6 +4068,7 @@ int32_t elapsedCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
 
   elapsedTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -4380,6 +4386,7 @@ int32_t histogramCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
 
   histogramTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -4577,6 +4584,7 @@ int32_t hllCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
 
   hllTransferInfo(pSBuf, pDBuf);
   pDResInfo->numOfRes = TMAX(pDResInfo->numOfRes, pSResInfo->numOfRes);
+  pDResInfo->isNullRes &= pSResInfo->isNullRes;
   return TSDB_CODE_SUCCESS;
 }
 
@@ -212,6 +212,8 @@ bool fmIsKeepOrderFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, F
 
 bool fmIsCumulativeFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_CUMULATIVE_FUNC); }
 
+bool fmIsForbidSuperTableFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_FORBID_STABLE_FUNC); }
+
 bool fmIsInterpFunc(int32_t funcId) {
   if (funcId < 0 || funcId >= funcMgtBuiltinsNum) {
     return false;
 
@@ -37,17 +37,6 @@ int32_t getNumOfResult(SqlFunctionCtx* pCtx, int32_t num, SSDataBlock* pResBlock
   int32_t maxRows = 0;
 
   for (int32_t j = 0; j < num; ++j) {
-#if 0
-    int32_t id = pCtx[j].functionId;
-
-    /*
-     * ts, tag, tagprj function can not decide the output number of current query
-     * the number of output result is decided by main output
-     */
-    if (id == FUNCTION_TS || id == FUNCTION_TAG || id == FUNCTION_TAGPRJ) {
-      continue;
-    }
-#endif
     SResultRowEntryInfo *pResInfo = GET_RES_INFO(&pCtx[j]);
     if (pResInfo != NULL && maxRows < pResInfo->numOfRes) {
       maxRows = pResInfo->numOfRes;
 
@@ -1192,7 +1192,10 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
       pBuilder->hasNone = true;
     }
 
+    tdSRowEnd(pBuilder);
+
     *gotRow = true;
 
 #ifdef TD_DEBUG_PRINT_ROW
     STSchema* pSTSchema = tdGetSTSChemaFromSSChema(schema, spd->numOfCols, 1);
     tdSRowPrint(row, pSTSchema, __func__);
@@ -1201,7 +1204,6 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
   }
 
   // *len = pBuilder->extendedRowSize;
-  tdSRowEnd(pBuilder);
   return TSDB_CODE_SUCCESS;
 }
 
@@ -1270,6 +1270,25 @@ static int32_t translateRepeatScanFunc(STranslateContext* pCxt, SFunctionNode* p
   return TSDB_CODE_SUCCESS;
 }
 
+static int32_t translateForbidSuperTableFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
+  if (!fmIsForbidSuperTableFunc(pFunc->funcId)) {
+    return TSDB_CODE_SUCCESS;
+  }
+  if (!isSelectStmt(pCxt->pCurrStmt)) {
+    return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_ONLY_SUPPORT_SINGLE_TABLE,
+                                   "%s is only supported in single table query", pFunc->functionName);
+  }
+  SSelectStmt* pSelect = (SSelectStmt*)pCxt->pCurrStmt;
+  SNode* pTable = pSelect->pFromTable;
+  if ((NULL != pTable && (QUERY_NODE_REAL_TABLE != nodeType(pTable) ||
+                          (TSDB_CHILD_TABLE != ((SRealTableNode*)pTable)->pMeta->tableType &&
+                           TSDB_NORMAL_TABLE != ((SRealTableNode*)pTable)->pMeta->tableType)))) {
+    return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_ONLY_SUPPORT_SINGLE_TABLE,
+                                   "%s is only supported in single table query", pFunc->functionName);
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
 static bool isStar(SNode* pNode) {
   return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' == ((SColumnNode*)pNode)->tableAlias[0]) &&
          (0 == strcmp(((SColumnNode*)pNode)->colName, "*"));
@@ -1426,6 +1445,9 @@ static int32_t rewriteSystemInfoFunc(STranslateContext* pCxt, SNode** pNode) {
 
 static int32_t translateNoramlFunction(STranslateContext* pCxt, SFunctionNode* pFunc) {
   int32_t code = translateAggFunc(pCxt, pFunc);
+  if (TSDB_CODE_SUCCESS == code) {
+    code = translateForbidSuperTableFunc(pCxt, pFunc);
+  }
   if (TSDB_CODE_SUCCESS == code) {
     code = translateScanPseudoColumnFunc(pCxt, pFunc);
   }
 
@@ -124,7 +124,7 @@ SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t waterma
   }
   pInfo->numBuckets = DEFAULT_BUCKET_SIZE;
   pInfo->pCloseWinSBF = NULL;
-  _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
+  _hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_UBIGINT);
   pInfo->pMap = taosHashInit(DEFAULT_MAP_CAPACITY, hashFn, true, HASH_NO_LOCK);
   pInfo->maxVersion = 0;
   pInfo->scanGroupId = 0;
 
@@ -163,6 +163,7 @@ typedef struct SSyncNode {
   bool changing;
 
   int64_t startTime;
+  int64_t leaderTime;
   int64_t lastReplicateTime;
 
 } SSyncNode;
 
@@ -31,6 +31,29 @@ extern "C" {
 
 #define MAX_CONFIG_INDEX_COUNT 512
 
+// SRaftCfgIndex ------------------------------------------
+typedef struct SRaftCfgIndex {
+  TdFilePtr pFile;
+  char path[TSDB_FILENAME_LEN * 2];
+
+  SyncIndex configIndexArr[MAX_CONFIG_INDEX_COUNT];
+  int32_t configIndexCount;
+} SRaftCfgIndex;
+
+SRaftCfgIndex *raftCfgIndexOpen(const char *path);
+int32_t raftCfgIndexClose(SRaftCfgIndex *pRaftCfgIndex);
+int32_t raftCfgIndexPersist(SRaftCfgIndex *pRaftCfgIndex);
+int32_t raftCfgIndexAddConfigIndex(SRaftCfgIndex *pRaftCfgIndex, SyncIndex configIndex);
+
+cJSON *raftCfgIndex2Json(SRaftCfgIndex *pRaftCfgIndex);
+char *raftCfgIndex2Str(SRaftCfgIndex *pRaftCfgIndex);
+int32_t raftCfgIndexFromJson(const cJSON *pRoot, SRaftCfgIndex *pRaftCfgIndex);
+int32_t raftCfgIndexFromStr(const char *s, SRaftCfgIndex *pRaftCfgIndex);
+
+int32_t raftCfgIndexCreateFile(const char *path);
+
+// ---------------------------------------------------------
+
 typedef struct SRaftCfg {
   SSyncCfg cfg;
   TdFilePtr pFile;
@@ -50,14 +73,14 @@ int32_t raftCfgClose(SRaftCfg *pRaftCfg);
 int32_t raftCfgPersist(SRaftCfg *pRaftCfg);
 int32_t raftCfgAddConfigIndex(SRaftCfg *pRaftCfg, SyncIndex configIndex);
 
-cJSON * syncCfg2Json(SSyncCfg *pSyncCfg);
-char * syncCfg2Str(SSyncCfg *pSyncCfg);
-char * syncCfg2SimpleStr(SSyncCfg *pSyncCfg);
+cJSON *syncCfg2Json(SSyncCfg *pSyncCfg);
+char *syncCfg2Str(SSyncCfg *pSyncCfg);
+char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg);
 int32_t syncCfgFromJson(const cJSON *pRoot, SSyncCfg *pSyncCfg);
 int32_t syncCfgFromStr(const char *s, SSyncCfg *pSyncCfg);
 
-cJSON * raftCfg2Json(SRaftCfg *pRaftCfg);
-char * raftCfg2Str(SRaftCfg *pRaftCfg);
+cJSON *raftCfg2Json(SRaftCfg *pRaftCfg);
+char *raftCfg2Str(SRaftCfg *pRaftCfg);
 int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg);
 int32_t raftCfgFromStr(const char *s, SRaftCfg *pRaftCfg);
 
@@ -82,6 +105,11 @@ void raftCfgPrint2(char *s, SRaftCfg *pCfg);
 void raftCfgLog(SRaftCfg *pCfg);
 void raftCfgLog2(char *s, SRaftCfg *pCfg);
 
+void raftCfgIndexPrint(SRaftCfgIndex *pCfg);
+void raftCfgIndexPrint2(char *s, SRaftCfgIndex *pCfg);
+void raftCfgIndexLog(SRaftCfgIndex *pCfg);
+void raftCfgIndexLog2(char *s, SRaftCfgIndex *pCfg);
+
 #ifdef __cplusplus
 }
 #endif
 
@@ -32,9 +32,9 @@ typedef struct SRespStub {
 } SRespStub;
 
 typedef struct SSyncRespMgr {
-  SHashObj * pRespHash;
+  SHashObj *pRespHash;
   int64_t ttl;
-  void * data;
+  void *data;
   TdThreadMutex mutex;
   uint64_t seqNum;
 } SSyncRespMgr;
@@ -46,7 +46,8 @@ int32_t syncRespMgrDel(SSyncRespMgr *pObj, uint64_t index);
 int32_t syncRespMgrGet(SSyncRespMgr *pObj, uint64_t index, SRespStub *pStub);
 int32_t syncRespMgrGetAndDel(SSyncRespMgr *pObj, uint64_t index, SRespStub *pStub);
 void syncRespClean(SSyncRespMgr *pObj);
-void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl);
+void syncRespCleanRsp(SSyncRespMgr *pObj);
+void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl, bool rsp);
 
 #ifdef __cplusplus
 }
 
@@ -1100,6 +1100,7 @@ SSyncNode* syncNodeOpen(const SSyncInfo* pOldSyncInfo) {

   int64_t timeNow = taosGetTimestampMs();
   pSyncNode->startTime = timeNow;
+  pSyncNode->leaderTime = timeNow;
   pSyncNode->lastReplicateTime = timeNow;

   syncNodeEventLog(pSyncNode, "sync open");

@@ -2015,6 +2016,8 @@ void syncNodeUpdateTermWithoutStepDown(SSyncNode* pSyncNode, SyncTerm term) {
   }
 }

+void syncNodeLeaderChangeRsp(SSyncNode* pSyncNode) { syncRespCleanRsp(pSyncNode->pSyncRespMgr); }
+
 void syncNodeBecomeFollower(SSyncNode* pSyncNode, const char* debugStr) {
   // maybe clear leader cache
   if (pSyncNode->state == TAOS_SYNC_STATE_LEADER) {

@@ -2028,6 +2031,9 @@ void syncNodeBecomeFollower(SSyncNode* pSyncNode, const char* debugStr) {
   // reset elect timer
   syncNodeResetElectTimer(pSyncNode);

+  // send rsp to client
+  syncNodeLeaderChangeRsp(pSyncNode);
+
   // call back
   if (pSyncNode->pFsm != NULL && pSyncNode->pFsm->FpBecomeFollowerCb != NULL) {
     pSyncNode->pFsm->FpBecomeFollowerCb(pSyncNode->pFsm);

@@ -2068,6 +2074,8 @@ void syncNodeBecomeFollower(SSyncNode* pSyncNode, const char* debugStr) {
 // /\ UNCHANGED <<messages, currentTerm, votedFor, candidateVars, logVars>>
 //
 void syncNodeBecomeLeader(SSyncNode* pSyncNode, const char* debugStr) {
+  pSyncNode->leaderTime = taosGetTimestampMs();
+
   // reset restoreFinish
   pSyncNode->restoreFinish = false;

@@ -2954,8 +2962,11 @@ int32_t syncNodeCommit(SSyncNode* ths, SyncIndex beginIndex, SyncIndex endIndex,
         }
         ths->restoreFinish = true;

+        int64_t restoreDelay = taosGetTimestampMs() - ths->leaderTime;
+
         char eventLog[128];
-        snprintf(eventLog, sizeof(eventLog), "restore finish, index:%" PRId64, pEntry->index);
+        snprintf(eventLog, sizeof(eventLog), "restore finish, index:%" PRId64 ", elapsed:%" PRId64 " ms, ",
+                 pEntry->index, restoreDelay);
         syncNodeEventLog(ths, eventLog);
       }
     }
@@ -18,6 +18,149 @@
 #include "syncEnv.h"
 #include "syncUtil.h"

+// file must already exist!
+SRaftCfgIndex *raftCfgIndexOpen(const char *path) {
+  SRaftCfgIndex *pRaftCfgIndex = taosMemoryMalloc(sizeof(SRaftCfgIndex));
+  snprintf(pRaftCfgIndex->path, sizeof(pRaftCfgIndex->path), "%s", path);
+
+  pRaftCfgIndex->pFile = taosOpenFile(pRaftCfgIndex->path, TD_FILE_READ | TD_FILE_WRITE);
+  ASSERT(pRaftCfgIndex->pFile != NULL);
+
+  taosLSeekFile(pRaftCfgIndex->pFile, 0, SEEK_SET);
+
+  int32_t bufLen = MAX_CONFIG_INDEX_COUNT * 16;
+  char   *pBuf = taosMemoryMalloc(bufLen);
+  memset(pBuf, 0, bufLen);
+  int64_t len = taosReadFile(pRaftCfgIndex->pFile, pBuf, bufLen);
+  ASSERT(len > 0);
+
+  int32_t ret = raftCfgIndexFromStr(pBuf, pRaftCfgIndex);
+  ASSERT(ret == 0);
+
+  taosMemoryFree(pBuf);
+
+  return pRaftCfgIndex;
+}
+
+int32_t raftCfgIndexClose(SRaftCfgIndex *pRaftCfgIndex) {
+  if (pRaftCfgIndex != NULL) {
+    int64_t ret = taosCloseFile(&(pRaftCfgIndex->pFile));
+    ASSERT(ret == 0);
+    taosMemoryFree(pRaftCfgIndex);
+  }
+  return 0;
+}
+
+int32_t raftCfgIndexPersist(SRaftCfgIndex *pRaftCfgIndex) {
+  ASSERT(pRaftCfgIndex != NULL);
+
+  char *s = raftCfgIndex2Str(pRaftCfgIndex);
+  taosLSeekFile(pRaftCfgIndex->pFile, 0, SEEK_SET);
+
+  int64_t ret = taosWriteFile(pRaftCfgIndex->pFile, s, strlen(s) + 1);
+  ASSERT(ret == strlen(s) + 1);
+
+  taosMemoryFree(s);
+  taosFsyncFile(pRaftCfgIndex->pFile);
+  return 0;
+}
+
+int32_t raftCfgIndexAddConfigIndex(SRaftCfgIndex *pRaftCfgIndex, SyncIndex configIndex) {
+  ASSERT(pRaftCfgIndex->configIndexCount < MAX_CONFIG_INDEX_COUNT);
+  (pRaftCfgIndex->configIndexArr)[pRaftCfgIndex->configIndexCount] = configIndex;
+  ++(pRaftCfgIndex->configIndexCount);
+  return 0;
+}
+
+cJSON *raftCfgIndex2Json(SRaftCfgIndex *pRaftCfgIndex) {
+  cJSON *pRoot = cJSON_CreateObject();
+
+  cJSON_AddNumberToObject(pRoot, "configIndexCount", pRaftCfgIndex->configIndexCount);
+  cJSON *pIndexArr = cJSON_CreateArray();
+  cJSON_AddItemToObject(pRoot, "configIndexArr", pIndexArr);
+  for (int i = 0; i < pRaftCfgIndex->configIndexCount; ++i) {
+    char buf64[128];
+    snprintf(buf64, sizeof(buf64), "%" PRId64, (pRaftCfgIndex->configIndexArr)[i]);
+    cJSON *pIndexObj = cJSON_CreateObject();
+    cJSON_AddStringToObject(pIndexObj, "index", buf64);
+    cJSON_AddItemToArray(pIndexArr, pIndexObj);
+  }
+
+  cJSON *pJson = cJSON_CreateObject();
+  cJSON_AddItemToObject(pJson, "SRaftCfgIndex", pRoot);
+  return pJson;
+}
+
+char *raftCfgIndex2Str(SRaftCfgIndex *pRaftCfgIndex) {
+  cJSON *pJson = raftCfgIndex2Json(pRaftCfgIndex);
+  char  *serialized = cJSON_Print(pJson);
+  cJSON_Delete(pJson);
+  return serialized;
+}
+
+int32_t raftCfgIndexFromJson(const cJSON *pRoot, SRaftCfgIndex *pRaftCfgIndex) {
+  cJSON *pJson = cJSON_GetObjectItem(pRoot, "SRaftCfgIndex");
+
+  cJSON *pJsonConfigIndexCount = cJSON_GetObjectItem(pJson, "configIndexCount");
+  pRaftCfgIndex->configIndexCount = cJSON_GetNumberValue(pJsonConfigIndexCount);
+
+  cJSON *pIndexArr = cJSON_GetObjectItem(pJson, "configIndexArr");
+  int    arraySize = cJSON_GetArraySize(pIndexArr);
+  ASSERT(arraySize == pRaftCfgIndex->configIndexCount);
+
+  memset(pRaftCfgIndex->configIndexArr, 0, sizeof(pRaftCfgIndex->configIndexArr));
+  for (int i = 0; i < arraySize; ++i) {
+    cJSON *pIndexObj = cJSON_GetArrayItem(pIndexArr, i);
+    ASSERT(pIndexObj != NULL);
+
+    cJSON *pIndex = cJSON_GetObjectItem(pIndexObj, "index");
+    ASSERT(cJSON_IsString(pIndex));
+    (pRaftCfgIndex->configIndexArr)[i] = atoll(pIndex->valuestring);
+  }
+
+  return 0;
+}
+
+int32_t raftCfgIndexFromStr(const char *s, SRaftCfgIndex *pRaftCfgIndex) {
+  cJSON *pRoot = cJSON_Parse(s);
+  ASSERT(pRoot != NULL);
+
+  int32_t ret = raftCfgIndexFromJson(pRoot, pRaftCfgIndex);
+  ASSERT(ret == 0);
+
+  cJSON_Delete(pRoot);
+  return 0;
+}
+
+int32_t raftCfgIndexCreateFile(const char *path) {
+  TdFilePtr pFile = taosOpenFile(path, TD_FILE_CREATE | TD_FILE_WRITE);
+  if (pFile == NULL) {
+    int32_t     err = terrno;
+    const char *errStr = tstrerror(err);
+    int32_t     sysErr = errno;
+    const char *sysErrStr = strerror(errno);
+    sError("create raft cfg index file error, err:%d %X, msg:%s, syserr:%d, sysmsg:%s", err, err, errStr, sysErr,
+           sysErrStr);
+    ASSERT(0);
+
+    return -1;
+  }
+
+  SRaftCfgIndex raftCfgIndex;
+  memset(raftCfgIndex.configIndexArr, 0, sizeof(raftCfgIndex.configIndexArr));
+  raftCfgIndex.configIndexCount = 1;
+  raftCfgIndex.configIndexArr[0] = -1;
+
+  char   *s = raftCfgIndex2Str(&raftCfgIndex);
+  int64_t ret = taosWriteFile(pFile, s, strlen(s) + 1);
+  ASSERT(ret == strlen(s) + 1);
+
+  taosMemoryFree(s);
+  taosCloseFile(&pFile);
+  return 0;
+}
+
+// ---------------------------------------
 // file must already exist!
 SRaftCfg *raftCfgOpen(const char *path) {
   SRaftCfg *pCfg = taosMemoryMalloc(sizeof(SRaftCfg));
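Editor's note: `raftCfgIndex2Json` above nests the index list under an `"SRaftCfgIndex"` key and stores each int64 index as a decimal string. A freshly created `raft_config_index.json` (`raftCfgIndexCreateFile` writes `configIndexCount = 1` with a single `-1` entry) would therefore look roughly like the sketch below, inferred from the serialization code rather than captured from a running node (cJSON's exact whitespace may differ):

```json
{
  "SRaftCfgIndex": {
    "configIndexCount": 1,
    "configIndexArr": [
      { "index": "-1" }
    ]
  }
}
```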
@@ -101,7 +244,7 @@ cJSON *syncCfg2Json(SSyncCfg *pSyncCfg) {

 char *syncCfg2Str(SSyncCfg *pSyncCfg) {
   cJSON *pJson = syncCfg2Json(pSyncCfg);
-  char * serialized = cJSON_Print(pJson);
+  char *serialized = cJSON_Print(pJson);
   cJSON_Delete(pJson);
   return serialized;
 }

@@ -109,7 +252,7 @@ char *syncCfg2Str(SSyncCfg *pSyncCfg) {
 char *syncCfg2SimpleStr(SSyncCfg *pSyncCfg) {
   if (pSyncCfg != NULL) {
     int32_t len = 512;
-    char * s = taosMemoryMalloc(len);
+    char *s = taosMemoryMalloc(len);
     memset(s, 0, len);

     snprintf(s, len, "{r-num:%d, my:%d, ", pSyncCfg->replicaNum, pSyncCfg->myIndex);

@@ -206,7 +349,7 @@ cJSON *raftCfg2Json(SRaftCfg *pRaftCfg) {

 char *raftCfg2Str(SRaftCfg *pRaftCfg) {
   cJSON *pJson = raftCfg2Json(pRaftCfg);
-  char * serialized = cJSON_Print(pJson);
+  char *serialized = cJSON_Print(pJson);
   cJSON_Delete(pJson);
   return serialized;
 }

@@ -285,7 +428,7 @@ int32_t raftCfgFromJson(const cJSON *pRoot, SRaftCfg *pRaftCfg) {
     (pRaftCfg->configIndexArr)[i] = atoll(pIndex->valuestring);
   }

-  cJSON * pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
+  cJSON *pJsonSyncCfg = cJSON_GetObjectItem(pJson, "SSyncCfg");
   int32_t code = syncCfgFromJson(pJsonSyncCfg, &(pRaftCfg->cfg));
   ASSERT(code == 0);
@@ -361,3 +504,30 @@ void raftCfgLog2(char *s, SRaftCfg *pCfg) {
   sTrace("raftCfgLog2 | len:%" PRIu64 " | %s | %s", strlen(serialized), s, serialized);
   taosMemoryFree(serialized);
 }
+
+// ---------
+void raftCfgIndexPrint(SRaftCfgIndex *pCfg) {
+  char *serialized = raftCfgIndex2Str(pCfg);
+  printf("raftCfgIndexPrint | len:%" PRIu64 " | %s \n", strlen(serialized), serialized);
+  fflush(NULL);
+  taosMemoryFree(serialized);
+}
+
+void raftCfgIndexPrint2(char *s, SRaftCfgIndex *pCfg) {
+  char *serialized = raftCfgIndex2Str(pCfg);
+  printf("raftCfgIndexPrint2 | len:%" PRIu64 " | %s | %s \n", strlen(serialized), s, serialized);
+  fflush(NULL);
+  taosMemoryFree(serialized);
+}
+
+void raftCfgIndexLog(SRaftCfgIndex *pCfg) {
+  char *serialized = raftCfgIndex2Str(pCfg);
+  sTrace("raftCfgIndexLog | len:%" PRIu64 " | %s", strlen(serialized), serialized);
+  taosMemoryFree(serialized);
+}
+
+void raftCfgIndexLog2(char *s, SRaftCfgIndex *pCfg) {
+  char *serialized = raftCfgIndex2Str(pCfg);
+  sTrace("raftCfgIndexLog2 | len:%" PRIu64 " | %s | %s", strlen(serialized), s, serialized);
+  taosMemoryFree(serialized);
+}
@@ -108,13 +108,19 @@ int32_t syncRespMgrGetAndDel(SSyncRespMgr *pObj, uint64_t index, SRespStub *pStu
   return 0;  // get none object
 }

-void syncRespClean(SSyncRespMgr *pObj) {
+void syncRespCleanRsp(SSyncRespMgr *pObj) {
   taosThreadMutexLock(&(pObj->mutex));
-  syncRespCleanByTTL(pObj, pObj->ttl);
+  syncRespCleanByTTL(pObj, -1, true);
   taosThreadMutexUnlock(&(pObj->mutex));
 }

-void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl) {
+void syncRespClean(SSyncRespMgr *pObj) {
+  taosThreadMutexLock(&(pObj->mutex));
+  syncRespCleanByTTL(pObj, pObj->ttl, false);
+  taosThreadMutexUnlock(&(pObj->mutex));
+}
+
+void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl, bool rsp) {
   SRespStub *pStub = (SRespStub *)taosHashIterate(pObj->pRespHash, NULL);
   int        cnt = 0;
   int        sum = 0;

@@ -126,12 +132,12 @@ void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl) {

   while (pStub) {
     size_t len;
-    void * key = taosHashGetKey(pStub, &len);
+    void *key = taosHashGetKey(pStub, &len);
     uint64_t *pSeqNum = (uint64_t *)key;
     sum++;

     int64_t nowMS = taosGetTimestampMs();
-    if (nowMS - pStub->createTime > ttl) {
+    if (nowMS - pStub->createTime > ttl || -1 == ttl) {
       taosArrayPush(delIndexArray, pSeqNum);
       cnt++;

@@ -148,7 +154,14 @@ void syncRespCleanByTTL(SSyncRespMgr *pObj, int64_t ttl) {

       pStub->rpcMsg.pCont = NULL;
       pStub->rpcMsg.contLen = 0;
-      pSyncNode->pFsm->FpCommitCb(pSyncNode->pFsm, &(pStub->rpcMsg), cbMeta);
+
+      // TODO: and make rpcMsg body, call commit cb
+      // pSyncNode->pFsm->FpCommitCb(pSyncNode->pFsm, &(pStub->rpcMsg), cbMeta);
+
+      pStub->rpcMsg.code = TSDB_CODE_SYN_NOT_LEADER;
+      if (pStub->rpcMsg.info.handle != NULL) {
+        tmsgSendRsp(&(pStub->rpcMsg));
+      }
     }

     pStub = (SRespStub *)taosHashIterate(pObj->pRespHash, pStub);
@@ -56,6 +56,7 @@ add_executable(syncRaftLogTest3 "")
 add_executable(syncLeaderTransferTest "")
 add_executable(syncReconfigFinishTest "")
 add_executable(syncRestoreFromSnapshot "")
+add_executable(syncRaftCfgIndexTest "")


 target_sources(syncTest

@@ -290,6 +291,10 @@ target_sources(syncRestoreFromSnapshot
     PRIVATE
     "syncRestoreFromSnapshot.cpp"
 )
+target_sources(syncRaftCfgIndexTest
+    PRIVATE
+    "syncRaftCfgIndexTest.cpp"
+)


 target_include_directories(syncTest

@@ -582,6 +587,11 @@ target_include_directories(syncRestoreFromSnapshot
     "${TD_SOURCE_DIR}/include/libs/sync"
     "${CMAKE_CURRENT_SOURCE_DIR}/../inc"
 )
+target_include_directories(syncRaftCfgIndexTest
+    PUBLIC
+    "${TD_SOURCE_DIR}/include/libs/sync"
+    "${CMAKE_CURRENT_SOURCE_DIR}/../inc"
+)


 target_link_libraries(syncTest

@@ -816,6 +826,10 @@ target_link_libraries(syncRestoreFromSnapshot
     sync
     gtest_main
 )
+target_link_libraries(syncRaftCfgIndexTest
+    sync
+    gtest_main
+)


 enable_testing()
@@ -0,0 +1,96 @@
+#include "syncRaftStore.h"
+//#include <gtest/gtest.h>
+#include <stdio.h>
+#include "syncIO.h"
+#include "syncInt.h"
+#include "syncRaftCfg.h"
+#include "syncUtil.h"
+
+void logTest() {
+  sTrace("--- sync log test: trace");
+  sDebug("--- sync log test: debug");
+  sInfo("--- sync log test: info");
+  sWarn("--- sync log test: warn");
+  sError("--- sync log test: error");
+  sFatal("--- sync log test: fatal");
+}
+
+SRaftCfg* createRaftCfg() {
+  SRaftCfg* pCfg = (SRaftCfg*)taosMemoryMalloc(sizeof(SRaftCfg));
+  memset(pCfg, 0, sizeof(SRaftCfg));
+
+  pCfg->cfg.replicaNum = 3;
+  pCfg->cfg.myIndex = 1;
+  for (int i = 0; i < pCfg->cfg.replicaNum; ++i) {
+    ((pCfg->cfg.nodeInfo)[i]).nodePort = i * 100;
+    snprintf(((pCfg->cfg.nodeInfo)[i]).nodeFqdn, sizeof(((pCfg->cfg.nodeInfo)[i]).nodeFqdn), "100.200.300.%d", i);
+  }
+  pCfg->isStandBy = taosGetTimestampSec() % 100;
+  pCfg->batchSize = taosGetTimestampSec() % 100;
+
+  pCfg->configIndexCount = 5;
+  for (int i = 0; i < MAX_CONFIG_INDEX_COUNT; ++i) {
+    (pCfg->configIndexArr)[i] = -1;
+  }
+  for (int i = 0; i < pCfg->configIndexCount; ++i) {
+    (pCfg->configIndexArr)[i] = i * 100;
+  }
+
+  return pCfg;
+}
+
+SSyncCfg* createSyncCfg() {
+  SSyncCfg* pCfg = (SSyncCfg*)taosMemoryMalloc(sizeof(SSyncCfg));
+  memset(pCfg, 0, sizeof(SSyncCfg));
+
+  pCfg->replicaNum = 3;
+  pCfg->myIndex = 1;
+  for (int i = 0; i < pCfg->replicaNum; ++i) {
+    ((pCfg->nodeInfo)[i]).nodePort = i * 100;
+    snprintf(((pCfg->nodeInfo)[i]).nodeFqdn, sizeof(((pCfg->nodeInfo)[i]).nodeFqdn), "100.200.300.%d", i);
+  }
+
+  return pCfg;
+}
+
+const char *pFile = "./raft_config_index.json";
+
+void test1() {
+  int32_t code = raftCfgIndexCreateFile(pFile);
+  ASSERT(code == 0);
+
+  SRaftCfgIndex *pRaftCfgIndex = raftCfgIndexOpen(pFile);
+  raftCfgIndexLog2((char*)"==test1==", pRaftCfgIndex);
+
+  raftCfgIndexClose(pRaftCfgIndex);
+}
+
+void test2() {
+  SRaftCfgIndex *pRaftCfgIndex = raftCfgIndexOpen(pFile);
+  for (int i = 0; i < 500; ++i) {
+    raftCfgIndexAddConfigIndex(pRaftCfgIndex, i);
+  }
+  raftCfgIndexPersist(pRaftCfgIndex);
+
+  raftCfgIndexLog2((char*)"==test2==", pRaftCfgIndex);
+  raftCfgIndexClose(pRaftCfgIndex);
+}
+
+void test3() {
+  SRaftCfgIndex *pRaftCfgIndex = raftCfgIndexOpen(pFile);
+
+  raftCfgIndexLog2((char*)"==test3==", pRaftCfgIndex);
+  raftCfgIndexClose(pRaftCfgIndex);
+}
+
+int main() {
+  tsAsyncLog = 0;
+  sDebugFlag = DEBUG_TRACE + DEBUG_SCREEN + DEBUG_FILE;
+
+  logTest();
+  test1();
+  test2();
+  test3();
+
+  return 0;
+}
@@ -211,27 +211,27 @@ static void cliReleaseUnfinishedMsg(SCliConn* conn) {
 #define CONN_PERSIST_TIME(para)    ((para) <= 90000 ? 90000 : (para))
 #define CONN_GET_HOST_THREAD(conn) (conn ? ((SCliConn*)conn)->hostThrd : NULL)
 #define CONN_GET_INST_LABEL(conn)  (((STrans*)(((SCliThrd*)(conn)->hostThrd)->pTransInst))->label)
 #define CONN_SHOULD_RELEASE(conn, head) \
   do { \
     if ((head)->release == 1 && (head->msgLen) == sizeof(*head)) { \
       uint64_t ahandle = head->ahandle; \
       CONN_GET_MSGCTX_BY_AHANDLE(conn, ahandle); \
       transClearBuffer(&conn->readBuf); \
       transFreeMsg(transContFromHead((char*)head)); \
       if (transQueueSize(&conn->cliMsgs) > 0 && ahandle == 0) { \
         SCliMsg* cliMsg = transQueueGet(&conn->cliMsgs, 0); \
         if (cliMsg->type == Release) return; \
       } \
-      tDebug("%s conn %p receive release request, ref:%d", CONN_GET_INST_LABEL(conn), conn, T_REF_VAL_GET(conn)); \
+      tDebug("%s conn %p receive release request, refId:%" PRId64 "", CONN_GET_INST_LABEL(conn), conn, conn->refId); \
       if (T_REF_VAL_GET(conn) > 1) { \
         transUnrefCliHandle(conn); \
       } \
       destroyCmsg(pMsg); \
       cliReleaseUnfinishedMsg(conn); \
       transQueueClear(&conn->cliMsgs); \
       addConnToPool(((SCliThrd*)conn->hostThrd)->pool, conn); \
       return; \
     } \
   } while (0)

 #define CONN_GET_MSGCTX_BY_AHANDLE(conn, ahandle) \

@@ -890,8 +890,8 @@ SCliConn* cliGetConn(SCliMsg* pMsg, SCliThrd* pThrd, bool* ignore) {
   if (refId != 0) {
     SExHandle* exh = transAcquireExHandle(transGetRefMgt(), refId);
     if (exh == NULL) {
+      tError("failed to get conn, refId: %" PRId64 "", refId);
       *ignore = true;
-      destroyCmsg(pMsg);
       return NULL;
     } else {
       conn = exh->handle;

@@ -937,7 +937,16 @@ void cliHandleReq(SCliMsg* pMsg, SCliThrd* pThrd) {
   bool      ignore = false;
   SCliConn* conn = cliGetConn(pMsg, pThrd, &ignore);
   if (ignore == true) {
-    tError("ignore msg");
+    // persist conn already release by server
+    STransMsg resp = {0};
+    resp.code = TSDB_CODE_RPC_BROKEN_LINK;
+    resp.msgType = pMsg->msg.msgType + 1;
+
+    resp.info.ahandle = pMsg && pMsg->ctx ? pMsg->ctx->ahandle : NULL;
+    resp.info.traceId = pMsg->msg.info.traceId;
+
+    pTransInst->cfp(pTransInst->parent, &resp, NULL);
+    destroyCmsg(pMsg);
     return;
   }
@@ -1,62 +0,0 @@
-{
-    "filetype": "insert",
-    "cfgdir": "/etc/taos",
-    "host": "127.0.0.1",
-    "port": 6030,
-    "user": "root",
-    "password": "taosdata",
-    "thread_count": 4,
-    "create_table_thread_count": 4,
-    "result_file":"./insert_res.txt",
-    "confirm_parameter_prompt": "no",
-    "insert_interval": 0,
-    "interlace_rows": 10,
-    "num_of_records_per_req": 1000,
-    "max_sql_len": 1024000,
-    "databases": [{
-        "dbinfo": {
-            "name": "db",
-            "drop": "yes",
-            "replica": 1,
-
-            "cache": 50,
-
-            "precision": "ms",
-            "keep": 365,
-            "minRows": 100,
-            "maxRows": 4096,
-            "comp":2
-
-
-
-
-
-        },
-        "super_tables": [{
-            "name": "stb0",
-            "child_table_exists":"no",
-            "childtable_count": 1,
-            "childtable_prefix": "stb0_",
-            "auto_create_table": "no",
-            "batch_create_tbl_num": 10,
-            "data_source": "rand",
-            "insert_mode": "taosc",
-            "insert_rows": 100000,
-            "childtable_limit": -1,
-            "childtable_offset": 0,
-            "multi_thread_write_one_tbl": "no",
-            "interlace_rows": 0,
-            "insert_interval": 0,
-            "max_sql_len": 1024000,
-            "disorder_ratio": 0,
-            "disorder_range": 1,
-            "timestamp_step": 1000,
-            "start_timestamp": "2020-10-01 00:00:00.000",
-            "sample_format": "csv",
-            "sample_file": "./tools/taosdemoAllTest/sample.csv",
-            "tags_file": "",
-            "columns": [{"type": "INT", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BOOL"}],
-            "tags": [{"type": "TINYINT", "count":1}, {"type": "BINARY", "len": 16, "count":1}]
-        }]
-    }]
-}
@@ -1,81 +0,0 @@
-###################################################################
-# Copyright (c) 2016 by TAOS Technologies, Inc.
-# All rights reserved.
-#
-# This file is proprietary and confidential to TAOS Technologies.
-# No part of this file may be reproduced, stored, transmitted,
-# disclosed or used in any form or by any means other than as
-# expressly provided by the written permission from Jianhui Tao
-#
-###################################################################
-
-# -*- coding: utf-8 -*-
-
-import sys
-import os
-from util.log import *
-from util.cases import *
-from util.sql import *
-from util.dnodes import *
-
-
-class TDTestCase:
-    def init(self, conn, logSql):
-        tdLog.debug("start to execute %s" % __file__)
-        tdSql.init(conn.cursor(), logSql)
-
-    def getBuildPath(self):
-        selfPath = os.path.dirname(os.path.realpath(__file__))
-
-        if ("community" in selfPath):
-            projPath = selfPath[:selfPath.find("community")]
-        else:
-            projPath = selfPath[:selfPath.find("tests")]
-
-        for root, dirs, files in os.walk(projPath):
-            if ("taosd" in files):
-                rootRealPath = os.path.dirname(os.path.realpath(root))
-                if ("packaging" not in rootRealPath):
-                    buildPath = root[:len(root)-len("/build/bin")]
-                    break
-        return buildPath
-
-    def run(self):
-        buildPath = self.getBuildPath()
-        if (buildPath == ""):
-            tdLog.exit("taosd not found!")
-        else:
-            tdLog.info("taosd found in %s" % buildPath)
-        binPath = buildPath + "/build/bin/"
-
-        # insert: create one or mutiple tables per sql and insert multiple rows per sql
-        os.system("%staosdemo -f query/nestedQuery/insertData.json -y " % binPath)
-        tdSql.execute("use db")
-        tdSql.query("select count (tbname) from stb0")
-        tdSql.checkData(0, 0, 1000)
-        tdSql.query("select count (tbname) from stb1")
-        tdSql.checkData(0, 0, 1000)
-        tdSql.query("select count(*) from stb00_0")
-        tdSql.checkData(0, 0, 100)
-        tdSql.query("select count(*) from stb0")
-        tdSql.checkData(0, 0, 100000)
-        tdSql.query("select count(*) from stb01_1")
-        tdSql.checkData(0, 0, 200)
-        tdSql.query("select count(*) from stb1")
-        tdSql.checkData(0, 0, 200000)
-
-        testcaseFilename = os.path.split(__file__)[-1]
-        os.system("rm -rf ./insert_res.txt")
-        os.system("rm -rf query/nestedQuery/%s.sql" % testcaseFilename)
-
-    def stop(self):
-        tdSql.close()
-        tdLog.success("%s successfully executed" % __file__)
-
-
-tdCases.addWindows(__file__, TDTestCase())
-tdCases.addLinux(__file__, TDTestCase())
@@ -1,3 +0,0 @@
-1
-2
-3
File diff suppressed because it is too large
@ -1,100 +0,0 @@
8.855,"binary_str0" ,1626870128248246976
8.75,"binary_str1" ,1626870128249060032
5.44,"binary_str2" ,1626870128249067968
8.45,"binary_str3" ,1626870128249072064
4.07,"binary_str4" ,1626870128249075904
6.97,"binary_str5" ,1626870128249078976
6.86,"binary_str6" ,1626870128249082048
1.585,"binary_str7" ,1626870128249085120
1.4,"binary_str8" ,1626870128249087936
5.135,"binary_str9" ,1626870128249092032
3.15,"binary_str10" ,1626870128249095104
1.765,"binary_str11" ,1626870128249097920
7.71,"binary_str12" ,1626870128249100992
3.91,"binary_str13" ,1626870128249104064
5.615,"binary_str14" ,1626870128249106880
9.495,"binary_str15" ,1626870128249109952
3.825,"binary_str16" ,1626870128249113024
1.94,"binary_str17" ,1626870128249117120
5.385,"binary_str18" ,1626870128249119936
7.075,"binary_str19" ,1626870128249123008
5.715,"binary_str20" ,1626870128249126080
1.83,"binary_str21" ,1626870128249128896
6.365,"binary_str22" ,1626870128249131968
6.55,"binary_str23" ,1626870128249135040
6.315,"binary_str24" ,1626870128249138112
3.82,"binary_str25" ,1626870128249140928
2.455,"binary_str26" ,1626870128249145024
7.795,"binary_str27" ,1626870128249148096
2.47,"binary_str28" ,1626870128249150912
1.37,"binary_str29" ,1626870128249155008
5.39,"binary_str30" ,1626870128249158080
5.13,"binary_str31" ,1626870128249160896
4.09,"binary_str32" ,1626870128249163968
5.855,"binary_str33" ,1626870128249167040
0.17,"binary_str34" ,1626870128249170112
1.955,"binary_str35" ,1626870128249173952
0.585,"binary_str36" ,1626870128249178048
0.33,"binary_str37" ,1626870128249181120
7.925,"binary_str38" ,1626870128249183936
9.685,"binary_str39" ,1626870128249187008
2.6,"binary_str40" ,1626870128249191104
5.705,"binary_str41" ,1626870128249193920
3.965,"binary_str42" ,1626870128249196992
4.43,"binary_str43" ,1626870128249200064
8.73,"binary_str44" ,1626870128249202880
3.105,"binary_str45" ,1626870128249205952
9.39,"binary_str46" ,1626870128249209024
2.825,"binary_str47" ,1626870128249212096
9.675,"binary_str48" ,1626870128249214912
9.99,"binary_str49" ,1626870128249217984
4.51,"binary_str50" ,1626870128249221056
4.94,"binary_str51" ,1626870128249223872
7.72,"binary_str52" ,1626870128249226944
4.135,"binary_str53" ,1626870128249231040
2.325,"binary_str54" ,1626870128249234112
4.585,"binary_str55" ,1626870128249236928
8.76,"binary_str56" ,1626870128249240000
4.715,"binary_str57" ,1626870128249243072
0.56,"binary_str58" ,1626870128249245888
5.35,"binary_str59" ,1626870128249249984
5.075,"binary_str60" ,1626870128249253056
6.665,"binary_str61" ,1626870128249256128
7.13,"binary_str62" ,1626870128249258944
2.775,"binary_str63" ,1626870128249262016
5.775,"binary_str64" ,1626870128249265088
1.62,"binary_str65" ,1626870128249267904
1.625,"binary_str66" ,1626870128249270976
8.15,"binary_str67" ,1626870128249274048
0.75,"binary_str68" ,1626870128249277120
3.265,"binary_str69" ,1626870128249280960
8.585,"binary_str70" ,1626870128249284032
1.88,"binary_str71" ,1626870128249287104
8.44,"binary_str72" ,1626870128249289920
5.12,"binary_str73" ,1626870128249295040
2.58,"binary_str74" ,1626870128249298112
9.42,"binary_str75" ,1626870128249300928
1.765,"binary_str76" ,1626870128249304000
2.66,"binary_str77" ,1626870128249308096
1.405,"binary_str78" ,1626870128249310912
5.595,"binary_str79" ,1626870128249315008
2.28,"binary_str80" ,1626870128249318080
9.24,"binary_str81" ,1626870128249320896
9.03,"binary_str82" ,1626870128249323968
6.055,"binary_str83" ,1626870128249327040
1.74,"binary_str84" ,1626870128249330112
5.77,"binary_str85" ,1626870128249332928
1.97,"binary_str86" ,1626870128249336000
0.3,"binary_str87" ,1626870128249339072
7.145,"binary_str88" ,1626870128249342912
0.88,"binary_str89" ,1626870128249345984
8.025,"binary_str90" ,1626870128249349056
4.81,"binary_str91" ,1626870128249351872
0.725,"binary_str92" ,1626870128249355968
3.85,"binary_str93" ,1626870128249359040
9.455,"binary_str94" ,1626870128249362112
2.265,"binary_str95" ,1626870128249364928
3.985,"binary_str96" ,1626870128249368000
9.375,"binary_str97" ,1626870128249371072
0.2,"binary_str98" ,1626870128249373888
6.95,"binary_str99" ,1626870128249377984
@ -1,100 +0,0 @@
"string0",7,8.615
"string1",4,9.895
"string2",3,2.92
"string3",3,5.62
"string4",7,1.615
"string5",6,1.45
"string6",5,7.48
"string7",7,3.01
"string8",5,4.76
"string9",10,7.09
"string10",2,8.38
"string11",7,8.65
"string12",5,5.025
"string13",10,5.765
"string14",2,4.57
"string15",2,1.03
"string16",7,6.98
"string17",10,0.23
"string18",7,5.815
"string19",1,2.37
"string20",10,8.865
"string21",3,1.235
"string22",2,8.62
"string23",9,1.045
"string24",8,4.34
"string25",1,5.455
"string26",2,4.475
"string27",1,6.95
"string28",2,3.39
"string29",3,6.79
"string30",7,9.735
"string31",1,9.79
"string32",10,9.955
"string33",1,5.095
"string34",3,3.86
"string35",9,5.105
"string36",10,4.22
"string37",1,2.78
"string38",9,6.345
"string39",1,0.975
"string40",5,6.16
"string41",4,7.735
"string42",5,6.6
"string43",8,2.845
"string44",1,0.655
"string45",3,2.995
"string46",9,3.6
"string47",8,3.47
"string48",3,7.98
"string49",6,2.225
"string50",9,5.44
"string51",4,6.335
"string52",3,2.955
"string53",1,0.565
"string54",6,5.575
"string55",6,9.905
"string56",9,6.025
"string57",8,0.94
"string58",10,0.15
"string59",8,1.555
"string60",4,2.28
"string61",2,8.29
"string62",9,6.22
"string63",6,3.35
"string64",10,6.7
"string65",3,9.345
"string66",7,9.815
"string67",1,5.365
"string68",10,3.81
"string69",1,6.405
"string70",8,2.715
"string71",3,8.58
"string72",8,6.34
"string73",2,7.49
"string74",4,8.64
"string75",3,8.995
"string76",7,3.465
"string77",1,7.64
"string78",6,3.65
"string79",6,1.4
"string80",6,5.875
"string81",2,1.22
"string82",5,7.87
"string83",9,8.41
"string84",9,8.9
"string85",9,3.89
"string86",2,5.0
"string87",2,4.495
"string88",4,2.835
"string89",3,5.895
"string90",7,8.41
"string91",5,5.125
"string92",7,9.165
"string93",5,8.315
"string94",10,7.485
"string95",7,4.635
"string96",2,6.015
"string97",8,0.595
"string98",3,8.79
"string99",4,1.72
@ -1,63 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "testdb3",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ms",
            "keep": 3600,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb0_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "",
            "sample_file": "",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":3}, {"type": "BINARY", "len": 16, "count":2}, {"type": "BINARY", "len": 32, "count":2},
                        {"type": "TIMESTAMP"}, {"type": "BIGINT", "count":3},{"type": "FLOAT", "count":1},{"type": "SMALLINT", "count":1},{"type": "TINYINT", "count":1},
                        {"type": "BOOL"},{"type": "NCHAR","len":16}],
            "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5},{"type": "NCHAR","len":16, "count":1}]
        }]
    }]
}
@ -1,63 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "testdb1",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ns",
            "keep": 3600,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb0_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "",
            "sample_file": "",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":3}, {"type": "BINARY", "len": 16, "count":2}, {"type": "BINARY", "len": 32, "count":2},
                        {"type": "TIMESTAMP"}, {"type": "BIGINT", "count":3},{"type": "FLOAT", "count":1},{"type": "SMALLINT", "count":1},{"type": "TINYINT", "count":1},
                        {"type": "BOOL"},{"type": "NCHAR","len":16}],
            "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5},{"type": "NCHAR","len":16, "count":1}]
        }]
    }]
}
@ -1,63 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "testdb2",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "us",
            "keep": 3600,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb0_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "",
            "sample_file": "",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":3}, {"type": "BINARY", "len": 16, "count":2}, {"type": "BINARY", "len": 32, "count":2},
                        {"type": "TIMESTAMP"}, {"type": "BIGINT", "count":3},{"type": "FLOAT", "count":1},{"type": "SMALLINT", "count":1},{"type": "TINYINT", "count":1},
                        {"type": "BOOL"},{"type": "NCHAR","len":16}],
            "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5},{"type": "NCHAR","len":16, "count":1}]
        }]
    }]
}
@ -1,115 +0,0 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root)-len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # insert: create one or multiple tables per sql and insert multiple rows per sql

        # check the params of taosdemo when time_step is in nanoseconds
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoInsertNanoDB.json -y " % binPath)
        tdSql.execute("use testdb1")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.getData(9, 1)
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.000099000")

        # check the params of taosdemo when time_step is in microseconds
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoInsertUSDB.json -y " % binPath)
        tdSql.execute("use testdb2")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.getData(9, 1)
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.099000")

        # check the params of taosdemo when time_step is in milliseconds
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoInsertMSDB.json -y " % binPath)
        tdSql.execute("use testdb3")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:01:39.000")

        os.system("rm -rf ./res.txt")
        os.system("rm -rf ./*.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -1,88 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "nsdb",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ns",
            "keep": 3600,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb0_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "",
            "sample_file": "",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":3}, {"type": "BINARY", "len": 16, "count":2}, {"type": "BINARY", "len": 32, "count":2},
                        {"type": "TIMESTAMP"}, {"type": "BIGINT", "count":3},{"type": "FLOAT", "count":1},{"type": "SMALLINT", "count":1},{"type": "TINYINT", "count":1},
                        {"type": "BOOL"},{"type": "NCHAR","len":16}],
            "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5},{"type": "NCHAR","len":16, "count":1}]
        },
        {
            "name": "stb1",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb1_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 10,
            "disorder_range": 1000,
            "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "",
            "sample_file": "",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":3}, {"type": "BINARY", "len": 16, "count":2}, {"type": "BINARY", "len": 32, "count":2},
                        {"type": "TIMESTAMP"}, {"type": "BIGINT", "count":3},{"type": "FLOAT", "count":1},{"type": "SMALLINT", "count":1},{"type": "TINYINT", "count":1},
                        {"type": "BOOL"},{"type": "NCHAR","len":16}],
            "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5},{"type": "NCHAR","len":16, "count":1}]
        }]
    }]
}
@ -1,84 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "subnsdb",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ns",
            "keep": 3600,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 10,
            "childtable_prefix": "tb0_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "samples",
            "insert_mode": "taosc",
            "insert_rows": 10,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count":1}, {"type": "TIMESTAMP", "count":1}],
            "tags": [{"type": "BINARY", "len": 16, "count":1},{"type": "INT"},{"type": "DOUBLE"}]
        },
        {
            "name": "stb1",
            "child_table_exists":"no",
            "childtable_count": 10,
            "childtable_prefix": "tb1_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "samples",
            "insert_mode": "taosc",
            "insert_rows": 10,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 10,
            "disorder_range": 1000,
            "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count":1}, {"type": "TIMESTAMP", "count":1}],
            "tags": [{"type": "BINARY", "len": 16, "count":1},{"type": "INT"},{"type": "DOUBLE"}]
        }]
    }]
}
@ -1,62 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "nsdb2",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ns",
            "keep": 3600,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb0_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 10,
            "start_timestamp": "now",
            "sample_format": "",
            "sample_file": "",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":3}, {"type": "BINARY", "len": 16, "count":2}, {"type": "BINARY", "len": 32, "count":2},
                        {"type": "TIMESTAMP"}, {"type": "BIGINT", "count":3},{"type": "FLOAT", "count":1},{"type": "SMALLINT", "count":1},{"type": "TINYINT", "count":1},
                        {"type": "BOOL"},{"type": "NCHAR","len":16}],
            "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5},{"type": "NCHAR","len":16, "count":1}]
        }]
    }]
}
@ -1,84 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "nsdbcsv",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ns",
            "keep": 3600,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb0_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "samples",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count":1}, {"type": "TIMESTAMP", "count":1}],
            "tags": [{"type": "BINARY", "len": 16, "count":1},{"type": "INT"},{"type": "DOUBLE"}]
        },
        {
            "name": "stb1",
            "child_table_exists":"no",
            "childtable_count": 100,
            "childtable_prefix": "tb1_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "samples",
            "insert_mode": "taosc",
            "insert_rows": 100,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 10,
            "disorder_range": 1000,
            "timestamp_step": 10000000,
            "start_timestamp": "2021-07-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/NanoTestCase/nano_samples.csv",
            "tags_file": "./tools/taosdemoAllTest/NanoTestCase/nano_sampletags.csv",
            "columns": [{"type": "DOUBLE"}, {"type": "BINARY", "len": 64, "count":1}, {"type": "TIMESTAMP", "count":1}],
            "tags": [{"type": "BINARY", "len": 16, "count":1},{"type": "INT"},{"type": "DOUBLE"}]
        }]
    }]
}
@@ -1,169 +0,0 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # insert: create one or mutiple tables per sql and insert multiple rows per sql
        # insert data from a special timestamp
        # check stable stb0

        os.system(
            "%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json -y " %
            binPath)
        tdSql.execute("use nsdb")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        tdSql.query("select last(ts) from stb0")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.990000000")

        # check stable stb1 which is insert with disord

        tdSql.query("select count (tbname) from stb1")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb1_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb1")
        tdSql.checkData(0, 0, 10000)
        # check c8 is an nano timestamp
        tdSql.query("describe stb1")
        tdSql.checkDataType(9, 1, "TIMESTAMP")
        # check insert timestamp_step is nano_second
        tdSql.query("select last(ts) from stb1")
        tdSql.checkData(0, 0, "2021-07-01 00:00:00.990000000")

        # insert data from now time

        # check stable stb0
        os.system(
            "%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseNow.json -y " %
            binPath)

        tdSql.execute("use nsdb2")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from tb0_0")
        tdSql.checkData(0, 0, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        # check c8 is an nano timestamp
        tdSql.query("describe stb0")
        tdSql.checkDataType(9, 1, "TIMESTAMP")

        # insert by csv files and timetamp is long int , strings in ts and
        # cols

        os.system(
            "%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json -y " %
            binPath)
        tdSql.execute("use nsdbcsv")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(3, 1, "TIMESTAMP")
        tdSql.query(
            "select count(*) from stb0 where ts > \"2021-07-01 00:00:00.490000000\"")
        tdSql.checkData(0, 0, 5000)
        tdSql.query("select count(*) from stb0 where ts < 1626918583000000000")
        tdSql.checkData(0, 0, 10000)

        os.system("rm -rf ./insert_res.txt")
        os.system(
            "rm -rf tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNano*.py.sql")

        # taosdemo test insert with command and parameter , detals show
        # taosdemo --help
        os.system(
            "%staosdemo -u root -ptaosdata -P 6030 -a 1 -m pre -n 10 -T 20 -t 60 -o res.txt -y " %
            binPath)
        tdSql.query("select count(*) from test.meters")
        tdSql.checkData(0, 0, 600)
        # check taosdemo -s

        sqls_ls = [
            'drop database if exists nsdbsql;',
            'create database nsdbsql precision "ns" keep 3600 duration 6 update 1;',
            'use nsdbsql;',
            'CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupdId int);',
            'CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);',
            'INSERT INTO d1001 USING METERS TAGS ("Beijng.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32);',
            'INSERT INTO d1001 USING METERS TAGS ("Beijng.Chaoyang", 2) VALUES (now, 85, 32, 0.76);']

        with open("./taosdemoTestNanoCreateDB.sql", mode="a") as sql_files:
            for sql in sqls_ls:
                sql_files.write(sql + "\n")
            sql_files.close()

        sleep(10)

        os.system("%staosdemo -s taosdemoTestNanoCreateDB.sql -y " % binPath)
        tdSql.query("select count(*) from nsdbsql.meters")
        tdSql.checkData(0, 0, 2)

        os.system("rm -rf ./res.txt")
        os.system("rm -rf ./*.py.sql")
        os.system("rm -rf ./taosdemoTestNanoCreateDB.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -1,92 +0,0 @@
{
    "filetype": "query",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "confirm_parameter_prompt": "no",
    "databases": "nsdb",
    "query_times": 10,
    "query_mode": "taosc",
    "specified_table_query": {
        "query_interval": 1,
        "concurrent": 2,
        "sqls": [
            {
                "sql": "select count(*) from stb0 where ts>\"2021-07-01 00:01:00.000000000 \" ;",
                "result": "./query_res0.txt"
            },
            {
                "sql": "select count(*) from stb0 where ts>\"2021-07-01 00:01:00.000000000\" and ts <=\"2021-07-01 00:01:10.000000000\" ;",
                "result": "./query_res1.txt"
            },
            {
                "sql": "select count(*) from stb0 where ts>now-20d ;",
                "result": "./query_res2.txt"
            },
            {
                "sql": "select max(c10) from stb0;",
                "result": "./query_res3.txt"
            },
            {
                "sql": "select min(c1) from stb0;",
                "result": "./query_res4.txt"
            },
            {
                "sql": "select avg(c1) from stb0;",
                "result": "./query_res5.txt"
            },
            {
                "sql":"select count(*) from stb0 group by tbname;",
                "result":"./query_res6.txt"
            }
        ]
    },
    "super_table_query": {
        "stblname": "stb0",
        "query_interval": 0,
        "threads": 4,
        "sqls": [
            {
                "sql": "select count(*) from xxxx where ts>\"2021-07-01 00:01:00.000000000 \" ;",
                "result": "./query_res_tb0.txt"
            },
            {
                "sql":"select count(*) from xxxx where ts>\"2021-07-01 00:01:00.000000000\" and ts <=\"2021-07-01 00:01:10.000000000\" ;",
                "result": "./query_res_tb1.txt"
            },
            {
                "sql":"select first(*) from xxxx ;",
                "result": "./query_res_tb2.txt"
            },
            {
                "sql":"select last(*) from xxxx;",
                "result": "./query_res_tb3.txt"
            },
            {
                "sql":"select last_row(*) from xxxx ;",
                "result": "./query_res_tb4.txt"
            },
            {
                "sql":"select max(c10) from xxxx ;",
                "result": "./query_res_tb5.txt"
            },
            {
                "sql":"select min(c1) from xxxx ;",
                "result": "./query_res_tb6.txt"
            },
            {
                "sql":"select avg(c10) from xxxx ;",
                "result": "./query_res_tb7.txt"
            }
        ]
    }
}
@@ -1,157 +0,0 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root)-len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath+ "/build/bin/"

        # query: query test for nanoSecond with where and max min groupby order
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabase.json -y " % binPath)

        tdSql.execute("use nsdb")

        # use where to filter

        tdSql.query("select count(*) from stb0 where ts>\"2021-07-01 00:00:00.590000000 \" ")
        tdSql.checkData(0, 0, 4000)
        tdSql.query("select count(*) from stb0 where ts>\"2021-07-01 00:00:00.000000000\" and ts <=\"2021-07-01 00:00:00.590000000\" ")
        tdSql.checkData(0, 0, 5900)

        tdSql.query("select count(*) from tb0_0 where ts>\"2021-07-01 00:00:00.590000000 \" ;")
        tdSql.checkData(0, 0, 40)
        tdSql.query("select count(*) from tb0_0 where ts>\"2021-07-01 00:00:00.000000000\" and ts <=\"2021-07-01 00:00:00.590000000\" ")
        tdSql.checkData(0, 0, 59)


        # select max min avg from special col
        tdSql.query("select max(c10) from stb0;")
        print("select max(c10) from stb0 : " , tdSql.getData(0, 0))

        tdSql.query("select max(c10) from tb0_0;")
        print("select max(c10) from tb0_0 : " , tdSql.getData(0, 0))


        tdSql.query("select min(c1) from stb0;")
        print( "select min(c1) from stb0 : " , tdSql.getData(0, 0))

        tdSql.query("select min(c1) from tb0_0;")
        print( "select min(c1) from tb0_0 : " , tdSql.getData(0, 0))

        tdSql.query("select avg(c1) from stb0;")
        print( "select avg(c1) from stb0 : " , tdSql.getData(0, 0))

        tdSql.query("select avg(c1) from tb0_0;")
        print( "select avg(c1) from tb0_0 : " , tdSql.getData(0, 0))

        tdSql.query("select count(*) from stb0 group by tbname;")
        tdSql.checkData(0, 0, 100)
        tdSql.checkData(10, 0, 100)

        # query : query above sqls by taosdemo and continuously

        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuery.json -y " % binPath)


        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabasecsv.json -y " % binPath)
        tdSql.execute("use nsdbcsv")
        tdSql.query("show stables")
        tdSql.checkData(0, 4, 100)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("describe stb0")
        tdSql.checkDataType(3, 1, "TIMESTAMP")
        tdSql.query("select count(*) from stb0 where ts >\"2021-07-01 00:00:00.490000000\"")
        tdSql.checkData(0, 0, 5000)
        tdSql.query("select count(*) from stb0 where ts <now -1d-1h-3s")
        tdSql.checkData(0, 0, 10000)
        tdSql.query("select count(*) from stb0 where ts < 1626918583000000000")
        tdSql.checkData(0, 0, 10000)
        tdSql.execute('select count(*) from stb0 where c2 > 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 < 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 = 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 != 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 <> 162687012800000000')
        tdSql.execute('select count(*) from stb0 where c2 > "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 < "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 = "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 != "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where c2 <> "2021-07-21 20:22:08.248246976"')
        tdSql.execute('select count(*) from stb0 where ts between "2021-07-01 00:00:00.000000000" and "2021-07-01 00:00:00.990000000"')
        tdSql.execute('select count(*) from stb0 where ts between 1625068800000000000 and 1625068801000000000')
        tdSql.query('select avg(c0) from stb0 interval(5000000000b)')
        tdSql.checkRows(1)

        tdSql.query('select avg(c0) from stb0 interval(100000000b)')
        tdSql.checkRows(10)

        tdSql.error('select avg(c0) from stb0 interval(1b)')
        tdSql.error('select avg(c0) from stb0 interval(999b)')

        tdSql.query('select avg(c0) from stb0 interval(1000b)')
        tdSql.checkRows(100)

        tdSql.query('select avg(c0) from stb0 interval(1u)')
        tdSql.checkRows(100)

        tdSql.query('select avg(c0) from stb0 interval(100000000b) sliding (100000000b)')
        tdSql.checkRows(10)

        # query : query above sqls by taosdemo and continuously
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoQuerycsv.json -y " % binPath)

        os.system("rm -rf ./query_res*.txt*")
        os.system("rm -rf tools/taosdemoAllTest/NanoTestCase/*.py.sql")


    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -1,110 +0,0 @@
{
    "filetype": "query",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "confirm_parameter_prompt": "no",
    "databases": "nsdbcsv",
    "query_times": 10,
    "query_mode": "taosc",
    "specified_table_query": {
        "query_interval": 1,
        "concurrent": 2,
        "sqls": [
            {
                "sql": "select count(*) from stb0 where ts> \"2021-07-01 00:00:00.490000000\" ;",
                "result": "./query_res0.txt"
            },
            {
                "sql": "select count(*) from stb0 where ts < now -22d-1h-3s ;",
                "result": "./query_res1.txt"
            },
            {
                "sql": "select count(*) from stb0 where ts < 1626918583000000000 ;",
                "result": "./query_res2.txt"
            },
            {
                "sql": "select count(*) from stb0 where c2 <> 162687012800000000;",
                "result": "./query_res3.txt"
            },
            {
                "sql": "select count(*) from stb0 where c2 != \"2021-07-21 20:22:08.248246976\";",
                "result": "./query_res4.txt"
            },
            {
                "sql": "select count(*) from stb0 where ts between \"2021-07-01 00:00:00.000000000\" and \"2021-07-01 00:00:00.990000000\";",
                "result": "./query_res5.txt"
            },
            {
                "sql":"select count(*) from stb0 group by tbname;",
                "result":"./query_res6.txt"
            },
            {
                "sql":"select count(*) from stb0 where ts between 1625068800000000000 and 1625068801000000000;",
                "result":"./query_res7.txt"
            },
            {
                "sql":"select avg(c0) from stb0 interval(5000000000b);",
                "result":"./query_res8.txt"
            },
            {
                "sql":"select avg(c0) from stb0 interval(100000000b) sliding (100000000b);",
                "result":"./query_res9.txt"
            }
        ]
    },
    "super_table_query": {
        "stblname": "stb0",
        "query_interval": 0,
        "threads": 4,
        "sqls": [
            {
                "sql": "select count(*) from xxxx where ts > \"2021-07-01 00:00:00.490000000\" ;",
                "result": "./query_res_tb0.txt"
            },
            {
                "sql":"select count(*) from xxxx where ts between \"2021-07-01 00:00:00.000000000\" and \"2021-07-01 00:00:00.990000000\" ;",
                "result": "./query_res_tb1.txt"
            },
            {
                "sql":"select first(*) from xxxx ;",
                "result": "./query_res_tb2.txt"
            },
            {
                "sql":"select last(*) from xxxx;",
                "result": "./query_res_tb3.txt"
            },
            {
                "sql":"select last_row(*) from xxxx ;",
                "result": "./query_res_tb4.txt"
            },
            {
                "sql":"select max(c0) from xxxx ;",
                "result": "./query_res_tb5.txt"
            },
            {
                "sql":"select min(c0) from xxxx ;",
                "result": "./query_res_tb6.txt"
            },
            {
                "sql":"select avg(c0) from xxxx ;",
                "result": "./query_res_tb7.txt"
            },
            {
                "sql":"select avg(c0) from xxxx interval(100000000b) sliding (100000000b) ;",
                "result": "./query_res_tb8.txt"
            }
        ]
    }
}
@@ -1,32 +0,0 @@
{
    "filetype":"subscribe",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "subnsdb",
    "confirm_parameter_prompt": "no",
    "specified_table_query":
    {
        "concurrent":2,
        "mode":"sync",
        "interval":10000,
        "restart":"yes",
        "keepProgress":"yes",
        "sqls": [
            {
                "sql": "select * from stb0 where ts>= \"2021-07-01 00:00:00.000000000\" ;",
                "result": "./subscribe_res0.txt"
            },
            {
                "sql": "select * from stb0 where ts < now -2d-1h-3s ;",
                "result": "./subscribe_res1.txt"
            },
            {
                "sql": "select * from stb0 where ts < 1626918583000000000 ;",
                "result": "./subscribe_res2.txt"
            }]
    }
}
@@ -1,125 +0,0 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import time
from datetime import datetime
import subprocess


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root)-len("/build/bin")]
                    break
        return buildPath

    # get the number of subscriptions
    def subTimes(self,filename):
        self.filename = filename
        command = 'cat %s |wc -l'% filename
        times = int(subprocess.getstatusoutput(command)[1])
        return times

    # assert results
    def assertCheck(self,filename,subResult,expectResult):
        self.filename = filename
        self.subResult = subResult
        self.expectResult = expectResult
        args0 = (filename, subResult, expectResult)
        assert subResult == expectResult , "Queryfile:%s ,result is %s != expect: %s" % args0

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath+ "/build/bin/"

        # clear env
        os.system("ps -ef |grep 'taosdemoAllTest/taosdemoTestSupportNanoSubscribe.json' |grep -v 'grep' |awk '{print $2}'|xargs kill -9")
        os.system("rm -rf ./subscribe_res*")
        os.system("rm -rf ./all_subscribe_res*")


        # insert data
        os.system("%staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestNanoDatabaseInsertForSub.json" % binPath)
        os.system("nohup %staosdemo -f tools/taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoSubscribe.json &" % binPath)
        query_pid = int(subprocess.getstatusoutput('ps aux|grep "taosdemoAllTest/NanoTestCase/taosdemoTestSupportNanoSubscribe.json" |grep -v "grep"|awk \'{print $2}\'')[1])


        # merge result files
        sleep(5)
        os.system("cat subscribe_res0.txt* > all_subscribe_res0.txt")
        os.system("cat subscribe_res1.txt* > all_subscribe_res1.txt")
        os.system("cat subscribe_res2.txt* > all_subscribe_res2.txt")


        # correct subscribeTimes testcase
        subTimes0 = self.subTimes("all_subscribe_res0.txt")
        self.assertCheck("all_subscribe_res0.txt",subTimes0 ,200)

        subTimes1 = self.subTimes("all_subscribe_res1.txt")
        self.assertCheck("all_subscribe_res1.txt",subTimes1 ,200)

        subTimes2 = self.subTimes("all_subscribe_res2.txt")
        self.assertCheck("all_subscribe_res2.txt",subTimes2 ,200)


        # insert extral data
        tdSql.execute("use subnsdb")
        tdSql.execute("insert into tb0_0 values(now,100.1000,'subtest1',now-1s)")
        sleep(15)

        os.system("cat subscribe_res0.txt* > all_subscribe_res0.txt")
        subTimes0 = self.subTimes("all_subscribe_res0.txt")
        print("pass")
        self.assertCheck("all_subscribe_res0.txt",subTimes0 ,202)


        # correct data testcase
        os.system("kill -9 %d" % query_pid)
        sleep(3)
        os.system("rm -rf ./subscribe_res*")
        os.system("rm -rf ./all_subscribe*")
        os.system("rm -rf ./*.py.sql")


    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)

tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -1,41 +0,0 @@
{
    "filetype":"subscribe",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "db",
    "confirm_parameter_prompt": "no",
    "specified_table_query":
    {
        "concurrent":2,
        "mode":"sync",
        "interval":0,
        "resubAfterConsume":1,
        "endAfterConsume":1,
        "keepProgress":"yes",
        "restart":"no",
        "sqls": [
            {
                "sql": "select * from stb00_0",
                "result": "./subscribe_res0.txt"
            }]
    },
    "super_table_query":
    {
        "stblname": "stb0",
        "threads":2,
        "mode":"sync",
        "interval":1000,
        "resubAfterConsume":1,
        "endAfterConsume":1,
        "keepProgress":"yes",
        "restart":"no",
        "sqls": [
            {
                "sql": "select * from xxxx where ts >= '2021-02-25 10:00:01.000' ",
                "result": "./subscribe_res2.txt"
            }]
    }
}
@@ -1,41 +0,0 @@
{
    "filetype":"subscribe",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "db",
    "confirm_parameter_prompt": "no",
    "specified_table_query":
    {
        "concurrent":2,
        "mode":"sync",
        "interval":0,
        "resubAfterConsume":1,
        "endAfterConsume":-1,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from stb00_0",
                "result": "./subscribe_res0.txt"
            }]
    },
    "super_table_query":
    {
        "stblname": "stb0",
        "threads":2,
        "mode":"sync",
        "interval":1000,
        "resubAfterConsume":1,
        "endAfterConsume":-1,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from xxxx where ts >= '2021-02-25 10:00:01.000' ",
                "result": "./subscribe_res2.txt"
            }]
    }
}
@@ -1,41 +0,0 @@
{
    "filetype":"subscribe",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "db",
    "confirm_parameter_prompt": "no",
    "specified_table_query":
    {
        "concurrent":2,
        "mode":"sync",
        "interval":0,
        "resubAfterConsume":-1,
        "endAfterConsume":-1,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from stb00_0",
                "result": "./subscribe_res0.txt"
            }]
    },
    "super_table_query":
    {
        "stblname": "stb0",
        "threads":2,
        "mode":"sync",
        "interval":1000,
        "resubAfterConsume":-1,
        "endAfterConsume":-1,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from xxxx where ts >= '2021-02-25 10:00:01.000' ",
                "result": "./subscribe_res2.txt"
            }]
    }
}
@@ -1,41 +0,0 @@
{
    "filetype":"subscribe",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "db",
    "confirm_parameter_prompt": "no",
    "specified_table_query":
    {
        "concurrent":2,
        "mode":"sync",
        "interval":0,
        "resubAfterConsume":-1,
        "endAfterConsume":0,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from stb00_0",
                "result": "./subscribe_res0.txt"
            }]
    },
    "super_table_query":
    {
        "stblname": "stb0",
        "threads":2,
        "mode":"sync",
        "interval":0,
        "resubAfterConsume":-1,
        "endAfterConsume":0,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from xxxx where ts >= '2021-02-25 10:00:01.000' ",
                "result": "./subscribe_res2.txt"
            }]
    }
}
@@ -1,41 +0,0 @@
{
    "filetype":"subscribe",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "databases": "db",
    "confirm_parameter_prompt": "no",
    "specified_table_query":
    {
        "concurrent":2,
        "mode":"sync",
        "interval":0,
        "resubAfterConsume":-1,
        "endAfterConsume":1,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from stb00_0",
                "result": "./subscribe_res0.txt"
            }]
    },
    "super_table_query":
    {
        "stblname": "stb0",
        "threads":2,
        "mode":"sync",
        "interval":1000,
        "resubAfterConsume":-1,
        "endAfterConsume":2,
        "keepProgress":"no",
        "restart":"yes",
        "sqls": [
            {
                "sql": "select * from xxxx where ts >= '2021-02-25 10:00:01.000' ",
                "result": "./subscribe_res2.txt"
            }]
    }
}
@@ -1,62 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 4,
    "create_table_thread_count": 4,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 100,
    "num_of_records_per_req": 1000,
    "max_sql_len": 1024000,
    "databases": [{
        "dbinfo": {
            "name": "db",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ms",
            "keep": 365,
            "minRows": 100,
            "maxRows": 4096,
            "comp":2
        },
        "super_tables": [{
            "name": "stb0",
            "child_table_exists":"no",
            "childtable_count": 60,
            "childtable_prefix": "stb00_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 20,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 100000,
            "childtable_limit": -1,
            "childtable_offset":0,
            "multi_thread_write_one_tbl": "no",
            "interlace_rows": 1000,
            "insert_interval":0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1,
            "start_timestamp": "2020-10-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./sample.csv",
            "tags_file": "",
            "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":10}, {"type": "BINARY", "len": 16, "count":3}, {"type": "BINARY", "len": 32, "count":6}],
            "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":5}]
        }]
    }]
}
@@ -1,89 +0,0 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
import subprocess
import time
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root)-len("/build/bin")]
                    break
        return buildPath

    def run(self):
        tdSql.prepare()
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # insert 6,000,000 rows into stb0 (60 child tables x 100000 rows each)
        os.system("%staosdemo -f tools/taosdemoAllTest/TD-3453/query-interrupt.json -y " % binPath)
        tdSql.execute("use db")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 60)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 6000000)
        os.system('%staosdemo -f tools/taosdemoAllTest/TD-3453/queryall.json -y & ' % binPath)
        time.sleep(2)
        query_pid = int(subprocess.getstatusoutput('ps aux|grep "TD-3453/queryall.json" |grep -v "grep"|awk \'{print $2}\'')[1])
        taosd_cpu_load_1 = float(subprocess.getstatusoutput('top -n 1 -b -p $(ps aux|grep "bin/taosd -c"|grep -v "grep" |awk \'{print $2}\')|awk \'END{print}\' |awk \'{print $9}\'')[1])
        if taosd_cpu_load_1 > 10.0:
            os.system("kill -9 %d" % query_pid)
            time.sleep(5)
            taosd_cpu_load_2 = float(subprocess.getstatusoutput('top -n 1 -b -p $(ps aux|grep "bin/taosd -c"|grep -v "grep" |awk \'{print $2}\')|awk \'END{print}\' |awk \'{print $9}\'')[1])
            if taosd_cpu_load_2 < 10.0:
                suc_kill = 60
            else:
                suc_kill = 10
                print("taosd_cpu_load is higher than 10%")
        else:
            suc_kill = 20
            print("taosd_cpu_load is still less than 10%")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, "%d" % suc_kill)
        os.system("rm -rf querySystemInfo*")
        os.system("rm -rf insert_res.txt")
        os.system("rm -rf query_res.txt")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -1,20 +0,0 @@
{
  "filetype": "query",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "confirm_parameter_prompt": "no",
  "databases": "db",
  "specified_table_query": {
    "query_interval": 1,
    "concurrent": 1,
    "sqls": [
      {
        "sql": "select * from stb0",
        "result": "./query_res.txt"
      }
    ]
  }
}
@@ -1,10 +0,0 @@
0,0,'TAOSdata-0'
1,1,'TAOSdata-1'
2,22,'TAOSdata-2'
3,333,'TAOSdata-3'
4,4444,'TAOSdata-4'
5,55555,'TAOSdata-5'
6,666666,'TAOSdata-6'
7,7777777,'TAOSdata-7'
8,88888888,'TAOSdata-8'
9,999999999,'TAOSdata-9'
@@ -1,62 +0,0 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 10,
  "create_table_thread_count": 10,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 10,
  "num_of_records_per_req": 1,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db",
      "drop": "yes",
      "replica": 1,

      "cache": 50,

      "precision": "ms",
      "keep": 36500,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2





    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 10000,
      "childtable_prefix": "stb00_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 1,
      "data_source": "sample",
      "insert_mode": "taosc",
      "insert_rows": 10,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "multi_thread_write_one_tbl": "no",
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./tools/taosdemoAllTest/TD-4985/query-limit-offset.csv",
      "tags_file": "./tools/taosdemoAllTest/TD-4985/query-limit-offset.csv",
      "columns": [{"type": "INT","count":2}, {"type": "BINARY", "len": 16, "count":1}],
      "tags": [{"type": "INT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
@@ -1,192 +0,0 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
import time
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

        now = time.time()
        self.ts = int(round(now * 1000))

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root)-len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"

        # insert: create one or multiple tables per sql and insert multiple rows per sql
        # test case for https://jira.taosdata.com:18080/browse/TD-4985
        os.system("rm -rf tools/taosdemoAllTest/TD-4985/query-limit-offset.py.sql")
        os.system("%staosdemo -f tools/taosdemoAllTest/TD-4985/query-limit-offset.json -y " % binPath)
        tdSql.execute("use db")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 10000)

        for i in range(1000):
            tdSql.execute('''insert into stb00_9999 values(%d, %d, %d,'test99.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_8888 values(%d, %d, %d,'test98.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_7777 values(%d, %d, %d,'test97.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_6666 values(%d, %d, %d,'test96.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_5555 values(%d, %d, %d,'test95.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_4444 values(%d, %d, %d,'test94.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_3333 values(%d, %d, %d,'test93.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_2222 values(%d, %d, %d,'test92.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_1111 values(%d, %d, %d,'test91.%s')'''
                          % (self.ts + i, i, -10000+i, i))
            tdSql.execute('''insert into stb00_100 values(%d, %d, %d,'test90.%s')'''
                          % (self.ts + i, i, -10000+i, i))
        tdSql.query("select * from stb0 where c2 like 'test99%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_9999' limit 10" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_9999' limit 10 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test98%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_8888' limit 10" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_8888' limit 10 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test97%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_7777' limit 10" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_7777' limit 10 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test96%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_6666' limit 10" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_6666' limit 10 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test95%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_5555' limit 10" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_5555' limit 10 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test94%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_4444' limit 10" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_4444' limit 10 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test93%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_3333' limit 100" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_3333' limit 100 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test92%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_2222' limit 100" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_2222' limit 100 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test91%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_1111' limit 100" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_1111' limit 100 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)
        tdSql.query("select * from stb0 where c2 like 'test90%' ")
        tdSql.checkRows(1000)
        tdSql.query("select * from stb0 where tbname like 'stb00_100' limit 100" )
        tdSql.checkData(0, 1, 0)
        tdSql.checkData(1, 1, 1)
        tdSql.checkData(2, 1, 2)
        tdSql.query("select * from stb0 where tbname like 'stb00_100' limit 100 offset 5" )
        tdSql.checkData(0, 1, 5)
        tdSql.checkData(1, 1, 6)
        tdSql.checkData(2, 1, 7)


    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@ -1,703 +0,0 @@
|
||||||
###################################################################
|
|
||||||
# Copyright (c) 2016 by TAOS Technologies, Inc.
|
|
||||||
# All rights reserved.
|
|
||||||
#
|
|
||||||
# This file is proprietary and confidential to TAOS Technologies.
|
|
||||||
# No part of this file may be reproduced, stored, transmitted,
|
|
||||||
# disclosed or used in any form or by any means other than as
|
|
||||||
# expressly provided by the written permission from Jianhui Tao
|
|
||||||
#
|
|
||||||
###################################################################
|
|
||||||
|
|
||||||
# -*- coding: utf-8 -*-
|
|
||||||
|
|
||||||
import random
|
|
||||||
import string
|
|
||||||
import os
|
|
||||||
import time
|
|
||||||
from util.log import tdLog
|
|
||||||
from util.cases import tdCases
|
|
||||||
from util.sql import tdSql
|
|
||||||
from util.dnodes import tdDnodes
|
|
||||||
|
|
||||||
class TDTestCase:
|
|
||||||
updatecfgDict={'maxSQLLength':1048576}
|
|
||||||
def init(self, conn, logSql):
|
|
||||||
tdLog.debug("start to execute %s" % __file__)
|
|
||||||
tdSql.init(conn.cursor(), logSql)
|
|
||||||
|
|
||||||
self.ts = 1538548685000
|
|
||||||
self.num = 100
|
|
||||||
|
|
||||||
def get_random_string(self, length):
|
|
||||||
letters = string.ascii_lowercase
|
|
||||||
result_str = ''.join(random.choice(letters) for i in range(length))
|
|
||||||
return result_str
|
|
||||||
|
|
||||||
def run(self):
|
|
||||||
tdSql.prepare()
|
|
||||||
# test case for https://jira.taosdata.com:18080/browse/TD-5213
|
|
||||||
|
|
||||||
print("==============step1, regular table, 1 ts + 4094 cols + 1 binary==============")
|
|
||||||
startTime = time.time()
|
|
||||||
sql = "create table regular_table_1(ts timestamp, "
|
|
||||||
for i in range(4094):
|
|
||||||
sql += "col%d int, " % (i + 1)
|
|
||||||
sql += "col4095 binary(22))"
|
|
||||||
tdLog.info(len(sql))
|
|
||||||
tdSql.execute(sql)
|
|
||||||
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_1 values(%d, "
|
|
||||||
for j in range(4094):
|
|
||||||
str = "'%s', " % random.randint(0,1000)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_1")
|
|
||||||
tdSql.checkData(0, 0, self.num)
|
|
||||||
tdSql.query("select * from regular_table_1")
|
|
||||||
tdSql.checkRows(self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
|
|
||||||
endTime = time.time()
|
|
||||||
print("total time %ds" % (endTime - startTime))
|
|
||||||
|
|
||||||
#insert in order
|
|
||||||
tdLog.info('test insert in order')
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_1 (ts,col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col4095) values(%d, "
|
|
||||||
for j in range(10):
|
|
||||||
str = "'%s', " % random.randint(0,1000)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i + 1000))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_1")
|
|
||||||
tdSql.checkData(0, 0, 2*self.num)
|
|
||||||
tdSql.query("select * from regular_table_1")
|
|
||||||
tdSql.checkRows(2*self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
|
|
||||||
#insert out of order
|
|
||||||
tdLog.info('test insert out of order')
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_1 (ts,col123,col2213,col331,col41,col523,col236,col71,col813,col912,col1320,col4095) values(%d, "
|
|
||||||
for j in range(10):
|
|
||||||
str = "'%s', " % random.randint(0,1000)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i + 2000))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_1")
|
|
||||||
tdSql.checkData(0, 0, 3*self.num)
|
|
||||||
tdSql.query("select * from regular_table_1")
|
|
||||||
tdSql.checkRows(3*self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
|
|
||||||
|
|
||||||
print("==============step2,regular table error col or value==============")
|
|
||||||
tdLog.info('test regular table exceeds row num')
|
|
||||||
# column > 4096
|
|
||||||
sql = "create table regular_table_2(ts timestamp, "
|
|
||||||
for i in range(4095):
|
|
||||||
sql += "col%d int, " % (i + 1)
|
|
||||||
sql += "col4096 binary(22))"
|
|
||||||
tdLog.info(len(sql))
|
|
||||||
tdSql.error(sql)
|
|
||||||
|
|
||||||
# column > 4096
|
|
||||||
sql = "insert into regular_table_1 values(%d, "
|
|
||||||
for j in range(4095):
|
|
||||||
str = "'%s', " % random.randint(0,1000)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.error(sql)
|
|
||||||
|
|
||||||
# insert column < 4096
|
|
||||||
sql = "insert into regular_table_1 values(%d, "
|
|
||||||
for j in range(4092):
|
|
||||||
str = "'%s', " % random.randint(0,1000)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.error(sql)
|
|
||||||
|
|
||||||
# alter column > 4096
|
|
||||||
sql = "alter table regular_table_1 add column max int; "
|
|
||||||
tdSql.error(sql)
|
|
||||||
|
|
||||||
print("==============step3,regular table , mix data type==============")
|
|
||||||
startTime = time.time()
|
|
||||||
sql = "create table regular_table_3(ts timestamp, "
|
|
||||||
for i in range(2000):
|
|
||||||
sql += "col%d int, " % (i + 1)
|
|
||||||
for i in range(2000,4094):
|
|
||||||
sql += "col%d bigint, " % (i + 1)
|
|
||||||
sql += "col4095 binary(22))"
|
|
||||||
tdLog.info(len(sql))
|
|
||||||
tdSql.execute(sql)
|
|
||||||
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_3 values(%d, "
|
|
||||||
for j in range(4094):
|
|
||||||
str = "'%s', " % random.randint(0,1000)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_3")
|
|
||||||
tdSql.checkData(0, 0, self.num)
|
|
||||||
tdSql.query("select * from regular_table_3")
|
|
||||||
tdSql.checkRows(self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
|
|
||||||
endTime = time.time()
|
|
||||||
print("total time %ds" % (endTime - startTime))
|
|
||||||
|
|
||||||
sql = "create table regular_table_4(ts timestamp, "
|
|
||||||
for i in range(500):
|
|
||||||
sql += "int_%d int, " % (i + 1)
|
|
||||||
for i in range(500,1000):
|
|
||||||
sql += "smallint_%d smallint, " % (i + 1)
|
|
||||||
for i in range(1000,1500):
|
|
||||||
sql += "tinyint_%d tinyint, " % (i + 1)
|
|
||||||
for i in range(1500,2000):
|
|
||||||
sql += "double_%d double, " % (i + 1)
|
|
||||||
for i in range(2000,2500):
|
|
||||||
sql += "float_%d float, " % (i + 1)
|
|
||||||
for i in range(2500,3000):
|
|
||||||
sql += "bool_%d bool, " % (i + 1)
|
|
||||||
for i in range(3000,3500):
|
|
||||||
sql += "bigint_%d bigint, " % (i + 1)
|
|
||||||
for i in range(3500,3800):
|
|
||||||
sql += "nchar_%d nchar(4), " % (i + 1)
|
|
||||||
for i in range(3800,4090):
|
|
||||||
sql += "binary_%d binary(10), " % (i + 1)
|
|
||||||
for i in range(4090,4094):
|
|
||||||
sql += "timestamp_%d timestamp, " % (i + 1)
|
|
||||||
sql += "col4095 binary(22))"
|
|
||||||
tdLog.info(len(sql))
|
|
||||||
tdSql.execute(sql)
|
|
||||||
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_4 values(%d, "
|
|
||||||
for j in range(500):
|
|
||||||
str = "'%s', " % random.randint(-2147483647,2147483647)
|
|
||||||
sql += str
|
|
||||||
for j in range(500,1000):
|
|
||||||
str = "'%s', " % random.randint(-32767,32767 )
|
|
||||||
sql += str
|
|
||||||
for j in range(1000,1500):
|
|
||||||
str = "'%s', " % random.randint(-127,127)
|
|
||||||
sql += str
|
|
||||||
for j in range(1500,2000):
|
|
||||||
str = "'%s', " % random.randint(-922337203685477580700,922337203685477580700)
|
|
||||||
sql += str
|
|
||||||
for j in range(2000,2500):
|
|
||||||
str = "'%s', " % random.randint(-92233720368547758070,92233720368547758070)
|
|
||||||
sql += str
|
|
||||||
for j in range(2500,3000):
|
|
||||||
str = "'%s', " % random.choice(['true','false'])
|
|
||||||
sql += str
|
|
||||||
for j in range(3000,3500):
|
|
||||||
str = "'%s', " % random.randint(-9223372036854775807,9223372036854775807)
|
|
||||||
sql += str
|
|
||||||
for j in range(3500,3800):
|
|
||||||
str = "'%s', " % self.get_random_string(4)
|
|
||||||
sql += str
|
|
||||||
for j in range(3800,4090):
|
|
||||||
str = "'%s', " % self.get_random_string(10)
|
|
||||||
sql += str
|
|
||||||
for j in range(4090,4094):
|
|
||||||
str = "%s, " % (self.ts + j)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_4")
|
|
||||||
tdSql.checkData(0, 0, self.num)
|
|
||||||
tdSql.query("select * from regular_table_4")
|
|
||||||
tdSql.checkRows(self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
tdLog.info("end ,now new one")
|
|
||||||
|
|
||||||
#insert null value
|
|
||||||
tdLog.info('test insert null value')
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_4 values(%d, "
|
|
||||||
for j in range(2500):
|
|
||||||
str = "'%s', " % random.choice(['NULL' ,'NULL' ,'NULL' ,1 , 10 ,100 ,-100 ,-10, 88 ,66 ,'NULL' ,'NULL' ,'NULL' ])
|
|
||||||
sql += str
|
|
||||||
for j in range(2500,3000):
|
|
||||||
str = "'%s', " % random.choice(['true' ,'false'])
|
|
||||||
sql += str
|
|
||||||
for j in range(3000,3500):
|
|
||||||
str = "'%s', " % random.randint(-9223372036854775807,9223372036854775807)
|
|
||||||
sql += str
|
|
||||||
for j in range(3500,3800):
|
|
||||||
str = "'%s', " % self.get_random_string(4)
|
|
||||||
sql += str
|
|
||||||
for j in range(3800,4090):
|
|
||||||
str = "'%s', " % self.get_random_string(10)
|
|
||||||
sql += str
|
|
||||||
for j in range(4090,4094):
|
|
||||||
str = "%s, " % (self.ts + j)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i + 10000))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_4")
|
|
||||||
tdSql.checkData(0, 0, 2*self.num)
|
|
||||||
tdSql.query("select * from regular_table_4")
|
|
||||||
tdSql.checkRows(2*self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
|
|
||||||
#insert in order
|
|
||||||
tdLog.info('test insert in order')
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_4 (ts,int_2,int_22,int_169,smallint_537,smallint_607,tinyint_1030,tinyint_1491,double_1629,double_1808,float_2075,col4095) values(%d, "
|
|
||||||
for j in range(10):
|
|
||||||
str = "'%s', " % random.randint(0,100)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i + 1000))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_4")
|
|
||||||
tdSql.checkData(0, 0, 3*self.num)
|
|
||||||
tdSql.query("select * from regular_table_4")
|
|
||||||
tdSql.checkRows(3*self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
|
|
||||||
#insert out of order
|
|
||||||
tdLog.info('test insert out of order')
|
|
||||||
for i in range(self.num):
|
|
||||||
sql = "insert into regular_table_4 (ts,int_169,float_2075,int_369,tinyint_1491,tinyint_1030,float_2360,smallint_537,double_1808,double_1608,double_1629,col4095) values(%d, "
|
|
||||||
for j in range(10):
|
|
||||||
str = "'%s', " % random.randint(0,100)
|
|
||||||
sql += str
|
|
||||||
sql += "'%s')" % self.get_random_string(22)
|
|
||||||
tdSql.execute(sql % (self.ts + i + 2000))
|
|
||||||
time.sleep(1)
|
|
||||||
tdSql.query("select count(*) from regular_table_4")
|
|
||||||
tdSql.checkData(0, 0, 4*self.num)
|
|
||||||
tdSql.query("select * from regular_table_4")
|
|
||||||
tdSql.checkRows(4*self.num)
|
|
||||||
tdSql.checkCols(4096)
|
|
||||||
|
|
||||||
#define TSDB_MAX_BYTES_PER_ROW 49151[old:1024 && 16384]
|
|
||||||
#ts:8\int:4\smallint:2\bigint:8\bool:1\float:4\tinyint:1\nchar:4*()+2[offset]\binary:1*()+2[offset]
|
|
||||||
tdLog.info('test regular_table max bytes per row 49151')
|
|
||||||
sql = "create table regular_table_5(ts timestamp, "
|
|
||||||
for i in range(500):
|
|
            sql += "int_%d int, " % (i + 1)
        for i in range(500,1000):
            sql += "smallint_%d smallint, " % (i + 1)
        for i in range(1000,1500):
            sql += "tinyint_%d tinyint, " % (i + 1)
        for i in range(1500,2000):
            sql += "double_%d double, " % (i + 1)
        for i in range(2000,2500):
            sql += "float_%d float, " % (i + 1)
        for i in range(2500,3000):
            sql += "bool_%d bool, " % (i + 1)
        for i in range(3000,3500):
            sql += "bigint_%d bigint, " % (i + 1)
        for i in range(3500,3800):
            sql += "nchar_%d nchar(20), " % (i + 1)
        for i in range(3800,4090):
            sql += "binary_%d binary(34), " % (i + 1)
        for i in range(4090,4094):
            sql += "timestamp_%d timestamp, " % (i + 1)
        sql += "col4095 binary(69))"
        tdSql.execute(sql)
        tdSql.query("select * from regular_table_5")
        tdSql.checkCols(4096)

        # TD-5324
        sql = "alter table regular_table_5 modify column col4095 binary(70); "
        tdSql.error(sql)

        # drop and add
        sql = "alter table regular_table_5 drop column col4095; "
        tdSql.execute(sql)
        sql = "select * from regular_table_5; "
        tdSql.query(sql)
        tdSql.checkCols(4095)
        sql = "alter table regular_table_5 add column col4095 binary(70); "
        tdSql.error(sql)
        sql = "alter table regular_table_5 add column col4095 binary(69); "
        tdSql.execute(sql)
        sql = "select * from regular_table_5; "
        tdSql.query(sql)
        tdSql.checkCols(4096)

        # out of TSDB_MAX_BYTES_PER_ROW 49151
        tdLog.info('test regular_table max bytes per row out 49151')
        sql = "create table regular_table_6(ts timestamp, "
        for i in range(500):
            sql += "int_%d int, " % (i + 1)
        for i in range(500,1000):
            sql += "smallint_%d smallint, " % (i + 1)
        for i in range(1000,1500):
            sql += "tinyint_%d tinyint, " % (i + 1)
        for i in range(1500,2000):
            sql += "double_%d double, " % (i + 1)
        for i in range(2000,2500):
            sql += "float_%d float, " % (i + 1)
        for i in range(2500,3000):
            sql += "bool_%d bool, " % (i + 1)
        for i in range(3000,3500):
            sql += "bigint_%d bigint, " % (i + 1)
        for i in range(3500,3800):
            sql += "nchar_%d nchar(20), " % (i + 1)
        for i in range(3800,4090):
            sql += "binary_%d binary(34), " % (i + 1)
        for i in range(4090,4094):
            sql += "timestamp_%d timestamp, " % (i + 1)
        sql += "col4095 binary(70))"
        tdLog.info(len(sql))
        tdSql.error(sql)

        print("==============step4, super table, 1 ts + 4090 cols + 4 tags ==============")
        startTime = time.time()
        sql = "create stable stable_1(ts timestamp, "
        for i in range(4090):
            sql += "col%d int, " % (i + 1)
        sql += "col4091 binary(22))"
        sql += " tags (loc nchar(10),tag_1 int,tag_2 int,tag_3 int) "
        tdLog.info(len(sql))
        tdSql.execute(sql)
        sql = '''create table table_0 using stable_1
                tags('table_0' , '1' , '2' , '3' );'''
        tdSql.execute(sql)

        for i in range(self.num):
            sql = "insert into table_0 values(%d, "
            for j in range(4090):
                sql += "'%s', " % random.randint(0,1000)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i))
        time.sleep(1)
        tdSql.query("select count(*) from table_0")
        tdSql.checkData(0, 0, self.num)
        tdSql.query("select * from table_0")
        tdSql.checkRows(self.num)
        tdSql.checkCols(4092)

        sql = '''create table table_1 using stable_1
                tags('table_1' , '1' , '2' , '3' );'''
        tdSql.execute(sql)

        for i in range(self.num):
            sql = "insert into table_1 values(%d, "
            for j in range(2080):
                sql += "'%d', " % random.randint(0,1000)
            for j in range(2080,4080):
                sql += "'%s', " % 'NULL'
            for j in range(4080,4090):
                sql += "'%s', " % random.randint(0,10000)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i))
        time.sleep(1)
        tdSql.query("select count(*) from table_1")
        tdSql.checkData(0, 0, self.num)
        tdSql.query("select * from table_1")
        tdSql.checkRows(self.num)
        tdSql.checkCols(4092)

        endTime = time.time()
        print("total time %ds" % (endTime - startTime))

        # insert in order
        tdLog.info('test insert in order')
        for i in range(self.num):
            sql = "insert into table_1 (ts,col1,col2,col3,col4,col5,col6,col7,col8,col9,col10,col4091) values(%d, "
            for j in range(10):
                sql += "'%s', " % random.randint(0,1000)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i + 1000))
        time.sleep(1)
        tdSql.query("select count(*) from table_1")
        tdSql.checkData(0, 0, 2*self.num)
        tdSql.query("select * from table_1")
        tdSql.checkRows(2*self.num)
        tdSql.checkCols(4092)

        # insert out of order
        tdLog.info('test insert out of order')
        for i in range(self.num):
            sql = "insert into table_1 (ts,col123,col2213,col331,col41,col523,col236,col71,col813,col912,col1320,col4091) values(%d, "
            for j in range(10):
                sql += "'%s', " % random.randint(0,1000)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i + 2000))
        time.sleep(1)
        tdSql.query("select count(*) from table_1")
        tdSql.checkData(0, 0, 3*self.num)
        tdSql.query("select * from table_1")
        tdSql.checkRows(3*self.num)
        tdSql.checkCols(4092)

        print("==============step5, stable table, mix data type==============")
        sql = "create stable stable_3(ts timestamp, "
        for i in range(500):
            sql += "int_%d int, " % (i + 1)
        for i in range(500,1000):
            sql += "smallint_%d smallint, " % (i + 1)
        for i in range(1000,1500):
            sql += "tinyint_%d tinyint, " % (i + 1)
        for i in range(1500,2000):
            sql += "double_%d double, " % (i + 1)
        for i in range(2000,2500):
            sql += "float_%d float, " % (i + 1)
        for i in range(2500,3000):
            sql += "bool_%d bool, " % (i + 1)
        for i in range(3000,3500):
            sql += "bigint_%d bigint, " % (i + 1)
        for i in range(3500,3800):
            sql += "nchar_%d nchar(4), " % (i + 1)
        for i in range(3800,4090):
            sql += "binary_%d binary(10), " % (i + 1)
        sql += "col4091 binary(22))"
        sql += " tags (loc nchar(10),tag_1 int,tag_2 int,tag_3 int) "
        tdLog.info(len(sql))
        tdSql.execute(sql)
        sql = '''create table table_30 using stable_3
                tags('table_30' , '1' , '2' , '3' );'''
        tdSql.execute(sql)

        for i in range(self.num):
            sql = "insert into table_30 values(%d, "
            for j in range(500):
                sql += "'%s', " % random.randint(-2147483647,2147483647)
            for j in range(500,1000):
                sql += "'%s', " % random.randint(-32767,32767)
            for j in range(1000,1500):
                sql += "'%s', " % random.randint(-127,127)
            for j in range(1500,2000):
                sql += "'%s', " % random.randint(-922337203685477580700,922337203685477580700)
            for j in range(2000,2500):
                sql += "'%s', " % random.randint(-92233720368547758070,92233720368547758070)
            for j in range(2500,3000):
                sql += "'%s', " % random.choice(['true','false'])
            for j in range(3000,3500):
                sql += "'%s', " % random.randint(-9223372036854775807,9223372036854775807)
            for j in range(3500,3800):
                sql += "'%s', " % self.get_random_string(4)
            for j in range(3800,4090):
                sql += "'%s', " % self.get_random_string(10)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i))
        time.sleep(1)
        tdSql.query("select count(*) from table_30")
        tdSql.checkData(0, 0, self.num)
        tdSql.query("select * from table_30")
        tdSql.checkRows(self.num)
        tdSql.checkCols(4092)

        # insert null value
        tdLog.info('test insert null value')
        sql = '''create table table_31 using stable_3
                tags('table_31' , '1' , '2' , '3' );'''
        tdSql.execute(sql)

        for i in range(self.num):
            sql = "insert into table_31 values(%d, "
            for j in range(2500):
                sql += "'%s', " % random.choice(['NULL','NULL','NULL',1,10,100,-100,-10,88,66,'NULL','NULL','NULL'])
            for j in range(2500,3000):
                sql += "'%s', " % random.choice(['true','false'])
            for j in range(3000,3500):
                sql += "'%s', " % random.randint(-9223372036854775807,9223372036854775807)
            for j in range(3500,3800):
                sql += "'%s', " % self.get_random_string(4)
            for j in range(3800,4090):
                sql += "'%s', " % self.get_random_string(10)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i))
        time.sleep(1)
        tdSql.query("select count(*) from table_31")
        tdSql.checkData(0, 0, self.num)
        tdSql.query("select * from table_31")
        tdSql.checkRows(self.num)
        tdSql.checkCols(4092)

        # insert in order
        tdLog.info('test insert in order')
        for i in range(self.num):
            sql = "insert into table_31 (ts,int_2,int_22,int_169,smallint_537,smallint_607,tinyint_1030,tinyint_1491,double_1629,double_1808,float_2075,col4091) values(%d, "
            for j in range(10):
                sql += "'%s', " % random.randint(0,100)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i + 1000))
        time.sleep(1)
        tdSql.query("select count(*) from table_31")
        tdSql.checkData(0, 0, 2*self.num)
        tdSql.query("select * from table_31")
        tdSql.checkRows(2*self.num)
        tdSql.checkCols(4092)

        # insert out of order
        tdLog.info('test insert out of order')
        for i in range(self.num):
            sql = "insert into table_31 (ts,int_169,float_2075,int_369,tinyint_1491,tinyint_1030,float_2360,smallint_537,double_1808,double_1608,double_1629,col4091) values(%d, "
            for j in range(10):
                sql += "'%s', " % random.randint(0,100)
            sql += "'%s')" % self.get_random_string(22)
            tdSql.execute(sql % (self.ts + i + 2000))
        time.sleep(1)
        tdSql.query("select count(*) from table_31")
        tdSql.checkData(0, 0, 3*self.num)
        tdSql.query("select * from table_31")
        tdSql.checkRows(3*self.num)
        tdSql.checkCols(4092)

        # define TSDB_MAX_BYTES_PER_ROW 49151, TSDB_MAX_TAGS_LEN 16384
        # ts:8\int:4\smallint:2\bigint:8\bool:1\float:4\tinyint:1\nchar:4*()+2[offset]\binary:1*()+2[offset]
        tdLog.info('test super table max bytes per row 49151')
        sql = "create table stable_4(ts timestamp, "
        for i in range(500):
            sql += "int_%d int, " % (i + 1)
        for i in range(500,1000):
            sql += "smallint_%d smallint, " % (i + 1)
        for i in range(1000,1500):
            sql += "tinyint_%d tinyint, " % (i + 1)
        for i in range(1500,2000):
            sql += "double_%d double, " % (i + 1)
        for i in range(2000,2500):
            sql += "float_%d float, " % (i + 1)
        for i in range(2500,3000):
            sql += "bool_%d bool, " % (i + 1)
        for i in range(3000,3500):
            sql += "bigint_%d bigint, " % (i + 1)
        for i in range(3500,3800):
            sql += "nchar_%d nchar(20), " % (i + 1)
        for i in range(3800,4090):
            sql += "binary_%d binary(34), " % (i + 1)
        sql += "col4091 binary(101))"
        sql += " tags (loc nchar(10),tag_1 int,tag_2 int,tag_3 int) "
        tdSql.execute(sql)
        sql = '''create table table_40 using stable_4
                tags('table_40' , '1' , '2' , '3' );'''
        tdSql.execute(sql)
        tdSql.query("select * from table_40")
        tdSql.checkCols(4092)
        tdSql.query("describe table_40")
        tdSql.checkRows(4096)

        tdLog.info('test super table drop and add column or tag')
        sql = "alter stable stable_4 drop column col4091; "
        tdSql.execute(sql)
        sql = "select * from stable_4; "
        tdSql.query(sql)
        tdSql.checkCols(4095)
        sql = "alter table stable_4 add column col4091 binary(102); "
        tdSql.error(sql)
        sql = "alter table stable_4 add column col4091 binary(101); "
        tdSql.execute(sql)
        sql = "select * from stable_4; "
        tdSql.query(sql)
        tdSql.checkCols(4096)

        sql = "alter stable stable_4 drop tag tag_1; "
        tdSql.execute(sql)
        sql = "select * from stable_4; "
        tdSql.query(sql)
        tdSql.checkCols(4095)
        sql = "alter table stable_4 add tag tag_1 int; "
        tdSql.execute(sql)
        sql = "select * from stable_4; "
        tdSql.query(sql)
        tdSql.checkCols(4096)
        sql = "alter table stable_4 add tag loc1 nchar(10); "
        tdSql.error(sql)

        tdLog.info('test super table max bytes per row 49151')
        sql = "create table stable_5(ts timestamp, "
        for i in range(500):
            sql += "int_%d int, " % (i + 1)
        for i in range(500,1000):
            sql += "smallint_%d smallint, " % (i + 1)
        for i in range(1000,1500):
            sql += "tinyint_%d tinyint, " % (i + 1)
        for i in range(1500,2000):
            sql += "double_%d double, " % (i + 1)
        for i in range(2000,2500):
            sql += "float_%d float, " % (i + 1)
        for i in range(2500,3000):
            sql += "bool_%d bool, " % (i + 1)
        for i in range(3000,3500):
            sql += "bigint_%d bigint, " % (i + 1)
        for i in range(3500,3800):
            sql += "nchar_%d nchar(20), " % (i + 1)
        for i in range(3800,4090):
            sql += "binary_%d binary(34), " % (i + 1)
        sql += "col4091 binary(102))"
        sql += " tags (loc nchar(10),tag_1 int,tag_2 int,tag_3 int) "
        tdSql.error(sql)

        print("==============step6, super table error col ==============")
        tdLog.info('test exceeds row num')
        # column + tag > 4096
        sql = "create stable stable_2(ts timestamp, "
        for i in range(4091):
            sql += "col%d int, " % (i + 1)
        sql += "col4092 binary(22))"
        sql += " tags (loc nchar(10),tag_1 int,tag_2 int,tag_3 int) "
        tdLog.info(len(sql))
        tdSql.error(sql)

        # column + tag > 4096
        sql = "create stable stable_2(ts timestamp, "
        for i in range(4090):
            sql += "col%d int, " % (i + 1)
        sql += "col4091 binary(22))"
        sql += " tags (loc nchar(10),tag_1 int,tag_2 int,tag_3 int,tag_4 int) "
        tdLog.info(len(sql))
        tdSql.error(sql)

        # alter column + tag > 4096
        sql = "alter table stable_1 add column max int; "
        tdSql.error(sql)
        # TD-5322
        sql = "alter table stable_1 add tag max int; "
        tdSql.error(sql)
        # TD-5324
        sql = "alter table stable_4 modify column col4091 binary(102); "
        tdSql.error(sql)
        sql = "alter table stable_4 modify tag loc nchar(20); "
        tdSql.query("select * from table_40")
        tdSql.checkCols(4092)
        tdSql.query("describe table_40")
        tdSql.checkRows(4096)

        os.system("rm -rf tools/taosdemoAllTest/TD-5213/insert4096columns_not_use_taosdemo.py.sql")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
|
File diff suppressed because one or more lines are too long
@ -1,137 +0,0 @@
{
    "filetype": "insert",
    "cfgdir": "/etc/taos",
    "host": "127.0.0.1",
    "port": 6030,
    "user": "root",
    "password": "taosdata",
    "thread_count": 10,
    "create_table_thread_count": 10,
    "result_file": "./insert_res.txt",
    "confirm_parameter_prompt": "no",
    "insert_interval": 0,
    "interlace_rows": 10,
    "num_of_records_per_req": 1,
    "max_sql_len": 102400000,
    "databases": [{
        "dbinfo": {
            "name": "json",
            "drop": "yes",
            "replica": 1,
            "cache": 50,
            "precision": "ms",
            "keep": 365,
            "minRows": 100,
            "maxRows": 4096,
            "comp": 2
        },
        "super_tables": [{
            "name": "stb_old",
            "child_table_exists": "no",
            "childtable_count": 1,
            "childtable_prefix": "stb_old_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 5,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 10,
            "childtable_limit": 0,
            "childtable_offset": 0,
            "multi_thread_write_one_tbl": "no",
            "interlace_rows": 0,
            "insert_interval": 0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1,
            "start_timestamp": "2020-10-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/TD-5213/insertSigcolumnsNum4096.csv",
            "tags_file": "",
            "columns": [{"type": "INT", "count": 1000}, {"type": "BINARY", "len": 16, "count": 20}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 1}]
        },{
            "name": "stb_new",
            "child_table_exists": "no",
            "childtable_count": 1,
            "childtable_prefix": "stb_new_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 5,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 10,
            "childtable_limit": 0,
            "childtable_offset": 0,
            "multi_thread_write_one_tbl": "no",
            "interlace_rows": 0,
            "insert_interval": 0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1,
            "start_timestamp": "2020-10-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/sample.csv",
            "tags_file": "",
            "columns": [{"type": "INT", "count": 4000}, {"type": "BINARY", "len": 16, "count": 90}],
            "tags": [{"type": "TINYINT", "count": 2}, {"type": "BINARY", "len": 16, "count": 3}]
        },{
            "name": "stb_mix",
            "child_table_exists": "no",
            "childtable_count": 1,
            "childtable_prefix": "stb_mix_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 5,
            "data_source": "rand",
            "insert_mode": "taosc",
            "insert_rows": 10,
            "childtable_limit": 0,
            "childtable_offset": 0,
            "multi_thread_write_one_tbl": "no",
            "interlace_rows": 0,
            "insert_interval": 0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1,
            "start_timestamp": "2020-10-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/sample.csv",
            "tags_file": "",
            "columns": [{"type": "INT", "count": 500}, {"type": "SMALLINT", "count": 500}, {"type": "TINYINT", "count": 500}, {"type": "DOUBLE", "count": 500}, {"type": "FLOAT", "count": 500}, {"type": "BOOL", "count": 500}, {"type": "BIGINT", "count": 500}, {"type": "NCHAR", "len": 20, "count": 300}, {"type": "BINARY", "len": 34, "count": 290}, {"type": "BINARY", "len": 101, "count": 1}],
            "tags": [{"type": "INT", "count": 3}, {"type": "NCHAR", "len": 10, "count": 1}]
        },{
            "name": "stb_excel",
            "child_table_exists": "no",
            "childtable_count": 1,
            "childtable_prefix": "stb_excel_",
            "auto_create_table": "no",
            "batch_create_tbl_num": 5,
            "data_source": "sample",
            "insert_mode": "taosc",
            "insert_rows": 10,
            "childtable_limit": 0,
            "childtable_offset": 0,
            "multi_thread_write_one_tbl": "no",
            "interlace_rows": 0,
            "insert_interval": 0,
            "max_sql_len": 1024000,
            "disorder_ratio": 0,
            "disorder_range": 1000,
            "timestamp_step": 1,
            "start_timestamp": "2020-10-01 00:00:00.000",
            "sample_format": "csv",
            "sample_file": "./tools/taosdemoAllTest/TD-5213/insertSigcolumnsNum4096.csv",
            "tags_file": "",
            "columns": [{"type": "INT", "count": 500}, {"type": "SMALLINT", "count": 500}, {"type": "SMALLINT", "count": 500}, {"type": "DOUBLE", "count": 500}, {"type": "FLOAT", "count": 500}, {"type": "BOOL", "count": 500}, {"type": "BIGINT", "count": 500}, {"type": "NCHAR", "len": 19, "count": 300}, {"type": "BINARY", "len": 34, "count": 290}, {"type": "BINARY", "len": 101, "count": 1}],
            "tags": [{"type": "INT", "count": 3}, {"type": "NCHAR", "len": 10, "count": 1}]
        }]
    }]
}
Some files were not shown because too many files have changed in this diff