Merge branch 'main' into merge/mainto3.0

This commit is contained in:
Shengliang Guan 2025-01-19 11:19:14 +08:00
commit 585e639328
54 changed files with 1222 additions and 479 deletions

26
.github/CODEOWNERS vendored Normal file
View File

@ -0,0 +1,26 @@
# reference
# https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-code-owners
# merge team
# @guanshengliang Shengliang Guan
# @zitsen Linhe Huo
# @wingwing2005 Ya Qiang Li
# @feici02 WANG Xu
# @hzcheng Hongze Cheng
# @dapan1121 Pan Wei
# @sheyanjie-qq She Yanjie
# @pigzhou ZacharyZhou
* @taosdata/merge
/.github/ @feici02
/cmake/ @guanshengliang
/contrib/ @guanshengliang
/deps/ @guanshengliang
/docs/ @guanshengliang @zitsen
/examples/ @guanshengliang @zitsen
/include/ @guanshengliang @hzcheng @dapan1121
/packaging/ @feici02
/source/ @guanshengliang @hzcheng @dapan1121
/tests/ @guanshengliang @zitsen
/tools/ @guanshengliang @zitsen
/utils/ @guanshengliang

394
README.md
View File

@ -26,24 +26,33 @@ English | [简体中文](README-CN.md) | [TDengine Cloud](https://cloud.tdengine
# Table of Contents
1. [What is TDengine?](#1-what-is-tdengine)
2. [Documentation](#2-documentation)
3. [Building](#3-building)
1. [Install build tools](#31-install-build-tools)
1. [Get the source codes](#32-get-the-source-codes)
1. [Special Note](#33-special-note)
1. [Build TDengine](#34-build-tdengine)
4. [Installing](#4-installing)
1. [On Linux platform](#41-on-linux-platform)
1. [On Windows platform](#42-on-windows-platform)
1. [On macOS platform](#43-on-macos-platform)
1. [Quick Run](#44-quick-run)
5. [Try TDengine](#5-try-tdengine)
6. [Developing with TDengine](#6-developing-with-tdengine)
7. [Contribute to TDengine](#7-contribute-to-tdengine)
8. [Join the TDengine Community](#8-join-the-tdengine-community)
1. [Introduction](#1-introduction)
1. [Documentation](#2-documentation)
1. [Prerequisites](#3-prerequisites)
- [3.1 Prerequisites On Linux](#31-on-linux)
- [3.2 Prerequisites On macOS](#32-on-macos)
- [3.3 Prerequisites On Windows](#33-on-windows)
- [3.4 Clone the repo](#34-clone-the-repo)
1. [Building](#4-building)
- [4.1 Build on Linux](#41-build-on-linux)
- [4.2 Build on macOS](#42-build-on-macos)
- [4.3 Build On Windows](#43-build-on-windows)
1. [Packaging](#5-packaging)
1. [Installation](#6-installation)
- [6.1 Install on Linux](#61-install-on-linux)
- [6.2 Install on macOS](#62-install-on-macos)
- [6.3 Install on Windows](#63-install-on-windows)
1. [Running](#7-running)
- [7.1 Run TDengine on Linux](#71-run-tdengine-on-linux)
- [7.2 Run TDengine on macOS](#72-run-tdengine-on-macos)
- [7.3 Run TDengine on Windows](#73-run-tdengine-on-windows)
1. [Testing](#8-testing)
1. [Releasing](#9-releasing)
1. [Workflow](#10-workflow)
1. [Coverage](#11-coverage)
1. [Contributing](#12-contributing)
# 1. What is TDengine
# 1. Introduction
TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other time-series databases with the following advantages:
@ -65,132 +74,91 @@ For a full list of TDengine competitive advantages, please [check here](https://
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
# 3. Building
# 3. Prerequisites
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Right now we do not support building in a cross-compiling environment.
## 3.1 On Linux
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
<details>
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were once part of TDengine. By default, building TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to compile them together with TDengine.
<summary>Install required tools on Linux</summary>
To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or a higher version in the project directory.
## 3.1 Install build tools
### Ubuntu 18.04 and above or Debian
### For Ubuntu 18.04, 20.04, 22.04
```bash
sudo apt-get install -y gcc cmake build-essential git libssl-dev libgflags2.2 libgflags-dev
sudo apt-get update
sudo apt-get install -y gcc cmake build-essential git libjansson-dev \
libsnappy-dev liblzma-dev zlib1g-dev pkg-config
```
#### Install build dependencies for taosTools
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
### For CentOS 8
```bash
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config
```
### CentOS 7.9
```bash
sudo yum install epel-release
sudo yum update
sudo yum install -y gcc gcc-c++ make cmake3 gflags git openssl-devel
sudo ln -sf /usr/bin/cmake3 /usr/bin/cmake
yum install -y epel-release gcc gcc-c++ make cmake git perl dnf-plugins-core
yum config-manager --set-enabled powertools
yum install -y zlib-static xz-devel snappy-devel jansson-devel pkgconfig libatomic-static libstdc++-static
```
### CentOS 8/Fedora/Rocky Linux
</details>
## 3.2 On macOS
<details>
<summary>Install required tools on macOS</summary>
Please install the dependencies with [brew](https://brew.sh/).
```bash
sudo dnf install -y gcc gcc-c++ make cmake epel-release gflags git openssl-devel
```
#### Install build dependencies for taosTools on CentOS
#### CentOS 7.9
```
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
#### CentOS 8/Fedora/Rocky Linux
```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel zlib-static xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libatomic-static libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake reports that libsnappy cannot be found. However, snappy still works well.
If the PowerTools installation fails, you can try to use:
```
sudo yum config-manager --set-enabled powertools
```
#### For CentOS + devtoolset
Besides the above dependencies, please run the following commands:
```
sudo yum install centos-release-scl
sudo yum install devtoolset-9 devtoolset-9-libatomic-devel
scl enable devtoolset-9 -- bash
```
### macOS
```
brew install argp-standalone gflags pkgconfig
```
### Setup golang environment
</details>
TDengine includes a few components, such as taosAdapter, that are developed in Go. Please refer to the official documentation at golang.org for Go environment setup.
## 3.3 On Windows
Please use Go 1.20 or above. For users in China, we recommend using a proxy to accelerate package downloads.
<details>
```
go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
<summary>Install required tools on Windows</summary>
By default, taosAdapter is not built, but you can use the following command to build taosAdapter as the service for the RESTful interface.
Work in Progress.
```
cmake .. -DBUILD_HTTP=false
```
</details>
### Setup rust environment
## 3.4 Clone the repo
TDengine includes a few components developed in Rust. Please refer to the official documentation at rust-lang.org for Rust environment setup.
<details>
## 3.2 Get the source codes
<summary>Clone the repo</summary>
First of all, you may clone the source code from GitHub:
Clone the repository to the target machine:
```bash
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
You can modify the file `~/.gitconfig` to use the SSH protocol instead of HTTPS for better download speed. You will need to upload your SSH public key to GitHub first. Please refer to the GitHub official documentation for details.
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
> **NOTE:**
> TDengine Connectors can be found in following repositories: [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust).
## 3.3 Special Note
</details>
The [JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.
# 4. Building
## 3.4 Build TDengine
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Right now we do not support building in a cross-compiling environment.
### On Linux platform
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/deploy-in-docker/), [installation package](https://docs.tdengine.com/get-started/deploy-from-package/) or [Kubernetes](https://docs.tdengine.com/operations-and-maintenance/deploy-your-cluster/#kubernetes-deployment). This quick guide only applies to installing from source.
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump. They were once part of TDengine. By default, building TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to compile them together with TDengine.
To build TDengine, use [CMake](https://cmake.org/) 3.13.0 or higher versions in the project directory.
## 4.1 Build on Linux
<details>
<summary>Detailed steps to build on Linux</summary>
You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below:
@ -201,29 +169,46 @@ You can run the bash script `build.sh` to build both TDengine and taosTools incl
It is equivalent to executing the following commands:
```bash
mkdir debug
cd debug
mkdir debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
make
```
You can use Jemalloc as the memory allocator instead of glibc:
```bash
apt install autoconf
cmake .. -DJEMALLOC_ENABLED=true
```
TDengine build script can detect the host machine's architecture on X86-64, X86, arm64 platform.
You can also specify CPUTYPE option like aarch64 too if the detection result is not correct:
The TDengine build script can auto-detect the host machine's architecture on x86, x86-64, and arm64 platforms.
You can also specify the architecture manually with the CPUTYPE option, for example aarch64:
```bash
cmake .. -DCPUTYPE=aarch64 && cmake --build .
```
### On Windows platform
</details>
## 4.2 Build on macOS
<details>
<summary>Detailed steps to build on macOS</summary>
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
</details>
## 4.3 Build on Windows
<details>
<summary>Detailed steps to build on Windows</summary>
If you use Visual Studio 2013, please open a command window by executing "cmd.exe".
Please specify "amd64" for 64-bit Windows or "x86" for 32-bit Windows when you execute vcvarsall.bat.
@ -254,31 +239,67 @@ mkdir debug && cd debug
cmake .. -G "NMake Makefiles"
nmake
```
</details>
### On macOS platform
# 5. Packaging
Please install XCode command line tools and cmake. Verified with XCode 11.4+ on Catalina and Big Sur.
The TDengine community installer cannot be created from this repository alone, due to some component dependencies. We are still working on this improvement.
```shell
mkdir debug && cd debug
cmake .. && cmake --build .
```
# 6. Installation
# 4. Installing
## 6.1 Install on Linux
## 4.1 On Linux platform
<details>
After building successfully, TDengine can be installed by
<summary>Detailed steps to install on Linux</summary>
After building successfully, TDengine can be installed by:
```bash
sudo make install
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/deploy-from-package/) for it.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it.
</details>
To start the service after installation, in a terminal, use:
## 6.2 Install on macOS
<details>
<summary>Detailed steps to install on macOS</summary>
After building successfully, TDengine can be installed by:
```bash
sudo make install
```
</details>
## 6.3 Install on Windows
<details>
<summary>Detailed steps to install on Windows</summary>
After building successfully, TDengine can be installed by:
```cmd
nmake install
```
</details>
# 7. Running
## 7.1 Run TDengine on Linux
<details>
<summary>Detailed steps to run on Linux</summary>
To start the service after installation on Linux, in a terminal, use:
```bash
sudo systemctl start taosd
@ -292,27 +313,29 @@ taos
If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## 4.2 On Windows platform
After building successfully, TDengine can be installed by:
```cmd
nmake install
```
## 4.3 On macOS platform
After building successfully, TDengine can be installed by:
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows is `taosd.exe`):
```bash
sudo make install
./build/bin/taosd -c test/cfg
```
Users can find more information about directories installed on the system in the [directory and files](https://docs.tdengine.com/reference/directory/) section.
In another terminal, use the TDengine CLI to connect the server:
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.tdengine.com/get-started/package/) for it.
```bash
./build/bin/taos -c test/cfg
```
To start the service after installation, double-click /applications/TDengine to start the program, or, in a terminal, use:
Option `-c test/cfg` specifies the system configuration file directory.
</details>
## 7.2 Run TDengine on macOS
<details>
<summary>Detailed steps to run on macOS</summary>
To start the service after installation on macOS, double-click /applications/TDengine to start the program, or, in a terminal, use:
```bash
sudo launchctl start com.tdengine.taosd
@ -326,64 +349,63 @@ taos
If TDengine CLI connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## 4.4 Quick Run
</details>
If you don't want to run TDengine as a service, you can run it in the current shell. For example, to quickly start a TDengine server after building, run the command below in a terminal (we take Linux as an example; the command on Windows is `taosd.exe`):
```bash
./build/bin/taosd -c test/cfg
```
## 7.3 Run TDengine on Windows
<details>
<summary>Detailed steps to run on Windows</summary>
You can start the TDengine server on Windows with the commands below:
```cmd
.\build\bin\taosd.exe -c test\cfg
```
In another terminal, use the TDengine CLI to connect the server:
```bash
./build/bin/taos -c test/cfg
```
```cmd
.\build\bin\taos.exe -c test\cfg
```
option "-c test/cfg" specifies the system configuration file directory.
# 5. Try TDengine
</details>
It is easy to run SQL commands from the TDengine CLI, just as with other SQL databases.
# 8. Testing
```sql
CREATE DATABASE demo;
USE demo;
CREATE TABLE t (ts TIMESTAMP, speed INT);
INSERT INTO t VALUES('2019-07-15 00:00:00', 10);
INSERT INTO t VALUES('2019-07-15 01:00:00', 20);
SELECT * FROM t;
ts | speed |
===================================
19-07-15 00:00:00.000| 10|
19-07-15 01:00:00.000| 20|
Query OK, 2 row(s) in set (0.001700s)
```
For how to run different types of tests on TDengine, please see [Testing TDengine](./tests/README.md).
# 9. Releasing
For the complete list of TDengine Releases, please see [Releases](https://github.com/taosdata/TDengine/releases).
# 10. Workflow
The TDengine build check workflow can be found in this [GitHub Action](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml). More workflows will be available soon.
# 11. Coverage
The latest TDengine test coverage report can be found on [coveralls.io](https://coveralls.io/github/taosdata/TDengine).
<details>
<summary>How to run the coverage report locally?</summary>
To create the test coverage report (in HTML format) locally, please run the following commands:
```bash
cd tests
bash setup-lcov.sh -v 1.16 && ./run_local_coverage.sh -b main -c task
# run on the main branch with the cases in longtimeruning_cases.task
# for more information about the options, please refer to ./run_local_coverage.sh -h
```
> **NOTE:**
> Please note that the -b and -i options will recompile TDengine with the -DCOVER=true option, which may take a considerable amount of time.
# 6. Developing with TDengine
</details>
## Official Connectors
# 12. Contributing
TDengine provides abundant development tools for users to build applications on TDengine. Follow the links below to find your desired connector and its documentation; a minimal connection sketch follows the list.
- [Java](https://docs.tdengine.com/reference/connectors/java/)
- [C/C++](https://docs.tdengine.com/reference/connectors/cpp/)
- [Python](https://docs.tdengine.com/reference/connectors/python/)
- [Go](https://docs.tdengine.com/reference/connectors/go/)
- [Node.js](https://docs.tdengine.com/reference/connectors/node/)
- [Rust](https://docs.tdengine.com/reference/connectors/rust/)
- [C#](https://docs.tdengine.com/reference/connectors/csharp/)
- [RESTful API](https://docs.tdengine.com/reference/connectors/rest-api/)
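Before picking a specific connector, it may help to see the common JDBC flow end to end. The following is a minimal sketch of opening a connection through the Java connector over the REST interface and issuing a query; the URL, host, port, and default credentials (`root`/`taosdata`) are assumptions for a locally running taosAdapter, not something specified in this document.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ConnectSketch {
    public static void main(String[] args) throws Exception {
        // Assumed REST URL and default credentials for a local taosAdapter.
        String url = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT SERVER_VERSION()")) {
            while (rs.next()) {
                System.out.println("server version: " + rs.getString(1));
            }
        }
    }
}
```

Switching the URL scheme (for example to `jdbc:TAOS-WS://`) selects a different connection type; see the Java connector documentation linked above for details.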
# 7. Contribute to TDengine
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project.
# 8. Join the TDengine Community
For more information about TDengine, you can follow us on social media and join our Discord server:
- [Discord](https://discord.com/invite/VZdSuUg4pS)
- [Twitter](https://twitter.com/TDengineDB)
- [LinkedIn](https://www.linkedin.com/company/tdengine/)
- [YouTube](https://www.youtube.com/@tdengine)
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to TDengine.

View File

@ -109,7 +109,7 @@ If you are using Maven to manage your project, simply add the following dependen
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
```

View File

@ -15,6 +15,19 @@ When inserting data using parameter binding, it can avoid the resource consumpti
**Tips: It is recommended to use parameter binding for data insertion**
:::note
We recommend using only the following two forms of SQL for parameter-binding data insertion:
```sql
a. The subtable already exists:
1. INSERT INTO meters (tbname, ts, current, voltage, phase) VALUES(?, ?, ?, ?, ?)
b. Automatic table creation on insert:
1. INSERT INTO meters (tbname, ts, current, voltage, phase, location, group_id) VALUES(?, ?, ?, ?, ?, ?, ?)
2. INSERT INTO ? USING meters TAGS (?, ?) VALUES (?, ?, ?, ?)
```
:::
Next, we continue with the smart meters example to demonstrate efficient writing with parameter binding in the various language connectors (a minimal JDBC sketch follows these steps):
1. Prepare a parameterized SQL insert statement for inserting data into the supertable `meters`. This statement allows dynamically specifying subtable names, tags, and column values.
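For example, here is a minimal JDBC sketch of form (a) above, binding one row into `INSERT INTO meters (tbname, ts, current, voltage, phase) VALUES(?, ?, ?, ?, ?)`. The WebSocket URL, the credentials, the `power` database, and the assumption that the subtable `d1001` and the supertable `meters` already exist are illustrative only; other connectors follow the same pattern with their own binding APIs.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class BindExistingSubtable {
    public static void main(String[] args) throws Exception {
        // Assumed WebSocket URL, default credentials, and an existing `power` database.
        String url = "jdbc:TAOS-WS://localhost:6041/power?user=root&password=taosdata";
        String sql = "INSERT INTO meters (tbname, ts, current, voltage, phase) VALUES(?, ?, ?, ?, ?)";
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, "d1001");                                       // subtable name
            ps.setTimestamp(2, new Timestamp(System.currentTimeMillis())); // ts
            ps.setFloat(3, 10.3f);                                          // current
            ps.setInt(4, 219);                                              // voltage
            ps.setFloat(5, 0.31f);                                          // phase
            System.out.println("rows inserted: " + ps.executeUpdate());
        }
    }
}
```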

View File

@ -77,12 +77,7 @@ After modifying configuration file parameters, you need to restart the *taosd* s
|minReservedMemorySize | |Not supported |The minimum reserved system available memory size, all memory except reserved can be used for queries, unit: MB, default reserved size is 20% of system physical memory, value range 1024-1000000000|
|singleQueryMaxMemorySize| |Not supported |The memory limit that a single query can use on a single node (dnode), exceeding this limit will return an error, unit: MB, default value: 0 (no limit), value range 0-1000000000|
|filterScalarMode | |Not supported |Force scalar filter mode, 0: off; 1: on, default value 0|
|queryPlannerTrace | |Supported, effective immediately |Internal parameter, whether the query plan outputs detailed logs|
|queryNodeChunkSize | |Supported, effective immediately |Internal parameter, chunk size of the query plan|
|queryUseNodeAllocator | |Supported, effective immediately |Internal parameter, allocation method of the query plan|
|queryMaxConcurrentTables| |Not supported |Internal parameter, concurrency number of the query plan|
|queryRsmaTolerance | |Not supported |Internal parameter, tolerance time for determining which level of rsma data to query, in milliseconds|
|enableQueryHb | |Supported, effective immediately |Internal parameter, whether to send query heartbeat messages|
|pqSortMemThreshold | |Not supported |Internal parameter, memory threshold for sorting|
### Region Related

View File

@ -65,7 +65,7 @@ database_option: {
- MINROWS: The minimum number of records in a file block, default is 100.
- KEEP: Indicates the number of days data files are kept, default value is 3650, range [1, 365000], and must be greater than or equal to 3 times the DURATION parameter value. The database will automatically delete data that has been saved for longer than the KEEP value to free up storage space. KEEP can use unit-specified formats, such as KEEP 100h, KEEP 10d, etc., supports m (minutes), h (hours), and d (days) three units. It can also be written without a unit, like KEEP 50, where the default unit is days. The enterprise version supports multi-tier storage feature, thus, multiple retention times can be set (multiple separated by commas, up to 3, satisfying keep 0 \<= keep 1 \<= keep 2, such as KEEP 100h,100d,3650d); the community version does not support multi-tier storage feature (even if multiple retention times are configured, it will not take effect, KEEP will take the longest retention time).
- KEEP_TIME_OFFSET: Effective from version 3.2.0.0. The delay execution time for deleting or migrating data that has been saved for longer than the KEEP value, default value is 0 (hours). After the data file's save time exceeds KEEP, the deletion or migration operation will not be executed immediately, but will wait an additional interval specified by this parameter, to avoid peak business periods.
- STT_TRIGGER: Indicates the number of file merges triggered by disk files. The open-source version is fixed at 1, the enterprise version can be set from 1 to 16. For scenarios with few tables and high-frequency writing, this parameter is recommended to use the default configuration; for scenarios with many tables and low-frequency writing, this parameter is recommended to be set to a larger value.
- STT_TRIGGER: Indicates the number of file merges triggered by disk files. For scenarios with few tables and high-frequency writing, this parameter is recommended to use the default configuration; for scenarios with many tables and low-frequency writing, this parameter is recommended to be set to a larger value.
- SINGLE_STABLE: Indicates whether only one supertable can be created in this database, used in cases where the supertable has a very large number of columns.
- 0: Indicates that multiple supertables can be created.
- 1: Indicates that only one supertable can be created.
@ -144,10 +144,6 @@ You can view cacheload through show \<db_name>.vgroups;
If cacheload is very close to cachesize, then cachesize may be too small. If cacheload is significantly less than cachesize, then cachesize is sufficient. You can decide whether to modify cachesize based on this principle. The specific modification value can be determined based on the available system memory, whether to double it or increase it several times.
4. stt_trigger
Please stop database writing before modifying the stt_trigger parameter.
:::note
Other parameters are not supported for modification in version 3.0.0.0

View File

@ -2171,7 +2171,7 @@ ignore_negative: {
**Usage Instructions**:
- Can be used with the columns associated with the selection. For example: select _rowts, DERIVATIVE() from.
- Can be used with the columns associated with the selection. For example: select _rowts, DERIVATIVE(col1, 1s, 1) from tb1.
### DIFF

View File

@ -33,6 +33,7 @@ The JDBC driver implementation for TDengine strives to be consistent with relati
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| 3.5.2 | Fixed WebSocket result set free bug. | - |
| 3.5.1 | Fixed the getObject issue in data subscription. | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
| 3.4.0 | 1. Replaced fastjson library with jackson. <br/> 2. WebSocket uses a separate protocol identifier. <br/> 3. Optimized background thread usage to avoid user misuse leading to timeouts. | - |

View File

@ -19,7 +19,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
<dependency>
<groupId>org.locationtech.jts</groupId>

View File

@ -1,6 +1,4 @@
package com.taosdata.example;
import com.alibaba.fastjson.JSON;
import com.taosdata.jdbc.AbstractStatement;
import java.sql.*;

View File

@ -104,8 +104,9 @@ public class JdbcDemo {
private void executeQuery(String sql) {
long start = System.currentTimeMillis();
try (Statement statement = connection.createStatement()) {
ResultSet resultSet = statement.executeQuery(sql);
try (Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery(sql)) {
long end = System.currentTimeMillis();
printSql(sql, true, (end - start));
Util.printResult(resultSet);
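The change above moves the `ResultSet` into the try-with-resources header so that it is closed together with the `Statement` instead of leaking when a later call throws. A self-contained sketch of the same pattern follows; the URL and the query are placeholders, not taken from the demo.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TryWithResourcesSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata"; // assumed URL
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {   // placeholder query
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } // rs, stmt, and conn are closed here in reverse order, even if an exception is thrown
    }
}
```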

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
</dependencies>

View File

@ -18,7 +18,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
<!-- druid -->
<dependency>

View File

@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
<dependency>

View File

@ -70,7 +70,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
<dependency>

View File

@ -67,7 +67,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>

View File

@ -22,7 +22,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
<!-- ANCHOR_END: dep-->

View File

@ -89,7 +89,7 @@ TDengine provides a rich set of application development interfaces. To make it easy for users to quickly
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.1</version>
<version>3.5.2</version>
</dependency>
```

View File

@ -15,6 +15,19 @@ import TabItem from "@theme/TabItem";
**Tips: Parameter binding is recommended for data insertion**
:::note
We recommend using only the following two forms of SQL for parameter-binding data insertion:
```sql
a. The subtable already exists:
1. INSERT INTO meters (tbname, ts, current, voltage, phase) VALUES(?, ?, ?, ?, ?)
b. Automatic table creation on insert:
1. INSERT INTO meters (tbname, ts, current, voltage, phase, location, group_id) VALUES(?, ?, ?, ?, ?, ?, ?)
2. INSERT INTO ? USING meters TAGS (?, ?) VALUES (?, ?, ?, ?)
```
:::
Next, we continue with the smart meters example to demonstrate efficient writing with parameter binding in the various language connectors (a batched JDBC sketch follows these steps):
1. Prepare a parameterized SQL insert statement for inserting data into the supertable `meters`. This statement allows dynamically specifying the subtable name, tags, and column values.
2. In a loop, generate multiple subtables and their corresponding data rows. For each subtable:
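To make the loop in step 2 concrete, here is a minimal batched JDBC sketch of form b.1, binding the subtable name, tags, and columns as ordinary parameters. The WebSocket URL, credentials, database name, and the use of standard `addBatch`/`executeBatch` are assumptions for illustration, not taken from this document.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class BindAutoCreateBatch {
    public static void main(String[] args) throws Exception {
        // Assumed WebSocket URL, default credentials, and an existing `power` database.
        String url = "jdbc:TAOS-WS://localhost:6041/power?user=root&password=taosdata";
        String sql = "INSERT INTO meters (tbname, ts, current, voltage, phase, location, group_id)"
                   + " VALUES(?, ?, ?, ?, ?, ?, ?)";
        long now = System.currentTimeMillis();
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            for (int tb = 0; tb < 3; tb++) {             // a few subtables
                for (int row = 0; row < 10; row++) {     // a few rows per subtable
                    ps.setString(1, "d100" + tb);                        // subtable name
                    ps.setTimestamp(2, new Timestamp(now + row));        // ts
                    ps.setFloat(3, 10.0f + row);                         // current
                    ps.setInt(4, 220);                                   // voltage
                    ps.setFloat(5, 0.31f);                               // phase
                    ps.setString(6, "California.SanFrancisco");          // location tag
                    ps.setInt(7, tb);                                    // group_id tag
                    ps.addBatch();
                }
            }
            int[] results = ps.executeBatch();           // one round trip for the whole batch
            System.out.println("batch statements executed: " + results.length);
        }
    }
}
```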

View File

@ -73,12 +73,7 @@ The taosd command-line parameters are as follows
|minReservedMemorySize | |Not supported |The minimum reserved system available memory; all memory except the reserved part can be used for queries; unit: MB; default reserved size is 20% of system physical memory; value range 1024-1000000000|
|singleQueryMaxMemorySize| |Not supported |The memory limit that a single query can use on a single node (dnode); exceeding this limit returns an error; unit: MB; default value: 0 (no limit); value range 0-1000000000|
|filterScalarMode | |Not supported |Force scalar filter mode, 0: off; 1: on, default value 0|
|queryPlannerTrace | |Supported, effective immediately |Internal parameter, whether the query plan outputs detailed logs|
|queryNodeChunkSize | |Supported, effective immediately |Internal parameter, chunk size of the query plan|
|queryUseNodeAllocator | |Supported, effective immediately |Internal parameter, allocation method of the query plan|
|queryMaxConcurrentTables| |Not supported |Internal parameter, concurrency of the query plan|
|queryRsmaTolerance | |Not supported |Internal parameter, tolerance time for determining which level of rsma data to query, in milliseconds|
|enableQueryHb | |Supported, effective immediately |Internal parameter, whether to send query heartbeat messages|
|pqSortMemThreshold | |Not supported |Internal parameter, memory threshold for sorting|
### Region Related
@ -194,7 +189,7 @@ The valid value of charset is UTF-8.
|numOfQnodeQueryThreads | |Supported, effective after restart |Number of query threads for a qnode; value range 0-1024; default is twice the number of CPU cores (not exceeding 16)|
|numOfSnodeSharedThreads | |Supported, effective after restart |Number of shared threads for an snode; value range 0-1024; default is one quarter of the CPU cores (not less than 2, not more than 4)|
|numOfSnodeUniqueThreads | |Supported, effective after restart |Number of dedicated threads for an snode; value range 0-1024; default is one quarter of the CPU cores (not less than 2, not more than 4)|
|ratioOfVnodeStreamThreads | |Supported, effective after restart |Ratio of vnode threads used by stream computing; value range 0.01-4; default value 4|
|ratioOfVnodeStreamThreads | |Supported, effective after restart |Ratio of vnode threads used by stream computing; value range 0.01-4; default value 0.5|
|ttlUnit | |Not supported |Unit of the ttl parameter; value range 1-31572500; unit: seconds; default value 86400|
|ttlPushInterval | |Supported, effective immediately |Frequency of ttl timeout checks; value range 1-100000; unit: seconds; default value 10|
|ttlChangeOnWrite | |Supported, effective immediately |Whether the ttl expiration time changes along with table modification operations; 0: do not change, 1: change; default value 0|

View File

@ -67,7 +67,7 @@ database_option: {
- KEEP: Indicates the number of days data files are kept. Default value is 3650, range [1, 365000], and it must be greater than or equal to 3 times the DURATION parameter value. The database automatically deletes data that has been kept longer than the KEEP value to free up storage space. KEEP can be written with a unit, such as KEEP 100h or KEEP 10d, supporting the units m (minutes), h (hours), and d (days). It can also be written without a unit, like KEEP 50, in which case the default unit is days. The enterprise edition supports the [multi-tier storage](../../operation/planning/#%E5%A4%9A%E7%BA%A7%E5%AD%98%E5%82%A8) feature, so multiple retention times can be set (separated by commas, up to 3, satisfying keep 0 \<= keep 1 \<= keep 2, such as KEEP 100h,100d,3650d); the community edition does not support multi-tier storage (even if multiple retention times are configured, they do not take effect, and KEEP takes the longest retention time). To learn more, see [About the primary key timestamp](https://docs.taosdata.com/reference/taos-sql/insert/).
- KEEP_TIME_OFFSET: Effective from version 3.2.0.0. The delayed execution time for deleting or migrating data that has been kept longer than the KEEP value, default value 0 (hours). After a data file's retention time exceeds KEEP, the deletion or migration operation is not executed immediately, but waits the additional interval specified by this parameter, so as to avoid peak business periods.
- STT_TRIGGER: Indicates the number of on-disk files that trigger file merging. The open-source edition is fixed at 1; the enterprise edition can be set from 1 to 16. For scenarios with few tables and high-frequency writing, the default configuration is recommended; for scenarios with many tables and low-frequency writing, a larger value is recommended.
- STT_TRIGGER: Indicates the number of on-disk files that trigger file merging. For scenarios with few tables and high-frequency writing, the default configuration is recommended; for scenarios with many tables and low-frequency writing, a larger value is recommended.
- SINGLE_STABLE: Indicates whether only one supertable can be created in this database, used when the supertable has a very large number of columns.
- 0: Multiple supertables can be created.
- 1: Only one supertable can be created.
@ -146,10 +146,6 @@ alter_database_option: {
If cacheload is very close to cachesize, then cachesize may be too small. If cacheload is significantly less than cachesize, then cachesize is sufficient. You can use this principle to decide whether cachesize needs to be modified; the specific new value can be decided based on the available system memory, for example by doubling it or increasing it several times.
4. stt_trigger
Please stop database writes before modifying the stt_trigger parameter.
:::note
Other parameters are not supported for modification in version 3.0.0.0
@ -209,7 +205,7 @@ REDISTRIBUTE VGROUP vgroup_no DNODE dnode_id1 [DNODE dnode_id2] [DNODE dnode_id3
BALANCE VGROUP LEADER
```
Triggers re-election of the leader of every vgroup in the cluster and rebalances the load across the cluster nodes.
Triggers re-election of the leader of every vgroup in the cluster and rebalances the load across the cluster nodes. (Enterprise edition feature)
## Viewing Database Working Status

View File

@ -2099,7 +2099,7 @@ ignore_negative: {
**Usage Instructions**:
- Can be used together with the associated selected columns. For example: select \_rowts, DERIVATIVE() from.
- Can be used together with the associated selected columns. For example: select \_rowts, DERIVATIVE(col1, 1s, 1) from tb1.
### DIFF

View File

@ -33,6 +33,7 @@ The TDengine JDBC driver implementation stays as consistent as possible with relational database drivers
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| 3.5.2 | Fixed a WebSocket query result set release bug | - |
| 3.5.1 | Fixed the timestamp object type issue when fetching data in data subscription | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data <br/> 2. Optimized the performance of small queries over WebSocket connections <br/> 3. Added support for setting time zone and application info on WebSocket connections | 3.3.5.0 and higher |
| 3.4.0 | 1. Replaced the fastjson library with jackson <br/> 2. WebSocket uses a separate protocol identifier <br/> 3. Optimized background fetch thread usage to avoid user misuse causing timeouts | - |

View File

@ -460,13 +460,13 @@ typedef enum ELogicConditionType {
#define TSDB_DB_SCHEMALESS_OFF 0
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF
#define TSDB_MIN_STT_TRIGGER 1
#ifdef TD_ENTERPRISE
// #ifdef TD_ENTERPRISE
#define TSDB_MAX_STT_TRIGGER 16
#define TSDB_DEFAULT_SST_TRIGGER 2
#else
#define TSDB_MAX_STT_TRIGGER 1
#define TSDB_DEFAULT_SST_TRIGGER 1
#endif
// #else
// #define TSDB_MAX_STT_TRIGGER 1
// #define TSDB_DEFAULT_SST_TRIGGER 1
// #endif
#define TSDB_STT_TRIGGER_ARRAY_SIZE 16 // maximum of TSDB_MAX_STT_TRIGGER of TD_ENTERPRISE and TD_COMMUNITY
#define TSDB_MIN_HASH_PREFIX (2 - TSDB_TABLE_NAME_LEN)
#define TSDB_MAX_HASH_PREFIX (TSDB_TABLE_NAME_LEN - 2)

View File

@ -174,6 +174,7 @@ help() {
echo " config_qemu_guest_agent - Configure QEMU guest agent"
echo " deploy_docker - Deploy Docker"
echo " deploy_docker_compose - Deploy Docker Compose"
echo " install_trivy - Install Trivy"
echo " clone_enterprise - Clone the enterprise repository"
echo " clone_community - Clone the community repository"
echo " clone_taosx - Clone TaosX repository"
@ -316,6 +317,17 @@ add_config_if_not_exist() {
grep -qF -- "$config" "$file" || echo "$config" >> "$file"
}
# Function to check if a tool is installed
check_installed() {
local command_name="$1"
if command -v "$command_name" >/dev/null 2>&1; then
echo "$command_name is already installed. Skipping installation."
return 0
else
echo "$command_name is not installed."
return 1
fi
}
# General error handling function
check_status() {
local message_on_failure="$1"
@ -584,9 +596,12 @@ centos_skip_check() {
# Deploy cmake
deploy_cmake() {
# Check if cmake is installed
if command -v cmake >/dev/null 2>&1; then
echo "Cmake is already installed. Skipping installation."
cmake --version
# if command -v cmake >/dev/null 2>&1; then
# echo "Cmake is already installed. Skipping installation."
# cmake --version
# return
# fi
if check_installed "cmake"; then
return
fi
install_package "cmake3"
@ -1058,11 +1073,13 @@ deploy_go() {
GOPATH_DIR="/root/go"
# Check if Go is installed
if command -v go >/dev/null 2>&1; then
echo "Go is already installed. Skipping installation."
# if command -v go >/dev/null 2>&1; then
# echo "Go is already installed. Skipping installation."
# return
# fi
if check_installed "go"; then
return
fi
# Fetch the latest version number of Go
GO_LATEST_DATA=$(curl --retry 10 --retry-delay 5 --retry-max-time 120 -s https://golang.google.cn/VERSION?m=text)
GO_LATEST_VERSION=$(echo "$GO_LATEST_DATA" | grep -oP 'go[0-9]+\.[0-9]+\.[0-9]+')
@ -1731,6 +1748,42 @@ deploy_docker_compose() {
fi
}
# Install trivy
install_trivy() {
echo -e "${YELLOW}Installing Trivy...${NO_COLOR}"
# Check if Trivy is already installed
# if command -v trivy >/dev/null 2>&1; then
# echo "Trivy is already installed. Skipping installation."
# trivy --version
# return
# fi
if check_installed "trivy"; then
return
fi
# Install jq
install_package jq
# Get latest version
LATEST_VERSION=$(curl -s https://api.github.com/repos/aquasecurity/trivy/releases/latest | jq -r .tag_name)
# Download
if [ -f /etc/debian_version ]; then
wget https://github.com/aquasecurity/trivy/releases/download/"${LATEST_VERSION}"/trivy_"${LATEST_VERSION#v}"_Linux-64bit.deb
# Install
dpkg -i trivy_"${LATEST_VERSION#v}"_Linux-64bit.deb
elif [ -f /etc/redhat-release ]; then
wget https://github.com/aquasecurity/trivy/releases/download/"${LATEST_VERSION}"/trivy_"${LATEST_VERSION#v}"_Linux-64bit.rpm
# Install
rpm -ivh trivy_"${LATEST_VERSION#v}"_Linux-64bit.rpm
else
echo "Unsupported Linux distribution."
exit 1
fi
# Check
trivy --version
check_status "Failed to install Trivy" "Trivy installed successfully." $?
rm -rf trivy_"${LATEST_VERSION#v}"_Linux-64bit.deb trivy_"${LATEST_VERSION#v}"_Linux-64bit.rpm
}
# Reconfigure cloud-init
reconfig_cloud_init() {
echo "Reconfiguring cloud-init..."
@ -2004,6 +2057,7 @@ deploy_dev() {
install_nginx
deploy_docker
deploy_docker_compose
install_trivy
check_status "Failed to deploy some tools" "Deploy all tools successfully" $?
}
@ -2159,6 +2213,9 @@ main() {
deploy_docker_compose)
deploy_docker_compose
;;
install_trivy)
install_trivy
;;
clone_enterprise)
clone_enterprise
;;

View File

@ -6,12 +6,6 @@ SUCCESS_FILE="success.txt"
FAILED_FILE="failed.txt"
REPORT_FILE="report.txt"
# Initialize/clear result files
> "$SUCCESS_FILE"
> "$FAILED_FILE"
> "$LOG_FILE"
> "$REPORT_FILE"
# Switch to the target directory
TARGET_DIR="../../tests/system-test/"
@ -24,6 +18,12 @@ else
exit 1
fi
# Initialize/clear result files
> "$SUCCESS_FILE"
> "$FAILED_FILE"
> "$LOG_FILE"
> "$REPORT_FILE"
# Define the Python commands to execute
commands=(
"python3 ./test.py -f 2-query/join.py"
@ -102,4 +102,4 @@ fi
echo "Detailed logs can be found in: $(realpath "$LOG_FILE")"
echo "Successful commands can be found in: $(realpath "$SUCCESS_FILE")"
echo "Failed commands can be found in: $(realpath "$FAILED_FILE")"
echo "Test report can be found in: $(realpath "$REPORT_FILE")"
echo "Test report can be found in: $(realpath "$REPORT_FILE")"

View File

@ -90,7 +90,7 @@ fi
kill_service_of() {
_service=$1
pid=$(ps -C $_service | grep -v $uninstallScript | awk '{print $2}')
pid=$(ps -C $_service | grep -w $_service | grep -v $uninstallScript | awk '{print $1}')
if [ -n "$pid" ]; then
${csudo}kill -9 $pid || :
fi
@ -140,9 +140,8 @@ clean_service_of() {
clean_service_on_systemd_of $_service
elif ((${service_mod} == 1)); then
clean_service_on_sysvinit_of $_service
else
kill_service_of $_service
fi
kill_service_of $_service
}
remove_service_of() {

View File

@ -40,7 +40,7 @@ if command -v sudo > /dev/null; then
fi
function kill_client() {
pid=$(ps -C ${clientName2} | grep -v $uninstallScript2 | awk '{print $2}')
pid=$(ps -C ${clientName2} | grep -w ${clientName2} | grep -v $uninstallScript2 | awk '{print $1}')
if [ -n "$pid" ]; then
${csudo}kill -9 $pid || :
fi

View File

@ -532,6 +532,10 @@ TEST(clientCase, create_stable_Test) {
taos_free_result(pRes);
pRes = taos_query(pConn, "use abc1");
while (taos_errno(pRes) == TSDB_CODE_MND_DB_IN_CREATING || taos_errno(pRes) == TSDB_CODE_MND_DB_IN_DROPPING) {
taosMsleep(2000);
pRes = taos_query(pConn, "use abc1");
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table if not exists abc1.st1(ts timestamp, k int) tags(a int)");
@ -664,6 +668,10 @@ TEST(clientCase, create_multiple_tables) {
taos_free_result(pRes);
pRes = taos_query(pConn, "use abc1");
while (taos_errno(pRes) == TSDB_CODE_MND_DB_IN_CREATING || taos_errno(pRes) == TSDB_CODE_MND_DB_IN_DROPPING) {
taosMsleep(2000);
pRes = taos_query(pConn, "use abc1");
}
if (taos_errno(pRes) != 0) {
(void)printf("failed to use db, reason:%s\n", taos_errstr(pRes));
taos_free_result(pRes);
@ -1524,6 +1532,10 @@ TEST(clientCase, timezone_Test) {
taos_free_result(pRes);
pRes = taos_query(pConn, "create table db1.t1 (ts timestamp, v int)");
while (taos_errno(pRes) == TSDB_CODE_MND_DB_IN_CREATING || taos_errno(pRes) == TSDB_CODE_MND_DB_IN_DROPPING) {
taosMsleep(2000);
pRes = taos_query(pConn, "create table db1.t1 (ts timestamp, v int)");
}
ASSERT_EQ(taos_errno(pRes), TSDB_CODE_SUCCESS);
taos_free_result(pRes);

View File

@ -55,7 +55,13 @@ TAOS* getConnWithOption(const char *tz){
void execQuery(TAOS* pConn, const char *sql){
TAOS_RES* pRes = taos_query(pConn, sql);
ASSERT(taos_errno(pRes) == TSDB_CODE_SUCCESS);
int code = taos_errno(pRes);
while (code == TSDB_CODE_MND_DB_IN_CREATING || code == TSDB_CODE_MND_DB_IN_DROPPING) {
taosMsleep(2000);
taos_free_result(pRes);  // release the previous result before retrying
pRes = taos_query(pConn, sql);  // reuse the outer pRes instead of shadowing it
code = taos_errno(pRes);
}
ASSERT(code == TSDB_CODE_SUCCESS);
taos_free_result(pRes);
}

View File

@ -112,6 +112,11 @@ void do_query(TAOS* taos, const char* sql) {
TAOS_RES* result = taos_query(taos, sql);
// printf("sql: %s\n", sql);
int code = taos_errno(result);
while (code == TSDB_CODE_MND_DB_IN_CREATING || code == TSDB_CODE_MND_DB_IN_DROPPING) {
taosMsleep(2000);
result = taos_query(taos, sql);
code = taos_errno(result);
}
if (code != TSDB_CODE_SUCCESS) {
printf("query failen sql : %s\n errstr : %s\n", sql, taos_errstr(result));
ASSERT_EQ(taos_errno(result), TSDB_CODE_SUCCESS);
@ -122,9 +127,9 @@ void do_query(TAOS* taos, const char* sql) {
void do_stmt(TAOS* taos, TAOS_STMT2_OPTION* option, const char* sql, int CTB_NUMS, int ROW_NUMS, int CYC_NUMS,
bool hastags, bool createTable) {
printf("test sql : %s\n", sql);
do_query(taos, "drop database if exists testdb1");
do_query(taos, "create database IF NOT EXISTS testdb1");
do_query(taos, "create stable testdb1.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))");
do_query(taos, "drop database if exists stmt2_testdb_1");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_1");
do_query(taos, "create stable stmt2_testdb_1.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))");
TAOS_STMT2* stmt = taos_stmt2_init(taos, option);
ASSERT_NE(stmt, nullptr);
@ -139,7 +144,7 @@ void do_stmt(TAOS* taos, TAOS_STMT2_OPTION* option, const char* sql, int CTB_NUM
sprintf(tbs[i], "ctb_%d", i);
if (createTable) {
char* tmp = (char*)taosMemoryMalloc(sizeof(char) * 100);
sprintf(tmp, "create table testdb1.%s using testdb1.stb tags(0, 'after')", tbs[i]);
sprintf(tmp, "create table stmt2_testdb_1.%s using stmt2_testdb_1.stb tags(0, 'after')", tbs[i]);
do_query(taos, tmp);
}
}
@ -192,7 +197,7 @@ void do_stmt(TAOS* taos, TAOS_STMT2_OPTION* option, const char* sql, int CTB_NUM
checkError(stmt, code);
// exec
int affected;
int affected = 0;
code = taos_stmt2_exec(stmt, &affected);
total_affected += affected;
checkError(stmt, code);
@ -214,8 +219,9 @@ void do_stmt(TAOS* taos, TAOS_STMT2_OPTION* option, const char* sql, int CTB_NUM
taosMemoryFree(tags);
}
}
ASSERT_EQ(total_affected, CYC_NUMS * ROW_NUMS * CTB_NUMS);
if (option->asyncExecFn == NULL) {
ASSERT_EQ(total_affected, CYC_NUMS * ROW_NUMS * CTB_NUMS);
}
for (int i = 0; i < CTB_NUMS; i++) {
taosMemoryFree(tbs[i]);
}
@ -235,14 +241,15 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists testdb2");
do_query(taos, "create database IF NOT EXISTS testdb2 PRECISION 'ns'");
do_query(taos, "drop database if exists stmt2_testdb_2");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_2 PRECISION 'ns'");
do_query(taos,
"create stable testdb2.stb (ts timestamp, b binary(10)) tags(t1 "
"create stable stmt2_testdb_2.stb (ts timestamp, b binary(10)) tags(t1 "
"int, t2 binary(10))");
do_query(
taos,
"create stable if not exists testdb2.all_stb(ts timestamp, v1 bool, v2 tinyint, v3 smallint, v4 int, v5 bigint, "
"create stable if not exists stmt2_testdb_2.all_stb(ts timestamp, v1 bool, v2 tinyint, v3 smallint, v4 int, v5 "
"bigint, "
"v6 tinyint unsigned, v7 smallint unsigned, v8 int unsigned, v9 bigint unsigned, v10 float, v11 double, v12 "
"binary(20), v13 varbinary(20), v14 geometry(100), v15 nchar(20))tags(tts timestamp, tv1 bool, tv2 tinyint, tv3 "
"smallint, tv4 int, tv5 bigint, tv6 tinyint unsigned, tv7 smallint unsigned, tv8 int unsigned, tv9 bigint "
@ -251,7 +258,7 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
// case 1 : test super table
{
const char* sql = "insert into testdb2.stb(t1,t2,ts,b,tbname) values(?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(t1,t2,ts,b,tbname) values(?,?,?,?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
@ -263,7 +270,7 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
{
// case 2 : no tag
const char* sql = "insert into testdb2.stb(ts,b,tbname) values(?,?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(ts,b,tbname) values(?,?,?)";
TAOS_FIELD_ALL expectedFields[3] = {{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
{"b", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_COL},
{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME}};
@ -273,7 +280,7 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
// case 3 : random order
{
const char* sql = "insert into testdb2.stb(tbname,ts,t2,b,t1) values(?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(tbname,ts,t2,b,t1) values(?,?,?,?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
@ -285,7 +292,7 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
// case 4 : random order 2
{
const char* sql = "insert into testdb2.stb(ts,tbname,b,t2,t1) values(?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(ts,tbname,b,t2,t1) values(?,?,?,?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"b", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_COL},
@ -297,7 +304,7 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
// case 5 : 'db'.'stb'
{
const char* sql = "insert into 'testdb2'.'stb'(t1,t2,ts,b,tbname) values(?,?,?,?,?)";
const char* sql = "insert into 'stmt2_testdb_2'.'stb'(t1,t2,ts,b,tbname) values(?,?,?,?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
@ -309,7 +316,7 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
// case 6 : use db
{
do_query(taos, "use testdb2");
do_query(taos, "use stmt2_testdb_2");
const char* sql = "insert into stb(t1,t2,ts,b,tbname) values(?,?,?,?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
@ -322,7 +329,7 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
// case 7 : less param
{
const char* sql = "insert into testdb2.stb(ts,tbname) values(?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(ts,tbname) values(?,?)";
TAOS_FIELD_ALL expectedFields[2] = {{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME}};
printf("case 7 : %s\n", sql);
@ -378,67 +385,68 @@ TEST(stmt2Case, insert_stb_get_fields_Test) {
// case 1 : add in main TD-33353
{
const char* sql = "insert into testdb2.stb(t1,t2,ts,b,tbname) values(1,?,?,'abc',?)";
const char* sql = "insert into stmt2_testdb_2.stb(t1,t2,ts,b,tbname) values(1,?,?,'abc',?)";
printf("case 1dif : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_INVALID_COLUMNS_NUM);
}
// case 2 : no pk
{
const char* sql = "insert into testdb2.stb(b,tbname) values(?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(b,tbname) values(?,?)";
printf("case 2 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_TSC_INVALID_OPERATION);
}
// case 3 : no tbname and tag(not support bind)
{
const char* sql = "insert into testdb2.stb(ts,b) values(?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(ts,b) values(?,?)";
printf("case 3 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_TSC_INVALID_OPERATION);
}
// case 4 : no col and tag(not support bind)
{
const char* sql = "insert into testdb2.stb(tbname) values(?)";
const char* sql = "insert into stmt2_testdb_2.stb(tbname) values(?)";
printf("case 4 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_TSC_INVALID_OPERATION);
}
// case 5 : no field name
{
const char* sql = "insert into testdb2.stb(?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(?,?,?,?,?)";
printf("case 5 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_SYNTAX_ERROR);
}
// case 6 : test super table not exist
{
const char* sql = "insert into testdb2.nstb(?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_2.nstb(?,?,?,?,?)";
printf("case 6 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_SYNTAX_ERROR);
}
// case 7 : no col
{
const char* sql = "insert into testdb2.stb(t1,t2,tbname) values(?,?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(t1,t2,tbname) values(?,?,?)";
printf("case 7 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_TSC_INVALID_OPERATION);
}
// case 8 : wrong para nums
{
const char* sql = "insert into testdb2.stb(ts,b,tbname) values(?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_2.stb(ts,b,tbname) values(?,?,?,?,?)";
printf("case 8 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_INVALID_COLUMNS_NUM);
}
// case 9 : wrong simbol
{
const char* sql = "insert into testdb2.stb(t1,t2,ts,b,tbname) values(*,*,*,*,*)";
const char* sql = "insert into stmt2_testdb_2.stb(t1,t2,ts,b,tbname) values(*,*,*,*,*)";
printf("case 9 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_INVALID_COLUMNS_NUM);
}
do_query(taos, "drop database if exists stmt2_testdb_2");
taos_close(taos);
}
@ -446,24 +454,25 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists testdb3");
do_query(taos, "create database IF NOT EXISTS testdb3 PRECISION 'ns'");
do_query(taos, "drop database if exists stmt2_testdb_3");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_3 PRECISION 'ns'");
do_query(taos,
"create stable testdb3.stb (ts timestamp, b binary(10)) tags(t1 "
"create stable stmt2_testdb_3.stb (ts timestamp, b binary(10)) tags(t1 "
"int, t2 binary(10))");
do_query(
taos,
"create stable if not exists testdb3.all_stb(ts timestamp, v1 bool, v2 tinyint, v3 smallint, v4 int, v5 bigint, "
"create stable if not exists stmt2_testdb_3.all_stb(ts timestamp, v1 bool, v2 tinyint, v3 smallint, v4 int, v5 "
"bigint, "
"v6 tinyint unsigned, v7 smallint unsigned, v8 int unsigned, v9 bigint unsigned, v10 float, v11 double, v12 "
"binary(20), v13 varbinary(20), v14 geometry(100), v15 nchar(20))tags(tts timestamp, tv1 bool, tv2 tinyint, tv3 "
"smallint, tv4 int, tv5 bigint, tv6 tinyint unsigned, tv7 smallint unsigned, tv8 int unsigned, tv9 bigint "
"unsigned, tv10 float, tv11 double, tv12 binary(20), tv13 varbinary(20), tv14 geometry(100), tv15 nchar(20));");
do_query(taos, "CREATE TABLE testdb3.t0 USING testdb3.stb (t1,t2) TAGS (7,'Cali');");
do_query(taos, "CREATE TABLE stmt2_testdb_3.t0 USING stmt2_testdb_3.stb (t1,t2) TAGS (7,'Cali');");
printf("support case \n");
// case 1 : test child table already exist
{
const char* sql = "INSERT INTO testdb3.t0(ts,b)using testdb3.stb (t1,t2) TAGS(?,?) VALUES (?,?)";
const char* sql = "INSERT INTO stmt2_testdb_3.t0(ts,b)using stmt2_testdb_3.stb (t1,t2) TAGS(?,?) VALUES (?,?)";
TAOS_FIELD_ALL expectedFields[4] = {{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
@ -474,7 +483,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 2 : insert clause
{
const char* sql = "INSERT INTO testdb3.? using testdb3.stb (t1,t2) TAGS(?,?) (ts,b)VALUES(?,?)";
const char* sql = "INSERT INTO stmt2_testdb_3.? using stmt2_testdb_3.stb (t1,t2) TAGS(?,?) (ts,b)VALUES(?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
@ -486,7 +495,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 3 : insert child table not exist
{
const char* sql = "INSERT INTO testdb3.d1 using testdb3.stb (t1,t2)TAGS(?,?) (ts,b)VALUES(?,?)";
const char* sql = "INSERT INTO stmt2_testdb_3.d1 using stmt2_testdb_3.stb (t1,t2)TAGS(?,?) (ts,b)VALUES(?,?)";
TAOS_FIELD_ALL expectedFields[4] = {{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL},
@ -497,7 +506,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 4 : random order
{
const char* sql = "INSERT INTO testdb3.? using testdb3.stb (t2,t1)TAGS(?,?) (b,ts)VALUES(?,?)";
const char* sql = "INSERT INTO stmt2_testdb_3.? using stmt2_testdb_3.stb (t2,t1)TAGS(?,?) (b,ts)VALUES(?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
@ -509,7 +518,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 5 : less para
{
const char* sql = "insert into testdb3.? using testdb3.stb (t2)tags(?) (ts)values(?)";
const char* sql = "insert into stmt2_testdb_3.? using stmt2_testdb_3.stb (t2)tags(?) (ts)values(?)";
TAOS_FIELD_ALL expectedFields[3] = {{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL}};
@ -520,7 +529,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 6 : insert into db.? using db.stb tags(?, ?) values(?,?)
// no field name
{
const char* sql = "insert into testdb3.? using testdb3.stb tags(?, ?) values(?,?)";
const char* sql = "insert into stmt2_testdb_3.? using stmt2_testdb_3.stb tags(?, ?) values(?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
@ -533,7 +542,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 7 : insert into db.d0 (ts)values(?)
// less para
{
const char* sql = "insert into testdb3.t0 (ts)values(?)";
const char* sql = "insert into stmt2_testdb_3.t0 (ts)values(?)";
TAOS_FIELD_ALL expectedFields[1] = {{"ts", TSDB_DATA_TYPE_TIMESTAMP, 2, 0, 8, TAOS_FIELD_COL}};
printf("case 7 : %s\n", sql);
getFieldsSuccess(taos, sql, expectedFields, 1);
@ -541,7 +550,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 8 : 'db' 'stb'
{
const char* sql = "INSERT INTO 'testdb3'.? using 'testdb3'.'stb' (t1,t2) TAGS(?,?) (ts,b)VALUES(?,?)";
const char* sql = "INSERT INTO 'stmt2_testdb_3'.? using 'stmt2_testdb_3'.'stb' (t1,t2) TAGS(?,?) (ts,b)VALUES(?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
{"t2", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_TAG},
@ -553,7 +562,7 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 9 : use db
{
do_query(taos, "use testdb3");
do_query(taos, "use stmt2_testdb_3");
const char* sql = "INSERT INTO ? using stb (t1,t2) TAGS(?,?) (ts,b)VALUES(?,?)";
TAOS_FIELD_ALL expectedFields[5] = {{"tbname", TSDB_DATA_TYPE_BINARY, 0, 0, 271, TAOS_FIELD_TBNAME},
{"t1", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_TAG},
@ -608,38 +617,40 @@ TEST(stmt2Case, insert_ctb_using_get_fields_Test) {
// case 1 : test super table not exist
{
const char* sql = "INSERT INTO testdb3.?(ts,b)using testdb3.nstb (t1,t2) TAGS(?,?) VALUES (?,?)";
const char* sql = "INSERT INTO stmt2_testdb_3.?(ts,b)using stmt2_testdb_3.nstb (t1,t2) TAGS(?,?) VALUES (?,?)";
printf("case 1 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_SYNTAX_ERROR);
}
// case 2 : no pk
{
const char* sql = "INSERT INTO testdb3.?(ts,b)using testdb3.nstb (t1,t2) TAGS(?,?) (n)VALUES (?)";
const char* sql = "INSERT INTO stmt2_testdb_3.?(ts,b)using stmt2_testdb_3.nstb (t1,t2) TAGS(?,?) (n)VALUES (?)";
printf("case 2 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_SYNTAX_ERROR);
}
// case 3 : less param and no field name
{
const char* sql = "INSERT INTO testdb3.?(ts,b)using testdb3.stb TAGS(?)VALUES (?,?)";
const char* sql = "INSERT INTO stmt2_testdb_3.?(ts,b)using stmt2_testdb_3.stb TAGS(?)VALUES (?,?)";
printf("case 3 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_SYNTAX_ERROR);
}
// case 4 : none para for ctbname
{
const char* sql = "INSERT INTO testdb3.d0 using testdb3.stb values(?,?)";
const char* sql = "INSERT INTO stmt2_testdb_3.d0 using stmt2_testdb_3.stb values(?,?)";
printf("case 4 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_TSC_SQL_SYNTAX_ERROR);
}
// case 5 : invalid ctbname
{
const char* sql = "insert into ! using testdb3.stb tags(?, ?) values(?,?)";
const char* sql = "insert into ! using stmt2_testdb_3.stb tags(?, ?) values(?,?)";
printf("case 5 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_TSC_SQL_SYNTAX_ERROR);
}
do_query(taos, "drop database if exists stmt2_testdb_3");
taos_close(taos);
}
@ -647,19 +658,20 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists testdb4");
do_query(taos, "create database IF NOT EXISTS testdb4 PRECISION 'ms'");
do_query(taos, "CREATE TABLE testdb4.ntb(nts timestamp, nb binary(10),nvc varchar(16),ni int);");
do_query(taos,
"create table if not exists testdb4.all_ntb(ts timestamp, v1 bool, v2 tinyint, v3 smallint, v4 int, v5 "
"bigint, v6 tinyint unsigned, v7 smallint unsigned, v8 int unsigned, v9 bigint unsigned, v10 float, v11 "
"double, v12 binary(20), v13 varbinary(20), v14 geometry(100), v15 nchar(20));");
do_query(taos, "drop database if exists stmt2_testdb_4");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_4 PRECISION 'ms'");
do_query(taos, "CREATE TABLE stmt2_testdb_4.ntb(nts timestamp, nb binary(10),nvc varchar(16),ni int);");
do_query(
taos,
"create table if not exists stmt2_testdb_4.all_ntb(ts timestamp, v1 bool, v2 tinyint, v3 smallint, v4 int, v5 "
"bigint, v6 tinyint unsigned, v7 smallint unsigned, v8 int unsigned, v9 bigint unsigned, v10 float, v11 "
"double, v12 binary(20), v13 varbinary(20), v14 geometry(100), v15 nchar(20));");
printf("support case \n");
// case 1 : test normal table no field name
{
const char* sql = "INSERT INTO testdb4.ntb VALUES(?,?,?,?)";
const char* sql = "INSERT INTO stmt2_testdb_4.ntb VALUES(?,?,?,?)";
TAOS_FIELD_ALL expectedFields[4] = {{"nts", TSDB_DATA_TYPE_TIMESTAMP, 0, 0, 8, TAOS_FIELD_COL},
{"nb", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_COL},
{"nvc", TSDB_DATA_TYPE_BINARY, 0, 0, 18, TAOS_FIELD_COL},
@ -670,7 +682,7 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) {
// case 2 : test random order
{
const char* sql = "INSERT INTO testdb4.ntb (ni,nb,nvc,nts)VALUES(?,?,?,?)";
const char* sql = "INSERT INTO stmt2_testdb_4.ntb (ni,nb,nvc,nts)VALUES(?,?,?,?)";
TAOS_FIELD_ALL expectedFields[4] = {{"ni", TSDB_DATA_TYPE_INT, 0, 0, 4, TAOS_FIELD_COL},
{"nb", TSDB_DATA_TYPE_BINARY, 0, 0, 12, TAOS_FIELD_COL},
{"nvc", TSDB_DATA_TYPE_BINARY, 0, 0, 18, TAOS_FIELD_COL},
@ -681,7 +693,7 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) {
// case 3 : less param
{
const char* sql = "INSERT INTO testdb4.ntb (nts)VALUES(?)";
const char* sql = "INSERT INTO stmt2_testdb_4.ntb (nts)VALUES(?)";
TAOS_FIELD_ALL expectedFields[1] = {{"nts", TSDB_DATA_TYPE_TIMESTAMP, 0, 0, 8, TAOS_FIELD_COL}};
printf("case 3 : %s\n", sql);
getFieldsSuccess(taos, sql, expectedFields, 1);
@ -689,7 +701,7 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) {
// case 4 : test all types
{
const char* sql = "insert into testdb4.all_ntb values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_4.all_ntb values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
TAOS_FIELD_ALL expectedFields[16] = {{"ts", TSDB_DATA_TYPE_TIMESTAMP, 0, 0, 8, TAOS_FIELD_COL},
{"v1", TSDB_DATA_TYPE_BOOL, 0, 0, 1, TAOS_FIELD_COL},
{"v2", TSDB_DATA_TYPE_TINYINT, 0, 0, 1, TAOS_FIELD_COL},
@ -721,26 +733,29 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) {
// case 2 : normal table must have tbname
{
const char* sql = "insert into testdb4.? values(?,?)";
const char* sql = "insert into stmt2_testdb_4.? values(?,?)";
printf("case 2 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_TABLE_NOT_EXIST);
}
// case 3 : wrong para nums
{
const char* sql = "insert into testdb4.ntb(nts,ni) values(?,?,?,?,?)";
const char* sql = "insert into stmt2_testdb_4.ntb(nts,ni) values(?,?,?,?,?)";
printf("case 3 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_INVALID_COLUMNS_NUM);
}
do_query(taos, "drop database if exists stmt2_testdb_4");
taos_close(taos);
}
TEST(stmt2Case, select_get_fields_Test) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists testdb5");
do_query(taos, "create database IF NOT EXISTS testdb5 PRECISION 'ns'");
do_query(taos, "use testdb5");
do_query(taos, "CREATE TABLE testdb5.ntb(nts timestamp, nb binary(10),nvc varchar(16),ni int);");
do_query(taos, "drop database if exists stmt2_testdb_5");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_5 PRECISION 'ns'");
do_query(taos, "use stmt2_testdb_5");
do_query(taos, "CREATE TABLE stmt2_testdb_5.ntb(nts timestamp, nb binary(10),nvc varchar(16),ni int);");
{
// case 1 :
const char* sql = "select * from ntb where ts = ?";
@ -761,6 +776,8 @@ TEST(stmt2Case, select_get_fields_Test) {
printf("case 3 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_SYNTAX_ERROR);
}
do_query(taos, "drop database if exists stmt2_testdb_5");
taos_close(taos);
}
@ -797,9 +814,9 @@ TEST(stmt2Case, stmt2_init_prepare_Test) {
ASSERT_NE(stmt, nullptr);
ASSERT_EQ(((STscStmt2*)stmt)->db, nullptr);
code = taos_stmt2_prepare(stmt, "insert into 'testdb5'.stb(t1,t2,ts,b,tbname) values(?,?,?,?,?)", 0);
code = taos_stmt2_prepare(stmt, "insert into 'stmt2_testdb_5'.stb(t1,t2,ts,b,tbname) values(?,?,?,?,?)", 0);
ASSERT_NE(stmt, nullptr);
ASSERT_STREQ(((STscStmt2*)stmt)->db, "testdb5"); // add in main TD-33332
ASSERT_STREQ(((STscStmt2*)stmt)->db, "stmt2_testdb_5"); // add in main TD-33332
taos_stmt2_close(stmt);
}
@ -824,22 +841,28 @@ TEST(stmt2Case, stmt2_stb_insert) {
ASSERT_NE(taos, nullptr);
// normal
TAOS_STMT2_OPTION option = {0, true, true, NULL, NULL};
{ do_stmt(taos, &option, "insert into `testdb1`.`stb` (tbname,ts,b,t1,t2) values(?,?,?,?,?)", 3, 3, 3, true, true); }
{
do_stmt(taos, &option, "insert into `testdb1`.? using `testdb1`.`stb` tags(?,?) values(?,?)", 3, 3, 3, true, true);
do_stmt(taos, &option, "insert into `stmt2_testdb_1`.`stb` (tbname,ts,b,t1,t2) values(?,?,?,?,?)", 3, 3, 3, true,
true);
}
{
do_stmt(taos, &option, "insert into `stmt2_testdb_1`.? using `stmt2_testdb_1`.`stb` tags(?,?) values(?,?)", 3, 3, 3,
true, true);
}
// async
option = {0, true, true, stmtAsyncQueryCb, NULL};
{ do_stmt(taos, &option, "insert into testdb1.stb (ts,b,tbname,t1,t2) values(?,?,?,?,?)", 3, 3, 3, true, true); }
{
do_stmt(taos, &option, "insert into testdb1.? using testdb1.stb (t1,t2)tags(?,?) (ts,b)values(?,?)", 3, 3, 3, true,
true);
do_stmt(taos, &option, "insert into stmt2_testdb_1.stb (ts,b,tbname,t1,t2) values(?,?,?,?,?)", 3, 3, 3, true, true);
}
{
do_stmt(taos, &option, "insert into stmt2_testdb_1.? using stmt2_testdb_1.stb (t1,t2)tags(?,?) (ts,b)values(?,?)",
3, 3, 3, true, true);
}
// { do_stmt(taos, &option, "insert into db.? values(?,?)", 3, 3, 3, false, true); }
// interlace = 0 & use db
do_query(taos, "use testdb1");
do_query(taos, "use stmt2_testdb_1");
option = {0, false, false, NULL, NULL};
{ do_stmt(taos, &option, "insert into stb (tbname,ts,b) values(?,?,?)", 3, 3, 3, false, true); }
{ do_stmt(taos, &option, "insert into ? using stb (t1,t2)tags(?,?) (ts,b)values(?,?)", 3, 3, 3, true, true); }
@ -851,6 +874,7 @@ TEST(stmt2Case, stmt2_stb_insert) {
option = {0, true, true, NULL, NULL};
{ do_stmt(taos, &option, "insert into ? values(?,?)", 3, 3, 3, false, true); }
do_query(taos, "drop database if exists stmt2_testdb_1");
taos_close(taos);
}
@ -858,10 +882,10 @@ TEST(stmt2Case, stmt2_stb_insert) {
TEST(stmt2Case, stmt2_insert_non_statndard) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists example_all_type_stmt1");
do_query(taos, "create database IF NOT EXISTS example_all_type_stmt1");
do_query(taos, "drop database if exists stmt2_testdb_6");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_6");
do_query(taos,
"create stable example_all_type_stmt1.stb1 (ts timestamp, int_col int,long_col bigint,double_col "
"create stable stmt2_testdb_6.stb1 (ts timestamp, int_col int,long_col bigint,double_col "
"double,bool_col bool,binary_col binary(20),nchar_col nchar(20),varbinary_col varbinary(20),geometry_col "
"geometry(200)) tags(int_tag int,long_tag bigint,double_tag double,bool_tag bool,binary_tag "
"binary(20),nchar_tag nchar(20),varbinary_tag varbinary(20),geometry_tag geometry(200));");
@ -872,7 +896,7 @@ TEST(stmt2Case, stmt2_insert_non_statndard) {
{
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
const char* sql = "INSERT INTO example_all_type_stmt1.stb1 (ts,int_tag,tbname) VALUES (?,?,?)";
const char* sql = "INSERT INTO stmt2_testdb_6.stb1 (ts,int_tag,tbname) VALUES (?,?,?)";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int total_affect_rows = 0;
@ -912,9 +936,8 @@ TEST(stmt2Case, stmt2_insert_non_statndard) {
{
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
const char* sql =
"INSERT INTO example_all_type_stmt1.stb1 (binary_tag,int_col,tbname,ts,int_tag) VALUES (?,?,?,?,?)";
int code = taos_stmt2_prepare(stmt, sql, 0);
const char* sql = "INSERT INTO stmt2_testdb_6.stb1 (binary_tag,int_col,tbname,ts,int_tag) VALUES (?,?,?,?,?)";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int tag_i = 0;
@ -954,6 +977,7 @@ TEST(stmt2Case, stmt2_insert_non_statndard) {
taos_stmt2_close(stmt);
}
do_query(taos, "drop database if exists stmt2_testdb_6");
taos_close(taos);
}
@ -961,10 +985,10 @@ TEST(stmt2Case, stmt2_insert_non_statndard) {
TEST(stmt2Case, stmt2_insert_db) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists example_all_type_stmt1");
do_query(taos, "create database IF NOT EXISTS example_all_type_stmt1");
do_query(taos, "drop database if exists stmt2_testdb_12");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_12");
do_query(taos,
"create stable `example_all_type_stmt1`.`stb1` (ts timestamp, int_col int,long_col bigint,double_col "
"create stable `stmt2_testdb_12`.`stb1` (ts timestamp, int_col int,long_col bigint,double_col "
"double,bool_col bool,binary_col binary(20),nchar_col nchar(20),varbinary_col varbinary(20),geometry_col "
"geometry(200)) tags(int_tag int,long_tag bigint,double_tag double,bool_tag bool,binary_tag "
"binary(20),nchar_tag nchar(20),varbinary_tag varbinary(20),geometry_tag geometry(200));");
@ -973,7 +997,7 @@ TEST(stmt2Case, stmt2_insert_db) {
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
const char* sql = "INSERT INTO `example_all_type_stmt1`.`stb1` (ts,int_tag,tbname) VALUES (?,?,?)";
const char* sql = "INSERT INTO `stmt2_testdb_12`.`stb1` (ts,int_tag,tbname) VALUES (?,?,?)";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
@ -1006,38 +1030,38 @@ TEST(stmt2Case, stmt2_insert_db) {
ASSERT_EQ(total_affect_rows, 12);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_12");
taos_close(taos);
}
TEST(stmt2Case, stmt2_query) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists testdb7");
do_query(taos, "create database IF NOT EXISTS testdb7");
do_query(taos, "create stable testdb7.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))");
do_query(taos, "drop database if exists stmt2_testdb_7");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_7");
do_query(taos, "create stable stmt2_testdb_7.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))");
do_query(taos,
"insert into testdb7.tb1 using testdb7.stb tags(1,'abc') values(1591060628000, "
"insert into stmt2_testdb_7.tb1 using stmt2_testdb_7.stb tags(1,'abc') values(1591060628000, "
"'abc'),(1591060628001,'def'),(1591060628002, 'hij')");
do_query(taos,
"insert into testdb7.tb2 using testdb7.stb tags(2,'xyz') values(1591060628000, "
"'abc'),(1591060628001,'def'),(1591060628002, 'hij')");
do_query(taos, "use testdb7");
"insert into stmt2_testdb_7.tb2 using stmt2_testdb_7.stb tags(2,'xyz') values(1591060628000, "
"'abc'),(1591060628001,'def'),(1591060628004, 'hij')");
do_query(taos, "use stmt2_testdb_7");
TAOS_STMT2_OPTION option = {0, true, true, NULL, NULL};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
const char* sql = "select * from testdb7.stb where ts = ? and tbname = ?";
const char* sql = "select * from stmt2_testdb_7.stb where ts = ?";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int t64_len[1] = {sizeof(int64_t)};
int b_len[1] = {3};
int64_t ts = 1591060628000;
TAOS_STMT2_BIND params[2] = {{TSDB_DATA_TYPE_TIMESTAMP, &ts, t64_len, NULL, 1},
{TSDB_DATA_TYPE_BINARY, (void*)"tb1", b_len, NULL, 1}};
TAOS_STMT2_BIND* paramv = &params[0];
TAOS_STMT2_BIND params = {TSDB_DATA_TYPE_TIMESTAMP, &ts, t64_len, NULL, 1};
TAOS_STMT2_BIND* paramv = &params;
TAOS_STMT2_BINDV bindv = {1, NULL, NULL, &paramv};
code = taos_stmt2_bind_param(stmt, &bindv, -1);
checkError(stmt, code);
@ -1048,15 +1072,31 @@ TEST(stmt2Case, stmt2_query) {
TAOS_RES* pRes = taos_stmt2_result(stmt);
ASSERT_NE(pRes, nullptr);
int getRecordCounts = 0;
TAOS_ROW row;
while ((row = taos_fetch_row(pRes))) {
int getRecordCounts = 0;
while ((taos_fetch_row(pRes))) {
getRecordCounts++;
}
ASSERT_EQ(getRecordCounts, 2);
// test 1 result
ts = 1591060628004;
params = {TSDB_DATA_TYPE_TIMESTAMP, &ts, t64_len, NULL, 1};
code = taos_stmt2_bind_param(stmt, &bindv, -1);
checkError(stmt, code);
taos_stmt2_exec(stmt, NULL);
checkError(stmt, code);
pRes = taos_stmt2_result(stmt);
ASSERT_NE(pRes, nullptr);
getRecordCounts = 0;
while ((taos_fetch_row(pRes))) {
getRecordCounts++;
}
ASSERT_EQ(getRecordCounts, 1);
// taos_free_result(pRes);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_7");
taos_close(taos);
}
@ -1064,16 +1104,16 @@ TEST(stmt2Case, stmt2_ntb_insert) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
TAOS_STMT2_OPTION option = {0, true, true, NULL, NULL};
do_query(taos, "drop database if exists testdb8");
do_query(taos, "create database IF NOT EXISTS testdb8");
do_query(taos, "create table testdb8.ntb(ts timestamp, b binary(10))");
do_query(taos, "use testdb8");
do_query(taos, "drop database if exists stmt2_testdb_8");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_8");
do_query(taos, "create table stmt2_testdb_8.ntb(ts timestamp, b binary(10))");
do_query(taos, "use stmt2_testdb_8");
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
int total_affected_rows = 0;
const char* sql = "insert into testdb8.ntb values(?,?)";
const char* sql = "insert into stmt2_testdb_8.ntb values(?,?)";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
for (int i = 0; i < 3; i++) {
@ -1101,6 +1141,7 @@ TEST(stmt2Case, stmt2_ntb_insert) {
ASSERT_EQ(total_affected_rows, 9);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_8");
taos_close(taos);
}
@ -1125,7 +1166,7 @@ TEST(stmt2Case, stmt2_status_Test) {
ASSERT_EQ(code, TSDB_CODE_TSC_STMT_API_ERROR);
ASSERT_STREQ(taos_stmt2_error(stmt), "Stmt API usage error");
const char* sql = "insert into testdb9.ntb values(?,?)";
const char* sql = "insert into stmt2_testdb_9.ntb values(?,?)";
code = taos_stmt2_prepare(stmt, sql, 0);
ASSERT_EQ(code, TSDB_CODE_TSC_STMT_API_ERROR);
ASSERT_STREQ(taos_stmt2_error(stmt), "Stmt API usage error");
@ -1136,9 +1177,9 @@ TEST(stmt2Case, stmt2_status_Test) {
TEST(stmt2Case, stmt2_nchar) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
do_query(taos, "drop database if exists testdb10;");
do_query(taos, "create database IF NOT EXISTS testdb10;");
do_query(taos, "use testdb10;");
do_query(taos, "drop database if exists stmt2_testdb_10;");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_10;");
do_query(taos, "use stmt2_testdb_10;");
do_query(taos,
"create table m1 (ts timestamp, blob2 nchar(10), blob nchar(10),blob3 nchar(10),blob4 nchar(10),blob5 "
"nchar(10))");
@ -1244,6 +1285,7 @@ TEST(stmt2Case, stmt2_nchar) {
ASSERT_EQ(affected_rows, 10);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_10;");
taos_close(taos);
taosMemoryFree(blob_len);
taosMemoryFree(blob_len2);
@ -1256,11 +1298,12 @@ TEST(stmt2Case, all_type) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists testdb11");
do_query(taos, "create database IF NOT EXISTS testdb11");
do_query(taos, "drop database if exists stmt2_testdb_11");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_11");
do_query(
taos,
"create stable testdb11.stb(ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(8), c6 smallint, c7 "
"create stable stmt2_testdb_11.stb(ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(8), c6 "
"smallint, c7 "
"tinyint, c8 bool, c9 nchar(8), c10 geometry(256))TAGS(tts timestamp, t1 int, t2 bigint, t3 float, t4 double, t5 "
"binary(8), t6 smallint, t7 tinyint, t8 bool, t9 nchar(8), t10 geometry(256))");
@ -1370,7 +1413,7 @@ TEST(stmt2Case, all_type) {
params[10].is_null = NULL;
params[10].num = 1;
char* stmt_sql = "insert into testdb11.? using stb tags(?,?,?,?,?,?,?,?,?,?,?)values (?,?,?,?,?,?,?,?,?,?,?)";
char* stmt_sql = "insert into stmt2_testdb_11.? using stb tags(?,?,?,?,?,?,?,?,?,?,?)values (?,?,?,?,?,?,?,?,?,?,?)";
code = taos_stmt2_prepare(stmt, stmt_sql, 0);
checkError(stmt, code);
@ -1388,6 +1431,7 @@ TEST(stmt2Case, all_type) {
geosFreeBuffer(outputGeom1);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_11");
taos_close(taos);
}
@ -1395,31 +1439,29 @@ TEST(stmt2Case, geometry) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "DROP DATABASE IF EXISTS testdb15");
do_query(taos, "CREATE DATABASE IF NOT EXISTS testdb15");
do_query(taos, "CREATE TABLE testdb15.tb1(ts timestamp,c1 geometry(256))");
do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_13");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt2_testdb_13");
do_query(taos, "CREATE TABLE stmt2_testdb_13.tb1(ts timestamp,c1 geometry(256))");
TAOS_STMT2_OPTION option = {0};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
unsigned char wkb1[] = { // 1
0x01,  // byte order: little-endian
0x01, 0x00, 0x00, 0x00,  // geometry type: Point (1)
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xF0, 0x3F, // p1
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, // p2
// 2
0x01, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0,
0x3f,
// 3
0x01,
0x02, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0,
0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x40};
unsigned char wkb1[] = {
// 1
0x01,                    // byte order: little-endian
0x01, 0x00, 0x00, 0x00,  // geometry type: Point (1)
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xF0, 0x3F, // p1
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, // p2
// 2
0x01, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xf0, 0x3f,
// 3
0x01, 0x02, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x40};
// unsigned char* wkb_all[3]{&wkb1[0], &wkb2[0], &wkb3[0]};
int32_t wkb_len[3] = {21, 61, 41};
@ -1440,7 +1482,7 @@ TEST(stmt2Case, geometry) {
params[1].is_null = NULL;
params[1].num = 3;
char* stmt_sql = "insert into testdb15.tb1 (ts,c1)values(?,?)";
char* stmt_sql = "insert into stmt2_testdb_13.tb1 (ts,c1)values(?,?)";
int code = taos_stmt2_prepare(stmt, stmt_sql, 0);
checkError(stmt, code);
@ -1455,6 +1497,7 @@ TEST(stmt2Case, geometry) {
ASSERT_EQ(affected_rows, 3);
taos_stmt2_close(stmt);
do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_13");
taos_close(taos);
}
#pragma GCC diagnostic pop

View File

@ -52,6 +52,11 @@ void do_query(TAOS *taos, const char *sql) {
TAOS_RES *result = taos_query(taos, sql);
// printf("sql: %s\n", sql);
int code = taos_errno(result);
while (code == TSDB_CODE_MND_DB_IN_CREATING || code == TSDB_CODE_MND_DB_IN_DROPPING) {
taosMsleep(2000);
result = taos_query(taos, sql);
code = taos_errno(result);
}
if (code != TSDB_CODE_SUCCESS) {
printf("query failen sql : %s\n errstr : %s\n", sql, taos_errstr(result));
ASSERT_EQ(taos_errno(result), TSDB_CODE_SUCCESS);
@ -69,12 +74,13 @@ typedef struct {
void insertData(TAOS *taos, TAOS_STMT_OPTIONS *option, const char *sql, int CTB_NUMS, int ROW_NUMS, int CYC_NUMS,
bool isCreateTable) {
// create database and table
do_query(taos, "DROP DATABASE IF EXISTS testdb2");
do_query(taos, "CREATE DATABASE IF NOT EXISTS testdb2");
do_query(taos,
"CREATE STABLE IF NOT EXISTS testdb2.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
do_query(taos, "USE testdb2");
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_2");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_2");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_2.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
do_query(taos, "USE stmt_testdb_2");
// init
TAOS_STMT *stmt;
@ -173,7 +179,7 @@ void insertData(TAOS *taos, TAOS_STMT_OPTIONS *option, const char *sql, int CTB_
for (int j = 0; j < ROW_NUMS; j++) {
struct timeval tv;
(&tv, NULL);
int64_t ts = 1591060628000 + j + k * 100;
int64_t ts = 1591060628000 + j + k * 100000;
float current = (float)0.0001f * j;
int voltage = j;
float phase = (float)0.0001f * j;
@ -207,12 +213,13 @@ void insertData(TAOS *taos, TAOS_STMT_OPTIONS *option, const char *sql, int CTB_
void getFields(TAOS *taos, const char *sql, int expectedALLFieldNum, TAOS_FIELD_E *expectedTagFields,
int expectedTagFieldNum, TAOS_FIELD_E *expectedColFields, int expectedColFieldNum) {
// create database and table
do_query(taos, "DROP DATABASE IF EXISTS testdb3");
do_query(taos, "CREATE DATABASE IF NOT EXISTS testdb3");
do_query(taos, "USE testdb3");
do_query(taos,
"CREATE STABLE IF NOT EXISTS testdb3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_3");
do_query(taos, "USE stmt_testdb_3");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
@ -271,7 +278,7 @@ TEST(stmtCase, stb_insert) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
// interlace = 0
{ insertData(taos, nullptr, "INSERT INTO testdb2.? USING meters TAGS(?,?) VALUES (?,?,?,?)", 1, 1, 1, false); }
{ insertData(taos, nullptr, "INSERT INTO stmt_testdb_2.? USING meters TAGS(?,?) VALUES (?,?,?,?)", 1, 1, 1, false); }
{ insertData(taos, nullptr, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", 3, 3, 3, false); }
@ -283,6 +290,7 @@ TEST(stmtCase, stb_insert) {
insertData(taos, &options, "INSERT INTO ? VALUES (?,?,?,?)", 3, 3, 3, true);
}
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_2");
taos_close(taos);
}
@ -299,18 +307,20 @@ TEST(stmtCase, get_fields) {
{"phase", TSDB_DATA_TYPE_FLOAT, 0, 0, sizeof(float)}};
getFields(taos, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", 7, &tagFields[0], 2, &colFields[0], 4);
}
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
taos_close(taos);
}
/*
TEST(stmtCase, all_type) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "DROP DATABASE IF EXISTS testdb1");
do_query(taos, "CREATE DATABASE IF NOT EXISTS testdb1");
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_1");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_1");
do_query(
taos,
"CREATE STABLE testdb1.stb(ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(8), c6 smallint, c7 "
"CREATE STABLE stmt_testdb_1.stb1(ts timestamp, c1 int, c2 bigint, c3 float, c4 double, c5 binary(8), c6 "
"smallint, c7 "
"tinyint, c8 bool, c9 nchar(8), c10 geometry(100))TAGS(tts timestamp, t1 int, t2 bigint, t3 float, t4 double, t5 "
"binary(8), t6 smallint, t7 tinyint, t8 bool, t9 nchar(8), t10 geometry(100))");
@ -418,18 +428,22 @@ TEST(stmtCase, all_type) {
params[9].is_null = NULL;
params[9].num = 1;
size_t size;
int code = initCtxGeomFromText();
checkError(stmt, code);
unsigned char *outputGeom1;
size_t size1;
initCtxMakePoint();
int code = doMakePoint(1.000, 2.000, &outputGeom1, &size1);
const char *wkt = "LINESTRING(1.0 1.0, 2.0 2.0)";
code = doGeomFromText(wkt, &outputGeom1, &size);
checkError(stmt, code);
params[10].buffer_type = TSDB_DATA_TYPE_GEOMETRY;
params[10].buffer = outputGeom1;
params[10].length = (int32_t *)&size1;
params[9].buffer_length = size;
params[10].length = (int32_t *)&size;
params[10].is_null = NULL;
params[10].num = 1;
char *stmt_sql = "insert into testdb1.? using stb tags(?,?,?,?,?,?,?,?,?,?,?)values (?,?,?,?,?,?,?,?,?,?,?)";
char *stmt_sql = "insert into stmt_testdb_1.? using stb1 tags(?,?,?,?,?,?,?,?,?,?,?)values (?,?,?,?,?,?,?,?,?,?,?)";
code = taos_stmt_prepare(stmt, stmt_sql, 0);
checkError(stmt, code);
@ -449,17 +463,17 @@ TEST(stmtCase, all_type) {
checkError(stmt, code);
taos_stmt_close(stmt);
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_1");
taos_close(taos);
}
*/
TEST(stmtCase, geometry) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "DROP DATABASE IF EXISTS testdb5");
do_query(taos, "CREATE DATABASE IF NOT EXISTS testdb5");
do_query(taos, "CREATE TABLE testdb5.tb1(ts timestamp,c1 geometry(256))");
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_5");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_5");
do_query(taos, "CREATE TABLE stmt_testdb_5.tb1(ts timestamp,c1 geometry(256))");
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
@ -468,7 +482,6 @@ TEST(stmtCase, geometry) {
0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0xF0, 0x3F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40,
},
//
{0x01, 0x03, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00,
@ -503,7 +516,7 @@ TEST(stmtCase, geometry) {
params[1].is_null = NULL;
params[1].num = 3;
char *stmt_sql = "insert into testdb5.tb1 (ts,c1)values(?,?)";
char *stmt_sql = "insert into stmt_testdb_5.tb1 (ts,c1)values(?,?)";
int code = taos_stmt_prepare(stmt, stmt_sql, 0);
checkError(stmt, code);
@ -522,6 +535,7 @@ TEST(stmtCase, geometry) {
taosMemoryFree(t64_len);
taosMemoryFree(wkb_len);
taos_stmt_close(stmt);
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_5");
taos_close(taos);
}

View File

@ -469,6 +469,13 @@ int32_t qUpdateTableListForStreamScanner(qTaskInfo_t tinfo, const SArray* tableI
}
SStreamScanInfo* pScanInfo = pInfo->info;
if (pInfo->pTaskInfo->execModel == OPTR_EXEC_MODEL_QUEUE) { // clear meta cache for subscription if tag is changed
for (int32_t i = 0; i < taosArrayGetSize(tableIdList); ++i) {
int64_t* uid = (int64_t*)taosArrayGet(tableIdList, i);
STableScanInfo* pTableScanInfo = pScanInfo->pTableScanOp->info;
taosLRUCacheErase(pTableScanInfo->base.metaCache.pTableMetaEntryCache, uid, LONG_BYTES);
}
}
if (isAdd) { // add new table id
SArray* qa = NULL;

View File

@ -771,7 +771,34 @@ bool getSumFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) {
return true;
}
static bool funcNotSupportStringSma(SFunctionNode* pFunc) {
SNode* pParam;
switch (pFunc->funcType) {
case FUNCTION_TYPE_MAX:
case FUNCTION_TYPE_MIN:
case FUNCTION_TYPE_SUM:
case FUNCTION_TYPE_AVG:
case FUNCTION_TYPE_AVG_PARTIAL:
case FUNCTION_TYPE_PERCENTILE:
case FUNCTION_TYPE_SPREAD:
case FUNCTION_TYPE_SPREAD_PARTIAL:
case FUNCTION_TYPE_SPREAD_MERGE:
case FUNCTION_TYPE_TWA:
pParam = nodesListGetNode(pFunc->pParameterList, 0);
if (pParam && nodesIsExprNode(pParam) && (IS_VAR_DATA_TYPE(((SExprNode*)pParam)->resType.type))) {
return true;
}
break;
default:
break;
}
return false;
}
EFuncDataRequired statisDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWindow) {
if (funcNotSupportStringSma(pFunc)) {
return FUNC_DATA_REQUIRED_DATA_LOAD;
}
return FUNC_DATA_REQUIRED_SMA_LOAD;
}

View File

@ -196,23 +196,16 @@ TEST_F(ParserInitialATest, alterDatabase) {
setAlterDbFsync(200);
setAlterDbWal(1);
setAlterDbCacheModel(TSDB_CACHE_MODEL_LAST_ROW);
#ifndef _STORAGE
setAlterDbSttTrigger(-1);
#else
setAlterDbSttTrigger(16);
#endif
setAlterDbBuffer(16);
setAlterDbPages(128);
setAlterDbReplica(3);
setAlterDbWalRetentionPeriod(10);
setAlterDbWalRetentionSize(20);
#ifndef _STORAGE
run("ALTER DATABASE test BUFFER 16 CACHEMODEL 'last_row' CACHESIZE 32 WAL_FSYNC_PERIOD 200 KEEP 10 PAGES 128 "
"REPLICA 3 WAL_LEVEL 1 WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20");
#else
run("ALTER DATABASE test BUFFER 16 CACHEMODEL 'last_row' CACHESIZE 32 WAL_FSYNC_PERIOD 200 KEEP 10 PAGES 128 "
"REPLICA 3 WAL_LEVEL 1 STT_TRIGGER 16 WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20");
#endif
"REPLICA 3 WAL_LEVEL 1 "
"STT_TRIGGER 16 "
"WAL_RETENTION_PERIOD 10 WAL_RETENTION_SIZE 20");
clearAlterDbReq();
initAlterDb("test");

View File

@ -292,11 +292,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
setDbWalRetentionSize(-1);
setDbWalRollPeriod(10);
setDbWalSegmentSize(20);
#ifndef _STORAGE
setDbSstTrigger(1);
#else
setDbSstTrigger(16);
#endif
setDbHashPrefix(3);
setDbHashSuffix(4);
setDbTsdbPageSize(32);
@ -354,7 +350,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
"WAL_RETENTION_SIZE -1 "
"WAL_ROLL_PERIOD 10 "
"WAL_SEGMENT_SIZE 20 "
"STT_TRIGGER 16 "
"STT_TRIGGER 1 "
"TABLE_PREFIX 3 "
"TABLE_SUFFIX 4 "
"TSDB_PAGESIZE 32");

View File

@ -3428,7 +3428,8 @@ _out:;
ths->pLogBuf->matchIndex, ths->pLogBuf->endIndex);
if (code == 0 && ths->state == TAOS_SYNC_STATE_ASSIGNED_LEADER) {
TAOS_CHECK_RETURN(syncNodeUpdateAssignedCommitIndex(ths, matchIndex));
int64_t index = syncNodeUpdateAssignedCommitIndex(ths, matchIndex);
sTrace("vgId:%d, update assigned commit index %" PRId64 "", ths->vgId, index);
if (ths->fsmState != SYNC_FSM_STATE_INCOMPLETE &&
syncLogBufferCommit(ths->pLogBuf, ths, ths->assignedCommitIndex) < 0) {

View File

@ -234,6 +234,13 @@ TEST(TcsTest, InterfaceTest) {
// TEST(TcsTest, DISABLED_InterfaceNonBlobTest) {
TEST(TcsTest, InterfaceNonBlobTest) {
#ifndef TD_ENTERPRISE
// NOTE: this test case will core dump on the community edition of taos,
// so we bypass it for the moment.
// code = tcsGetObjectBlock(object_name, 0, size, check, &pBlock);
// tcsGetObjectBlock succeeds but pBlock is nullptr,
// which results in a null-pointer crash shortly after.
#else
int code = 0;
bool check = false;
bool withcp = false;
@ -348,4 +355,5 @@ TEST(TcsTest, InterfaceNonBlobTest) {
GTEST_ASSERT_EQ(code, 0);
tcsUninit();
#endif
}

View File

@ -142,10 +142,6 @@ target_include_directories(
PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
IF(COMPILER_SUPPORT_AVX2)
MESSAGE(STATUS "AVX2 instructions is ACTIVATED")
set_source_files_properties(decompressTest.cpp PROPERTIES COMPILE_FLAGS -mavx2)
ENDIF()
add_executable(decompressTest "decompressTest.cpp")
target_link_libraries(decompressTest os util common gtest_main)
add_test(

View File

@ -524,23 +524,20 @@ static void decompressBasicTest(size_t dataSize, const CompF& compress, const De
decltype(origData) decompData(origData.size());
// test simple implementation without SIMD instructions
tsSIMDEnable = 0;
tsAVX2Supported = 0;
cnt = decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(),
ONE_STAGE_COMP, nullptr, 0);
ASSERT_EQ(cnt, compData.size() - 1);
EXPECT_EQ(origData, decompData);
#ifdef __AVX2__
if (DataTypeSupportAvx<T>::value) {
taosGetSystemInfo();
if (DataTypeSupportAvx<T>::value && tsAVX2Supported) {
// test AVX2 implementation
tsSIMDEnable = 1;
tsAVX2Supported = 1;
cnt = decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(),
ONE_STAGE_COMP, nullptr, 0);
ASSERT_EQ(cnt, compData.size() - 1);
EXPECT_EQ(origData, decompData);
}
#endif
}
template <typename T, typename CompF, typename DecompF>
@ -557,7 +554,7 @@ static void decompressPerfTest(const char* typname, const CompF& compress, const
<< "; Compression ratio: " << 1.0 * (compData.size() - 1) / cnt << "\n";
decltype(origData) decompData(origData.size());
tsSIMDEnable = 0;
tsAVX2Supported = 0;
auto ms = measureRunTime(
[&]() {
decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(),
@ -567,10 +564,8 @@ static void decompressPerfTest(const char* typname, const CompF& compress, const
std::cout << "Decompression of " << NROUND * DATA_SIZE << " " << typname << " without SIMD costs " << ms
<< " ms, avg speed: " << NROUND * DATA_SIZE * 1000 / ms << " tuples/s\n";
#ifdef __AVX2__
if (DataTypeSupportAvx<T>::value) {
tsSIMDEnable = 1;
tsAVX2Supported = 1;
taosGetSystemInfo();
if (DataTypeSupportAvx<T>::value && tsAVX2Supported) {
ms = measureRunTime(
[&]() {
decompress(compData.data(), compData.size(), decompData.size(), decompData.data(), decompData.size(),
@ -580,7 +575,6 @@ static void decompressPerfTest(const char* typname, const CompF& compress, const
std::cout << "Decompression of " << NROUND * DATA_SIZE << " " << typname << " using AVX2 costs " << ms
<< " ms, avg speed: " << NROUND * DATA_SIZE * 1000 / ms << " tuples/s\n";
}
#endif
}
#define RUN_PERF_TEST(typname, comp, decomp, min, max) \

233
tests/README.md Normal file
View File

@ -0,0 +1,233 @@
# Table of Contents
1. [Introduction](#1-introduction)
1. [Prerequisites](#2-prerequisites)
1. [Testing Guide](#3-testing-guide)
- [3.1 Unit Test](#31-unit-test)
- [3.2 System Test](#32-system-test)
- [3.3 Legacy Test](#33-legacy-test)
- [3.4 Smoke Test](#34-smoke-test)
- [3.5 Chaos Test](#35-chaos-test)
- [3.6 CI Test](#36-ci-test)
# 1. Introduction
This manual is intended to give developers comprehensive guidance on testing TDengine efficiently. It is divided into three main sections: introduction, prerequisites, and testing guide.
> [!NOTE]
> - The commands and scripts below are verified on Linux (Ubuntu 18.04/20.04/22.04).
> - The commands and steps described below are to run the tests on a single host.
# 2. Prerequisites
- Install Python3
```bash
apt install python3
apt install python3-pip
```
- Install Python dependencies
```bash
pip3 install pandas psutil fabric2 requests faker simplejson \
toml pexpect tzlocal distro decorator loguru hyperloglog
```
- Install Python connector for TDengine
```bash
pip3 install taospy taos-ws-py
```
- Building
Before testing, please make sure TDengine has been built with the options `-DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true`; otherwise, execute the commands below:
```bash
cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_TEST=true -DBUILD_CONTRIB=true
make && make install
```
# 3. Testing Guide
In the `tests` directory, there are different types of tests for TDengine. Below is a brief introduction to how to run them and how to add new cases.
## 3.1 Unit Test
Unit tests are the smallest testable units, which are used to test functions, methods or classes in TDengine code.
### 3.1.1 How to run a single test case?
```bash
cd debug/build/bin
./osTimeTests
```
### 3.1.2 How to run all unit test cases?
```bash
cd tests/unit-test/
bash test.sh -e 0
```
### 3.1.3 How to add new cases?
<details>
<summary>Detailed steps to add new unit test case</summary>
The Google Test framework is used for unit testing of specific function modules. Please refer to the steps below to add a new test case:
##### a. Create test case file and develop the test scripts
In the test directory corresponding to the target function module, create a C++ test file and write the corresponding test cases.
##### b. Update build configuration
Modify the CMakeLists.txt file in this directory to ensure that the new test files are properly included in the compilation process. See the `source/os/test/CMakeLists.txt` file for configuration examples.
##### c. Compile test code
In the project root directory, create a build directory (e.g., `debug`), switch into it, and run CMake (e.g., `cmake .. -DBUILD_TEST=1`) to generate the build files; then run the build command (e.g., `make`) to compile the test code.
##### d. Execute the test program
Find the executable in the build output directory (e.g., `TDengine/debug/build/bin/`) and run it.
##### e. Integrate into CI tests
Use the `add_test` command to add the newly compiled test cases to the CI test collection, ensuring that they are run for every build.
</details>
## 3.2 System Test
System tests are end-to-end test cases written in Python from a system point of view. Some of them are designed to test features available only in the enterprise edition, so they may fail when run against the community edition. We'll fix this issue by separating the cases into different groups in the future.
### 3.2.1 How to run a single test case?
Take the test file `system-test/2-query/avg.py` as an example:
```bash
cd tests/system-test
python3 ./test.py -f 2-query/avg.py
```
### 3.2.2 How to run all system test cases?
```bash
cd tests
./run_all_ci_cases.sh -t python # all python cases
```
### 3.2.3 How to add new case?
<details>
<summary>Detailed steps to add new system test case</summary>
The Python test framework is developed by the TDengine team, and `test.py` is the entry program that executes and monitors test cases. Use `python3 ./test.py -h` to view more options.
Please refer to the steps below to add a new test case:
##### a. Create a test case file and develop the test cases
Create a test file in the appropriate functional directory under `tests/system-test`, referring to the test case template `tests/system-test/0-others/test_case_template.py`; a trimmed-down sketch of the structure is shown below.
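The following is a minimal sketch only, modeled on the template; the database name, the SQL statements, and the expected row count are illustrative assumptions, not part of the template itself.

```python
# Minimal system test sketch (illustrative names); see test_case_template.py for the full template.
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql


class TDTestCase:
    def init(self, conn, logSql, replicaVar=1):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor())
        self.replicaVar = int(replicaVar)

    def test_example(self):  # case functions should be named starting with test_
        tdSql.execute("create database db_example")  # illustrative database name
        tdSql.query("show databases")
        tdSql.checkRows(3)                           # expected count depends on the environment

    def run(self):
        self.test_example()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
```

Save the new file under the matching functional directory (for example `0-others/`) so that `test.py` can locate it.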
##### b. Execute the test case
Ensure the test case execution is successful.
``` bash
cd tests/system-test && python3 ./test.py -f 0-others/test_case_template.py
```
##### c. Integrate into CI tests
Edit `tests/parallel_test/cases.task` and add the test case path and execution command in the specified format. The third column indicates whether to run the case in Address Sanitizer mode.
```bash
#caseID,rerunTimes,Run with Sanitizer,casePath,caseCommand
,,n,system-test, python3 ./test.py -f 0-others/test_case_template.py
```
</details>
## 3.3 Legacy Test
In the early stage of TDengine development, test cases were run by an internal test framework called TSIM, which was developed in C++.
### 3.3.1 How to run a single test case?
To run the legacy test cases, please execute the following commands:
```bash
cd tests/script
./test.sh -f tsim/db/basic1.sim
```
### 3.3.2 How to run all legacy test cases?
```bash
cd tests
./run_all_ci_cases.sh -t legacy # all legacy cases
```
### 3.3.3 How to add new cases?
> [!NOTE]
> The TSIM test framework has been deprecated in favor of the system test framework. New test cases should be added as system tests; please refer to [System Test](#32-system-test) for details.
## 3.4 Smoke Test
The smoke test is a group of test cases selected from the system tests, also known as a sanity test, used to verify the critical functionality of TDengine.
### 3.4.1 How to run test?
```bash
cd /root/TDengine/packaging/smokeTest
./test_smoking_selfhost.sh
```
### 3.4.2 How to add new cases?
New cases can be added by updating the value of the `commands` variable in `test_smoking_selfhost.sh`.
## 3.5 Chaos Test
The chaos test is a simple tool that exercises various functions of the system in a randomized way, aiming to expose potential problems without a pre-defined test scenario.
### 3.5.1 How to run test?
```bash
cd tests/pytest
python3 auto_crash_gen.py
```
### 3.5.2 How to add new cases?
1. Add a function, such as `TaskCreateNewFunction` in `pytest/crash_gen/crash_gen_main.py`.
2. Integrate `TaskCreateNewFunction` into the `balance_pickTaskType` function in `crash_gen_main.py` (a conceptual sketch of the idea follows this list).
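The snippet below is only a conceptual toy of the randomized-task idea; the function and variable names are invented for this sketch and do not reflect the actual structure of `crash_gen_main.py`, where task selection is handled by `balance_pickTaskType` and new tasks should follow the existing `Task*` pattern.

```python
import random

# Conceptual toy only: names are invented for illustration and do not match
# the real crash_gen_main.py API.

def task_create_db(run_sql):
    run_sql("CREATE DATABASE IF NOT EXISTS chaos_db")

def task_drop_db(run_sql):
    run_sql("DROP DATABASE IF EXISTS chaos_db")

TASK_POOL = [task_create_db, task_drop_db]

def pick_task():
    # a uniform choice stands in for the weighted selection done by balance_pickTaskType
    return random.choice(TASK_POOL)

if __name__ == "__main__":
    issued = []
    for _ in range(5):
        pick_task()(issued.append)  # stand-in executor that just records the SQL text
    print(issued)
```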
## 3.6 CI Test
CI testing (Continuous Integration testing) is an important practice in software development that automates the frequent integration of code into a shared codebase, building and testing it to ensure code quality and stability.
TDengine CI testing will run all the test cases from the following three types of tests: unit test, system test and legacy test.
### 3.6.1 How to run all CI test cases?
If this is the first time you are running all the CI test cases, it is recommended to specify the test branch (so it is built and installed first); run the following commands:
```bash
cd tests
./run_all_ci_cases.sh -b main # on main branch
```
### 3.6.2 How to add new cases?
Please refer to the [Unit Test](#31-unit-test), [System Test](#32-system-test) and [Legacy Test](#33-legacy-test) sections for detailed steps on adding new test cases; once new cases are added to the tests above, they will be run automatically by the CI test.

View File

@ -35,6 +35,12 @@ class TDTestCase(TBase):
time.sleep(1)
tdSql.execute("use db;")
tdSql.execute("CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);")
tdSql.execute("CREATE TABLE d0 USING meters TAGS (\"California.SanFrancisco\", 2);");
count = 0
while count < 100:
@ -72,6 +78,8 @@ class TDTestCase(TBase):
count += 1
tdSql.execute("INSERT INTO d0 VALUES (NOW, 10.3, 219, 0.31);")
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")

View File

@ -329,7 +329,9 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/dataFromTsdbNWal-multiCtb.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_taosx.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts5466.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_td33504.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts-5473.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts5906.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/td-32187.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/td-33225.py
,,y,system-test,./pytest.sh python3 ./test.py -f 7-tmq/tmq_ts4563.py

View File

@ -244,7 +244,7 @@ def start_taosd():
else:
pass
start_cmd = 'cd %s && python3 test.py >>/dev/null '%(start_path)
start_cmd = 'cd %s && python3 test.py -G >>/dev/null '%(start_path)
os.system(start_cmd)
def get_cmds(args_list):
@ -371,7 +371,7 @@ Result: {msg_dict[status]}
Details
Owner: Jayden Jia
Start time: {starttime}
End time: {endtime}
End time: {endtime}
Hostname: {hostname}
Commit: {git_commit}
Cmd: {cmd}
@ -380,14 +380,13 @@ Core dir: {core_dir}
'''
text_result=text.split("Result: ")[1].split("Details")[0].strip()
print(text_result)
if text_result == "success":
send_msg(notification_robot_url, get_msg(text))
send_msg(notification_robot_url, get_msg(text))
else:
send_msg(alert_robot_url, get_msg(text))
send_msg(notification_robot_url, get_msg(text))
#send_msg(get_msg(text))
send_msg(alert_robot_url, get_msg(text))
send_msg(notification_robot_url, get_msg(text))
#send_msg(get_msg(text))
except Exception as e:
print("exception:", e)
exit(status)

View File

@ -245,7 +245,7 @@ def start_taosd():
else:
pass
start_cmd = 'cd %s && python3 test.py '%(start_path)
start_cmd = 'cd %s && python3 test.py -G'%(start_path)
os.system(start_cmd +">>/dev/null")
def get_cmds(args_list):
@ -404,24 +404,24 @@ Result: {msg_dict[status]}
Details
Owner: Jayden Jia
Start time: {starttime}
End time: {endtime}
End time: {endtime}
Hostname: {hostname}
Commit: {git_commit}
Cmd: {cmd}
Log dir: {log_dir}
Core dir: {core_dir}
'''
text_result=text.split("Result: ")[1].split("Details")[0].strip()
print(text_result)
if text_result == "success":
send_msg(notification_robot_url, get_msg(text))
else:
send_msg(alert_robot_url, get_msg(text))
send_msg(alert_robot_url, get_msg(text))
send_msg(notification_robot_url, get_msg(text))
#send_msg(get_msg(text))
#send_msg(get_msg(text))
except Exception as e:
print("exception:", e)
exit(status)

View File

@ -236,7 +236,7 @@ def start_taosd():
else:
pass
start_cmd = 'cd %s && python3 test.py -N 4 -M 1 '%(start_path)
start_cmd = 'cd %s && python3 test.py -N 4 -M 1 -G '%(start_path)
os.system(start_cmd +">>/dev/null")
def get_cmds(args_list):
@ -388,28 +388,28 @@ def main():
text = f'''
Result: {msg_dict[status]}
Details
Owner: Jayden Jia
Start time: {starttime}
End time: {endtime}
End time: {endtime}
Hostname: {hostname}
Commit: {git_commit}
Cmd: {cmd}
Log dir: {log_dir}
Core dir: {core_dir}
'''
text_result=text.split("Result: ")[1].split("Details")[0].strip()
print(text_result)
if text_result == "success":
send_msg(notification_robot_url, get_msg(text))
else:
send_msg(alert_robot_url, get_msg(text))
send_msg(notification_robot_url, get_msg(text))
#send_msg(get_msg(text))
send_msg(alert_robot_url, get_msg(text))
send_msg(notification_robot_url, get_msg(text))
#send_msg(get_msg(text))
except Exception as e:
print("exception:", e)
exit(status)

View File

@ -23,20 +23,24 @@ function printHelp() {
echo " -b [Build test branch] Build test branch (default: null)"
echo " Options: "
echo " e.g., -b main (pull main branch, build and install)"
echo " -t [Run test cases] Run test cases type(default: all)"
echo " Options: "
echo " e.g., -t all/python/legacy"
echo " -s [Save cases log] Save cases log(default: notsave)"
echo " Options:"
echo " e.g., -c notsave : do not save the log "
echo " -c save : default save ci case log in Project dir/tests/ci_bak"
echo " e.g., -s notsave : do not save the log "
echo " -s save : default save ci case log in Project dir/tests/ci_bak"
exit 0
}
# Initialization parameter
PROJECT_DIR=""
BRANCH=""
TEST_TYPE=""
SAVE_LOG="notsave"
# Parse command line parameters
while getopts "hb:d:s:" arg; do
while getopts "hb:d:t:s:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
@ -44,6 +48,9 @@ while getopts "hb:d:s:" arg; do
b)
BRANCH=$OPTARG
;;
t)
TEST_TYPE=$OPTARG
;;
s)
SAVE_LOG=$OPTARG
;;
@ -315,9 +322,9 @@ function runTest() {
[ -d sim ] && rm -rf sim
[ -f $TDENGINE_ALLCI_REPORT ] && rm $TDENGINE_ALLCI_REPORT
runUnitTest
runSimCases
runPythonCases
runUnitTest
stopTaosd
cd $TDENGINE_DIR/tests/script
@ -361,7 +368,13 @@ print_color "$GREEN" "Run all ci test cases" | tee -a $WORK_DIR/date.log
stopTaosd
runTest
if [ -z "$TEST_TYPE" -o "$TEST_TYPE" = "all" -o "$TEST_TYPE" = "ALL" ]; then
runTest
elif [ "$TEST_TYPE" = "python" -o "$TEST_TYPE" = "PYTHON" ]; then
runPythonCases
elif [ "$TEST_TYPE" = "legacy" -o "$TEST_TYPE" = "LEGACY" ]; then
runSimCases
fi
date >> $WORK_DIR/date.log
print_color "$GREEN" "End of ci test cases" | tee -a $WORK_DIR/date.log

View File

@ -0,0 +1,55 @@
from util.log import tdLog
from util.cases import tdCases
from util.sql import tdSql
from util.dnodes import tdDnodes
from util.dnodes import *
from util.common import *
class TDTestCase:
"""
Here is the class description for the whole file cases
"""
# add the configuration of the client and server here
clientCfgDict = {'debugFlag': 131}
updatecfgDict = {
"debugFlag" : "131",
"queryBufferSize" : 10240,
'clientCfg' : clientCfgDict
}
def init(self, conn, logSql, replicaVar=1):
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
self.replicaVar = int(replicaVar)
def test_function(self):  # case functions should be named starting with test_
"""
Here is the function description for single test:
Test case for custom function
"""
tdLog.info(f"Test case test custom function")
# execute the SQL
tdSql.execute(f"create database db_test_function")
tdSql.execute(f"create table db_test_function.stb (ts timestamp, c1 int, c2 float, c3 double) tags (t1 int unsigned);")
# query the result
tdSql.query(f"show databases")
# print the result and check it
tdLog.info(f"{tdSql.queryResult}")
tdSql.checkRows(3)
tdSql.checkData(2,0,"db_test_function")
def run(self):
self.test_function()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -75,6 +75,7 @@ class TDTestCase:
tdLog.debug(" LIMIT test_case2 ............ [OK]")
self.test_TD_33336()
self.ts5900()
# stop
def stop(self):
@ -137,6 +138,47 @@ class TDTestCase:
tdLog.debug("INSERT TABLE DATA ............ [OK]")
return
def ts5900query(self):
sql = "select max(c0) from ts5900.tt1"
tdSql.query(sql)
tdSql.checkRows(1)
tdSql.checkData(0, 0, '99.0')
sql = "select min(c0) from ts5900.tt1"
tdSql.query(sql)
tdSql.checkRows(1)
tdSql.checkData(0, 0, '1.0')
def ts5900(self):
tdSql.execute("drop database if exists ts5900;")
tdSql.execute("create database ts5900;")
tdSql.execute("create table ts5900.meters (ts timestamp, c0 varchar(64)) tags(t0 varchar(64));")
sql = "CREATE TABLE ts5900.`tt1` USING ts5900.`meters` TAGS ('t11')"
tdSql.execute(sql)
for i in range(155):
tdSql.query(f"insert into ts5900.tt1 values(now+{i*10}s, '{i+1}.0')")
tdSql.query("insert into ts5900.tt1 values(now, '1.2')")
tdSql.query("insert into ts5900.tt1 values(now+1s, '2.0')")
tdSql.query("insert into ts5900.tt1 values(now+2s, '3.0')")
tdSql.query("insert into ts5900.tt1 values(now+3s, '105.0')")
tdSql.query("insert into ts5900.tt1 values(now+4s, '4.0')")
sql = "select count(*) from ts5900.tt1"
tdSql.query(sql)
tdSql.checkRows(1)
tdSql.checkData(0, 0, '160')
for i in range(10):
tdSql.execute("flush database ts5900")
time.sleep(1)
self.ts5900query()
tdSql.query(f"insert into ts5900.tt1 values(now, '23.0')")
self.ts5900query()
tdLog.info(f"ts5900 test {i} ............ [OK]")
time.sleep(1)
# test case1 base
# def test_case1(self):

View File

@ -0,0 +1,84 @@
import taos
import sys
import time
import socket
import os
import threading
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
from util.common import *
from taos.tmq import *
from taos import *
sys.path.append("./7-tmq")
from tmqCommon import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
#tdSql.init(conn.cursor(), logSql) # output sql.txt file
def test(self):
tdSql.execute(f'create database if not exists db')
tdSql.execute(f'use db')
tdSql.execute(f'CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)')
tdSql.execute("INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-05 14:38:05.000',10.30000,219,0.31000)")
tdSql.execute("INSERT INTO d1002 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-05 14:38:05.000',10.30000,219,0.31000)")
tdSql.execute("INSERT INTO d1003 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-05 14:38:05.000',10.30000,219,0.31000)")
tdSql.execute("INSERT INTO d1004 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-05 14:38:05.000',10.30000,219,0.31000)")
tdSql.execute(f'create topic t0 as select * from meters')
tdSql.execute(f'create topic t1 as select * from meters')
consumer_dict = {
"group.id": "g1",
"td.connect.user": "root",
"td.connect.pass": "taosdata",
"auto.offset.reset": "earliest",
}
consumer = Consumer(consumer_dict)
try:
consumer.subscribe(["t0"])
except TmqError:
tdLog.exit(f"subscribe error")
try:
res = consumer.poll(1)
print(res)
consumer.unsubscribe()
try:
consumer.subscribe(["t1"])
except TmqError:
tdLog.exit(f"subscribe error")
res = consumer.poll(1)
print(res)
if res == None and taos_errno(None) != 0:
tdLog.exit(f"poll error %d" % taos_errno(None))
except TmqError:
tdLog.exit(f"poll error")
finally:
consumer.close()
def run(self):
self.test()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -0,0 +1,90 @@
import taos
import sys
import time
import socket
import os
import threading
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
from util.common import *
from taos.tmq import *
from taos import *
sys.path.append("./7-tmq")
from tmqCommon import *
class TDTestCase:
updatecfgDict = {'debugFlag': 143, 'asynclog': 0}
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
#tdSql.init(conn.cursor(), logSql) # output sql.txt file
def test(self):
tdSql.execute(f'create database if not exists db vgroups 1')
tdSql.execute(f'use db')
tdSql.execute(f'CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)')
tdSql.execute("INSERT INTO d1001 USING meters TAGS('California.SanFrancisco1', 2) VALUES('2018-10-05 14:38:05.000',10.30000,219,0.31000)")
tdSql.execute(f'create topic t0 as select * from meters')
consumer_dict = {
"group.id": "g1",
"td.connect.user": "root",
"td.connect.pass": "taosdata",
"auto.offset.reset": "earliest",
}
consumer = Consumer(consumer_dict)
try:
consumer.subscribe(["t0"])
except TmqError:
tdLog.exit(f"subscribe error")
index = 0;
try:
while True:
if index == 2:
break
res = consumer.poll(5)
print(res)
if not res:
print("res null")
break
val = res.value()
if val is None:
continue
for block in val:
data = block.fetchall()
for element in data:
print(f"data len: {len(data)}")
print(element)
if index == 0 and data[0][-1] != 2:
tdLog.exit(f"error: {data[0][-1]}")
if index == 1 and data[0][-1] != 100:
tdLog.exit(f"error: {data[0][-1]}")
tdSql.execute("alter table d1001 set tag groupId = 100")
tdSql.execute("INSERT INTO d1001 VALUES('2018-10-05 14:38:06.000',10.30000,219,0.31000)")
index += 1
finally:
consumer.close()
def run(self):
self.test()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -58,12 +58,12 @@ def checkRunTimeError():
if hwnd:
os.system("TASKKILL /F /IM taosd.exe")
#
#
# run case on previous cluster
#
def runOnPreviousCluster(host, config, fileName):
print("enter run on previeous")
# load case module
sep = "/"
if platform.system().lower() == 'windows':
@ -113,8 +113,9 @@ if __name__ == "__main__":
asan = False
independentMnode = False
previousCluster = False
opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrd:k:e:N:M:Q:C:RWD:n:i:aP', [
'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'restart', 'updateCfgDict', 'killv', 'execCmd','dnodeNums','mnodeNums','queryPolicy','createDnodeNums','restful','websocket','adaptercfgupdate','replicaVar','independentMnode','previous'])
crashGen = False
opts, args = getopt.gnu_getopt(sys.argv[1:], 'f:p:m:l:scghrd:k:e:N:M:Q:C:RWD:n:i:aP:G', [
'file=', 'path=', 'master', 'logSql', 'stop', 'cluster', 'valgrind', 'help', 'restart', 'updateCfgDict', 'killv', 'execCmd','dnodeNums','mnodeNums','queryPolicy','createDnodeNums','restful','websocket','adaptercfgupdate','replicaVar','independentMnode','previous',"crashGen"])
for key, value in opts:
if key in ['-h', '--help']:
tdLog.printNoPrefix(
@ -141,6 +142,7 @@ if __name__ == "__main__":
tdLog.printNoPrefix('-i independentMnode Mnode')
tdLog.printNoPrefix('-a address sanitizer mode')
tdLog.printNoPrefix('-P run case with [P]revious cluster, do not create new cluster to run case.')
tdLog.printNoPrefix('-G crashGen mode')
sys.exit(0)
@ -208,7 +210,7 @@ if __name__ == "__main__":
if key in ['-R', '--restful']:
restful = True
if key in ['-W', '--websocket']:
websocket = True
@ -228,6 +230,10 @@ if __name__ == "__main__":
if key in ['-P', '--previous']:
previousCluster = True
if key in ['-G', '--crashGen']:
crashGen = True
#
# do exeCmd command
#
@ -405,7 +411,7 @@ if __name__ == "__main__":
for dnode in tdDnodes.dnodes:
tdDnodes.starttaosd(dnode.index)
tdCases.logSql(logSql)
if restful or websocket:
tAdapter.deploy(adapter_cfg_dict)
tAdapter.start()
@ -450,7 +456,7 @@ if __name__ == "__main__":
else:
tdLog.debug(res)
tdLog.exit(f"alter queryPolicy to {queryPolicy} failed")
if ucase is not None and hasattr(ucase, 'noConn') and ucase.noConn == True:
conn = None
else:
@ -640,7 +646,7 @@ if __name__ == "__main__":
else:
tdLog.debug(res)
tdLog.exit(f"alter queryPolicy to {queryPolicy} failed")
# run case
if testCluster:
@ -692,6 +698,7 @@ if __name__ == "__main__":
# tdDnodes.StopAllSigint()
tdLog.info("Address sanitizer mode finished")
else:
tdDnodes.stopAll()
if not crashGen:
tdDnodes.stopAll()
tdLog.info("stop all td process finished")
sys.exit(0)

View File

@ -7,10 +7,10 @@ function usage() {
}
ent=1
while getopts "eh" opt; do
while getopts "e:h" opt; do
case $opt in
e)
ent=1
ent="$OPTARG"
;;
h)
usage