fix: conflicts from 3.0

This commit is contained in:
wangmm0220 2022-08-16 14:42:19 +08:00
commit 19c5051b8c
203 changed files with 9794 additions and 8530 deletions

View File

@ -35,16 +35,18 @@ TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series
# 文档
关于完整的使用手册,系统架构和更多细节,请参考 [TDengine 文档](https://docs.taosdata.com) 或者 [English Documents](https://docs.tdengine.com)。
关于完整的使用手册,系统架构和更多细节,请参考 [TDengine 文档](https://docs.taosdata.com) 或者 [TDengine Documentation](https://docs.tdengine.com)。
# 构建
TDengine 目前可以在 Linux、 Windows 等平台上安装和运行。任何 OS 的应用也可以选择 taosAdapter 的 RESTful 接口连接服务端 taosd。CPU 支持 X64/ARM64后续会支持 MIPS64、Alpha64、ARM32、RISC-V 等 CPU 架构。
用户可根据需求选择通过源码、[容器](https://docs.taosdata.com/3.0/get-started/docker/)、[安装包](https://docs.taosdata.com/3.0/get-started/package/)或[Kubernetes](https://docs.taosdata.com/3.0/deployment/k8s/)来安装。本快速指南仅适用于通过源码安装。
用户可根据需求选择通过源码、[容器](https://docs.taosdata.com/get-started/docker/)、[安装包](https://docs.taosdata.com/get-started/package/)或[Kubernetes](https://docs.taosdata.com/deployment/k8s/)来安装。本快速指南仅适用于通过源码安装。
TDengine 还提供一组辅助工具软件 taosTools目前它包含 taosBenchmark曾命名为 taosdemo和 taosdump 两个软件。默认 TDengine 编译不包含 taosTools, 您可以在编译 TDengine 时使用`cmake .. -DBUILD_TOOLS=true` 来同时编译 taosTools。
为了构建TDengine, 请使用 [CMake](https://cmake.org/) 3.0.2 或者更高版本。
## 安装工具
### Ubuntu 18.04 及以上版本 & Debian
@ -61,7 +63,7 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev
sudo apt install build-essential libjansson-dev libsnappy-dev liblzma-dev libz-dev pkg-config
```
### CentOS 7.9
```bash
sudo yum install epel-release
@ -78,13 +80,15 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
#### 在 CentOS 上构建 taosTools 安装依赖软件
#### For CentOS 7/RHEL
#### CentOS 7.9
```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
```
#### For CentOS 8/Rocky Linux
#### CentOS 8/Rocky Linux
```
sudo yum install -y epel-release
@ -129,14 +133,16 @@ TDengine 包含数个使用 Rust 语言开发的组件. 请参考 rust-lang.org
git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
Go 连接器和 Grafana 插件已移到其他独立仓库。
如果使用 https 协议下载比较慢,可以通过修改 ~/.gitconfig 文件添加以下两行设置使用 ssh 协议下载。需要首先上传 ssh 密钥到 GitHub详细方法请参考 GitHub 官方文档。
```
[url "git@github.com:"]
insteadOf = https://github.com/
```
## 特别说明
[JDBC 连接器](https://github.com/taosdata/taos-connector-jdbc)、[Go 连接器](https://github.com/taosdata/driver-go)、[Python 连接器](https://github.com/taosdata/taos-connector-python)、[Node.js 连接器](https://github.com/taosdata/taos-connector-node)、[C# 连接器](https://github.com/taosdata/taos-connector-dotnet)、[Rust 连接器](https://github.com/taosdata/taos-connector-rust) 和 [Grafana 插件](https://github.com/taosdata/grafanaplugin)已移到独立仓库。
## 构建 TDengine
@ -223,9 +229,9 @@ cmake .. && cmake --build .
sudo make install
```
用户可以在[文件目录结构](https://www.taosdata.com/cn/documentation/administrator#directories)中了解更多在操作系统中生成的目录或文件。
从 2.0 版本开始, 从源代码安装也会为 TDengine 配置服务管理。
用户也可以选择[从安装包中安装](https://www.taosdata.com/en/getting-started/#Install-from-Package)。
用户可以在[文件目录结构](https://docs.taosdata.com/reference/directory/)中了解更多在操作系统中生成的目录或文件。
从源代码安装也会为 TDengine 配置服务管理,用户也可以选择[从安装包中安装](https://docs.taosdata.com/get-started/package/)。
安装成功后,在终端中启动 TDengine 服务:
@ -233,13 +239,13 @@ sudo make install
sudo systemctl start taosd
```
用户可以使用 TDengine Shell 来连接 TDengine 服务,在终端中,输入:
用户可以使用 TDengine CLI 来连接 TDengine 服务,在终端中,输入:
```bash
taos
```
如果 TDengine Shell 连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印出错误消息。
如果 TDengine CLI 连接服务成功,将会打印出欢迎消息和版本信息。如果失败,则会打印出错误消息。
## Windows 系统
@ -265,7 +271,7 @@ sudo make install
./build/bin/taosd -c test/cfg
```
在另一个终端,使用 TDengine shell 连接服务器:
在另一个终端,使用 TDengine CLI 连接服务器:
```bash
./build/bin/taos -c test/cfg

View File

@ -14,13 +14,17 @@
[![Build status](https://ci.appveyor.com/api/projects/status/kf3pwh2or5afsgl9/branch/master?svg=true)](https://ci.appveyor.com/project/sangshuduo/tdengine-2n8ge/branch/master)
[![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=develop)](https://coveralls.io/github/taosdata/TDengine?branch=develop)
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
[![tdengine](https://snapcraft.io//tdengine/badge.svg)](https://snapcraft.io/tdengine)
English | [简体中文](README-CN.md) | We are hiring, check [here](https://tdengine.com/careers)
# What is TDengine
TDengine is an open source, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. TDengine differentiates itself from other TSDBs with the following advantages.:
TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series Database, TSDB).
TDengine is optimized for Internet of Things (IoT), Connected Cars, Industrial IoT, IT operation and maintenance, finance and other fields. In addition to the core time-series database functions, TDengine also provides functions such as caching, data subscription, and stream processing. It is a minimalist time-series data processing platform that minimizes the complexity of system design and reduces R&D and operating costs. Compared with other time-series databases, the main advantages of TDengine are as follows:
- High-Performance: TDengine is the only time-series database to solve the high-cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
@ -36,17 +40,24 @@ TDengine is an open source, cloud native time-series database optimized for Inte
# Documentation
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([中文版](https://docs.taosdata.com))
For user manual, system design and architecture, please refer to [TDengine Documentation](https://docs.tdengine.com) ([TDengine 文档](https://docs.taosdata.com))
# Building
At the moment, TDengine server supports running on Linux, Windows, and macOS systems. You can choose to [install from packages](https://www.tdengine.com/getting-started/#Install-from-Package) or build it from the source code. This quick guide is for installation from the source only.
We provide a few useful tools such as taosBenchmark (was named taosdemo) and taosdump. They were part of TDengine. By default, TDengine compiling does not include taosTools. You can use 'cmake .. -DBUILD_TOOLS=true' to make them be compiled with TDengine.
At the moment, TDengine server supports running on Linux and Windows systems. Applications on any OS can also connect to the taosd service through the RESTful interface of taosAdapter. TDengine supports the X64/ARM64 CPU architectures, and will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
You can choose to install TDengine from source code, from a [container](https://docs.taosdata.com/get-started/docker/), from an [installation package](https://docs.taosdata.com/get-started/package/) or with [Kubernetes](https://docs.taosdata.com/deployment/k8s/) according to your needs. This quick guide only applies to installing from source.
TDengine provides a few useful tools such as taosBenchmark (formerly named taosdemo) and taosdump, which are packaged as taosTools. By default, compiling TDengine does not include taosTools. You can use `cmake .. -DBUILD_TOOLS=true` to compile them together with TDengine.
To build TDengine, use [CMake](https://cmake.org/) 3.0.2 or higher versions in the project directory.
## Install build dependencies
## Install build tools
### Ubuntu 18.04 and above or Debian
@ -56,6 +67,7 @@ sudo apt-get install -y gcc cmake build-essential git libssl-dev
#### Install build dependencies for taosTools
To build the [taosTools](https://github.com/taosdata/taos-tools) on Ubuntu/Debian, the following packages need to be installed.
```bash
@ -79,16 +91,32 @@ sudo dnf install -y gcc gcc-c++ make cmake epel-release git openssl-devel
#### Install build dependencies for taosTools on CentOS
To build the [taosTools](https://github.com/taosdata/taos-tools) on CentOS, the following packages need to be installed.
```bash
sudo yum install zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
#### CentOS 7.9
```
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
```
#### CentOS 8/Rocky Linux
```
sudo yum install -y epel-release
sudo yum install -y dnf-plugins-core
sudo yum config-manager --set-enabled powertools
sudo yum install -y zlib-devel xz-devel snappy-devel jansson jansson-devel pkgconfig libatomic libstdc++-static openssl-devel
```
Note: Since snappy lacks pkg-config support (refer to [link](https://github.com/google/snappy/pull/86)), cmake may report that libsnappy is not found. However, snappy still works well.
If the powertools installation fails, you can try to use:
```
sudo yum config-manager --set-enabled Powertools
```
### Setup golang environment
TDengine includes a few components, such as taosAdapter, that are developed in Go. Please refer to the golang.org official documentation for Go environment setup.
Please use Go version 1.14 or above. For users in China, we recommend using a proxy to accelerate package downloading.
@ -98,6 +126,12 @@ go env -w GO111MODULE=on
go env -w GOPROXY=https://goproxy.cn,direct
```
By default, taosAdapter is not built, but you can use the following command to build taosAdapter as the service for the RESTful interface.
```
cmake .. -DBUILD_HTTP=false
```
### Setup rust environment
TDengine includes a few components developed in Rust. Please refer to the rust-lang.org official documentation for Rust environment setup.
@ -111,7 +145,6 @@ git clone https://github.com/taosdata/TDengine.git
cd TDengine
```
The connectors for go & Grafana and some tools have been moved to separated repositories.
You can modify the file ~/.gitconfig to use the ssh protocol instead of https for better download speed. You will need to upload your ssh public key to GitHub first. Please refer to the GitHub official documentation for details.
@ -120,10 +153,16 @@ You can modify the file ~/.gitconfig to use ssh protocol instead of https for be
insteadOf = https://github.com/
```
## Special Note
[JDBC Connector](https://github.com/taosdata/taos-connector-jdbc), [Go Connector](https://github.com/taosdata/driver-go), [Python Connector](https://github.com/taosdata/taos-connector-python), [Node.js Connector](https://github.com/taosdata/taos-connector-node), [C# Connector](https://github.com/taosdata/taos-connector-dotnet), [Rust Connector](https://github.com/taosdata/taos-connector-rust) and [Grafana plugin](https://github.com/taosdata/grafanaplugin) have been moved to standalone repositories.
## Build TDengine
### On Linux platform
You can run the bash script `build.sh` to build both TDengine and taosTools including taosBenchmark and taosdump as below:
```bash
@ -139,11 +178,6 @@ cmake .. -DBUILD_TOOLS=true
make
```
Note TDengine 2.3.x.0 and later use a component named 'taosAdapter' to play http daemon role. If you pull TDengine source code to the latest from an existing codebase, please execute 'git submodule update --init --recursive' to pull taosAdapter source code, and use the following command to choose to build taosAdapter.
```
cmake .. -DBUILD_HTTP=false
```
You can use Jemalloc as memory allocator instead of glibc:
@ -212,8 +246,9 @@ After building successfully, TDengine can be installed by
sudo make install
```
Users can find more information about directories installed on the system in the [directory and files](https://www.taosdata.com/en/documentation/administrator/#Directory-and-Files) section. Since version 2.0, installing from source code will also configure service management for TDengine.
Users can also choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) for it.
Users can find more information about directories installed on the system in the [directory and files](https://docs.taosdata.com/reference/directory/) section.
Installing from source code will also configure service management for TDengine. Users can also choose to [install from packages](https://docs.taosdata.com/get-started/package/).
To start the service after installation, in a terminal, use:
@ -221,13 +256,13 @@ To start the service after installation, in a terminal, use:
sudo systemctl start taosd
```
Then users can use the [TDengine shell](https://www.taosdata.com/en/getting-started/#TDengine-Shell) to connect the TDengine server. In a terminal, use:
Then users can use the TDengine CLI to connect to the TDengine server. In a terminal, use:
```bash
taos
```
If TDengine shell connects the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
If the TDengine CLI connects to the server successfully, welcome messages and version info are printed. Otherwise, an error message is shown.
## On Windows platform
@ -253,7 +288,7 @@ If you don't want to run TDengine as a service, you can run it in current shell.
./build/bin/taosd -c test/cfg
```
In another terminal, use the TDengine shell to connect the server:
In another terminal, use the TDengine CLI to connect to the server:
```bash
./build/bin/taos -c test/cfg
@ -263,7 +298,7 @@ option "-c test/cfg" specifies the system configuration file directory.
# Try TDengine
It is easy to run SQL commands from TDengine shell which is the same as other SQL databases.
It is easy to run SQL commands from the TDengine CLI, which is the same as in other SQL databases.
```sql
CREATE DATABASE demo;
@ -283,7 +318,7 @@ Query OK, 2 row(s) in set (0.001700s)
## Official Connectors
TDengine provides abundant developing tools for users to develop on TDengine. Follow the links below to find your desired connectors and relevant documentation.
TDengine provides abundant developing tools for users to develop on TDengine, including C/C++, Java, Python, Go, Node.js, C# and RESTful. Follow the links below to find your desired connectors and relevant documentation.
- [Java](https://docs.taosdata.com/reference/connector/java/)
- [C/C++](https://docs.taosdata.com/reference/connector/cpp/)
@ -294,11 +329,6 @@ TDengine provides abundant developing tools for users to develop on TDengine. Fo
- [C#](https://docs.taosdata.com/reference/connector/csharp/)
- [RESTful API](https://docs.taosdata.com/reference/rest-api/)
# How to run the test cases and how to add a new test case
TDengine's test framework and all test cases are fully open source.
Please refer to [this document](https://github.com/taosdata/TDengine/blob/develop/tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md) for how to run test and develop new test case.
# Contribute to TDengine
Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to the project.
@ -306,7 +336,3 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to th
# Join TDengine WeChat Group
Add WeChat account “tdengine” to join the group, where you can communicate with other users.
# [User List](https://github.com/taosdata/TDengine/issues/2432)
If you are using TDengine and feel it helps or you'd like to do some contributions, please add your company to [user list](https://github.com/taosdata/TDengine/issues/2432) and let us know your needs.

View File

@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 53a0103
GIT_TAG d237772
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -27,10 +27,6 @@ else ()
cat("${TD_SUPPORT_DIR}/taosadapter_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif()
if(TD_LINUX_64 AND JEMALLOC_ENABLED)
cat("${TD_SUPPORT_DIR}/jemalloc_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
endif()
# pthread
if(${BUILD_PTHREAD})
cat("${TD_SUPPORT_DIR}/pthread_CMakeLists.txt.in" ${CONTRIB_TMP_FILE})
@ -396,19 +392,6 @@ if(${BUILD_WITH_SQLITE})
endif(NOT TD_WINDOWS)
endif(${BUILD_WITH_SQLITE})
# jemalloc
IF (TD_LINUX_64 AND JEMALLOC_ENABLED)
include(ExternalProject)
ExternalProject_Add(jemalloc
PREFIX "jemalloc"
SOURCE_DIR ${CMAKE_CURRENT_SOURCE_DIR}/jemalloc
BUILD_IN_SOURCE 1
CONFIGURE_COMMAND ./autogen.sh COMMAND ./configure --prefix=${CMAKE_BINARY_DIR}/build/
BUILD_COMMAND ${MAKE}
)
INCLUDE_DIRECTORIES(${CMAKE_BINARY_DIR}/build/include)
ENDIF ()
# addr2line
if(${BUILD_ADDR2LINE})
if(NOT ${TD_WINDOWS})

View File

@ -1,7 +1,9 @@
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
```
:::note
For now, the Java connector doesn't provide asynchronous subscription, but `TimerTask` can be used to achieve a similar purpose.
:::
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```

View File

@ -1,3 +1,3 @@
```rs
```rust
{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
```
```

View File

@ -8,16 +8,13 @@ TDengine provides a rich set of APIs (application development interface). To fac
## Supported platforms
Currently, TDengine's native interface connectors can support platforms such as X64/X86/ARM64/ARM32/MIPS/Alpha hardware platforms and Linux/Win64/Win32 development environments. The comparison matrix is as follows.
Currently, TDengine's native interface connectors support the X64/ARM64 hardware platforms and the Linux/Win64 development environments. The comparison matrix is as follows.
| **CPU** | **OS** | **JDBC** | **Python** | **Go** | **Node.js** | **C#** | **Rust** | C/C++ |
| ------- | ------ | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
| **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win32** | ● | ● | ● | ● | ○ | ○ | ● |
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
| **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
| **MIPS** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
Where ● means the official test verification passed, ○ means the unofficial test verification passed, -- means no assurance.

View File

@ -41,19 +41,20 @@ Please refer to [Version Support List](/reference/connector#version-support).
TDengine currently supports timestamp, number, character, Boolean type, and the corresponding type conversion with Java is as follows:
| TDengine DataType | JDBCType (driver version < 2.0.24) | JDBCType (driver version > = 2.0.24) |
| ----------------- | ---------------------------------- | ------------------------------------ |
| TIMESTAMP | java.lang.Long | java.sql.Timestamp |
| INT | java.lang.Integer | java.lang.Integer |
| BIGINT | java.lang.Long | java.lang.Long |
| FLOAT | java.lang.Float | java.lang.Float |
| DOUBLE | java.lang.Double | java.lang.Double |
| SMALLINT | java.lang.Short | java.lang.Short |
| TINYINT | java.lang.Byte | java.lang.Byte |
| BOOL | java.lang.Boolean | java.lang.Boolean |
| BINARY | java.lang.String | byte array |
| NCHAR | java.lang.String | java.lang.String |
| JSON | - | java.lang.String |
| TDengine DataType | JDBCType |
| ----------------- | ---------------------------------- |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
**Note**: Only TAG columns support the JSON type.
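For example, a JSON value can only be used in a tag column, not in a regular column (an illustrative sketch; the table names here are hypothetical, not taken from this document):

```sql
-- a super table with a single JSON tag (a JSON tag must be the only tag)
CREATE STABLE meters_json (ts TIMESTAMP, current FLOAT) TAGS (info JSON);
-- a subtable whose tag value is a JSON string
CREATE TABLE d1 USING meters_json TAGS ('{"location": "California.SanFrancisco", "groupid": 2}');
```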
@ -81,7 +82,7 @@ Add following dependency in the `pom.xml` file of your Maven project:
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.**</version>
<version>3.0.0</version>
</dependency>
```
@ -845,7 +846,13 @@ Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/develo
**Cause**: Currently, TDengine only supports 64-bit JDK.
**Solution**: Reinstall the 64-bit JDK. 4.
**Solution**: Reinstall the 64-bit JDK.
4. java.lang.NoSuchMethodError: setByteArray
**Cause**: taos-jdbcdriver version 3.* only supports TDengine 3.0 or above.
**Solution**: Use taos-jdbcdriver 2.* to connect to TDengine 2.*.
For other questions, please refer to [FAQ](/train-faq/faq)

View File

@ -5,11 +5,11 @@ description: "List of platforms supported by TDengine server, client, and connec
## List of supported platforms for TDengine server
| | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **Other Linux** | **UOS** | **Kylin** | **Ningsi V60/V80** | **HUAWEI EulerOS** |
| ------------------ | ----------------- | ---------------- | ---------------- | --------------- | ------- | --------- | ------------------ | ------------------ |
| X64 | ● | ● | ● | | ● | ● | ● | |
| Raspberry Pi ARM64 | | | | ● | | | | |
| HUAWEI cloud ARM64 | | | | | | | | ● |
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **UOS** | **Kylin** | **Ningsi V60/V80** |
| ------------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------- | --------- | ------------------ |
| X64 | ● | ● | ● | ● | ● | ● | ● |
| Raspberry Pi ARM64 | | | ● | | | | |
| HUAWEI Cloud ARM64 | | | | ● | | | |
Note: ● means officially tested and verified, ○ means unofficially tested and verified.
@ -19,15 +19,15 @@ TDengine's connector can support a wide range of platforms, including X64/X86/AR
The comparison matrix is as follows.
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **MIPS** | **Alpha** |
| ----------- | ------------- | --------- | --------- | ------------- | --------- | --------- | --------- |
| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** |
| **C/C++** | ● | ● | ● | ○ | ● | ● | ● |
| **JDBC** | ● | ● | ● | ○ | ● | ● | ● |
| **Python** | ● | ● | ● | ○ | ● | ● | -- |
| **Go** | ● | ● | ● | ○ | ● | ○ | -- |
| **NodeJs** | ● | ● | ○ | ○ | ● | ○ | -- |
| **C#** | ● | ● | ○ | ○ | ○ | ○ | -- |
| **RESTful** | ● | ● | ● | ● | ● | ● | ● |
| **CPU** | **X64 64bit** | **X64 64bit** | **ARM64** |
| ----------- | ------------- | ------------- | --------- |
| **OS** | **Linux** | **Win64** | **Linux** |
| **C/C++** | ● | ● | ● |
| **JDBC** | ● | ● | ● |
| **Python** | ● | ● | ● |
| **Go** | ● | ● | ● |
| **NodeJs** | ● | ● | ● |
| **C#** | ● | ● | ○ |
| **RESTful** | ● | ● | ● |
Note: ● means the official test is verified, ○ means the unofficial test is verified, -- means not verified.

View File

@ -10,7 +10,7 @@ namespace TDengineExample
{
IntPtr conn = GetConnection();
// run query
IntPtr res = TDengine.Query(conn, "SELECT * FROM test.meters LIMIT 2");
IntPtr res = TDengine.Query(conn, "SELECT * FROM meters LIMIT 2");
if (TDengine.ErrorNo(res) != 0)
{
Console.WriteLine("Failed to query since: " + TDengine.Error(res));

View File

@ -21,7 +21,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.38</version>
<version>3.0.0</version>
</dependency>
<!-- ANCHOR_END: dep-->
<dependency>

View File

@ -0,0 +1,62 @@
package com.taos.example;
import java.sql.Timestamp;
public class Meters {
private Timestamp ts;
private float current;
private int voltage;
private int groupid;
private String location;
public Timestamp getTs() {
return ts;
}
public void setTs(Timestamp ts) {
this.ts = ts;
}
public float getCurrent() {
return current;
}
public void setCurrent(float current) {
this.current = current;
}
public int getVoltage() {
return voltage;
}
public void setVoltage(int voltage) {
this.voltage = voltage;
}
public int getGroupid() {
return groupid;
}
public void setGroupid(int groupid) {
this.groupid = groupid;
}
public String getLocation() {
return location;
}
public void setLocation(String location) {
this.location = location;
}
@Override
public String toString() {
return "Meters{" +
"ts=" + ts +
", current=" + current +
", voltage=" + voltage +
", groupid=" + groupid +
", location='" + location + '\'' +
'}';
}
}

View File

@ -0,0 +1,6 @@
package com.taos.example;
import com.taosdata.jdbc.tmq.ReferenceDeserializer;
public class MetersDeserializer extends ReferenceDeserializer<Meters> {
}

View File

@ -1,65 +1,78 @@
package com.taos.example;
import com.taosdata.jdbc.TSDBConnection;
import com.taosdata.jdbc.TSDBDriver;
import com.taosdata.jdbc.TSDBResultSet;
import com.taosdata.jdbc.TSDBSubscribe;
import com.taosdata.jdbc.tmq.ConsumerRecords;
import com.taosdata.jdbc.tmq.TMQConstants;
import com.taosdata.jdbc.tmq.TaosConsumer;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.atomic.AtomicBoolean;
public class SubscribeDemo {
private static final String topic = "topic-meter-current-bg-10";
private static final String sql = "select * from meters where current > 10";
private static final String TOPIC = "tmq_topic";
private static final String DB_NAME = "meters";
private static final AtomicBoolean shutdown = new AtomicBoolean(false);
public static void main(String[] args) {
Connection connection = null;
TSDBSubscribe subscribe = null;
Timer timer = new Timer();
timer.schedule(new TimerTask() {
public void run() {
shutdown.set(true);
}
}, 3_000);
try {
// prepare
Class.forName("com.taosdata.jdbc.TSDBDriver");
String jdbcUrl = "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata";
Connection connection = DriverManager.getConnection(jdbcUrl);
try (Statement statement = connection.createStatement()) {
statement.executeUpdate("drop topic if exists " + TOPIC);
statement.executeUpdate("drop database if exists " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME);
statement.executeUpdate("use " + DB_NAME);
statement.executeUpdate(
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(16))");
statement.executeUpdate("CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')");
statement.executeUpdate("INSERT INTO `d0` values(now - 10s, 0.32, 116)");
statement.executeUpdate("INSERT INTO `d0` values(now - 8s, NULL, NULL)");
statement.executeUpdate(
"INSERT INTO `d1` USING `meters` TAGS(1, 'San Francisco') values(now - 9s, 10.1, 119)");
statement.executeUpdate(
"INSERT INTO `d1` values (now-8s, 10, 120) (now - 6s, 10, 119) (now - 4s, 11.2, 118)");
// create topic
statement.executeUpdate("create topic " + TOPIC + " as select * from meters");
}
// create consumer
Properties properties = new Properties();
properties.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
properties.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
String jdbcUrl = "jdbc:TAOS://127.0.0.1:6030/power?user=root&password=taosdata";
connection = DriverManager.getConnection(jdbcUrl, properties);
// create subscribe
subscribe = ((TSDBConnection) connection).subscribe(topic, sql, true);
int count = 0;
while (count < 10) {
// wait 1 second to avoid frequent calls to consume
TimeUnit.SECONDS.sleep(1);
// consume
TSDBResultSet resultSet = subscribe.consume();
if (resultSet == null) {
continue;
}
ResultSetMetaData metaData = resultSet.getMetaData();
while (resultSet.next()) {
int columnCount = metaData.getColumnCount();
for (int i = 1; i <= columnCount; i++) {
System.out.print(metaData.getColumnLabel(i) + ": " + resultSet.getString(i) + "\t");
properties.setProperty(TMQConstants.BOOTSTRAP_SERVERS, "127.0.0.1:6030");
properties.setProperty(TMQConstants.MSG_WITH_TABLE_NAME, "true");
properties.setProperty(TMQConstants.ENABLE_AUTO_COMMIT, "true");
properties.setProperty(TMQConstants.GROUP_ID, "test");
properties.setProperty(TMQConstants.VALUE_DESERIALIZER,
"com.taosdata.jdbc.MetersDeserializer");
// poll data
try (TaosConsumer<Meters> consumer = new TaosConsumer<>(properties)) {
consumer.subscribe(Collections.singletonList(TOPIC));
while (!shutdown.get()) {
ConsumerRecords<Meters> meters = consumer.poll(Duration.ofMillis(100));
for (Meters meter : meters) {
System.out.println(meter);
}
System.out.println();
count++;
}
consumer.unsubscribe();
}
} catch (Exception e) {
} catch (ClassNotFoundException | SQLException e) {
e.printStackTrace();
} finally {
try {
if (null != subscribe)
// close subscribe
subscribe.close(true);
if (connection != null)
connection.close();
} catch (SQLException throwable) {
throwable.printStackTrace();
}
}
timer.cancel();
}
}

View File

@ -1,20 +1,13 @@
const { options, connect } = require("@tdengine/rest");
//A cursor also needs to be initialized in order to interact with TDengine from Node.js.
const taos = require("@tdengine/client");
var conn = taos.connect({
host: "127.0.0.1",
user: "root",
password: "taosdata",
config: "/etc/taos",
port: 0,
});
var cursor = conn.cursor(); // Initializing a new cursor
async function test() {
options.path = "/rest/sql";
options.host = "localhost";
let conn = connect(options);
let cursor = conn.cursor();
try {
let res = await cursor.query("SELECT server_version()");
res.toString();
} catch (err) {
console.log(err);
}
}
test();
// output:
// server_version() |
// ===================
// 3.0.0.0 |
//Close a connection
conn.close();

View File

@ -28,7 +28,8 @@ function runConsumer() {
console.log(msg.topicPartition);
console.log(msg.block);
console.log(msg.fields)
consumer.commit(msg);
// fixme(@xiaolei): commented temp, should be fixed.
//consumer.commit(msg);
console.log(`=======consumer ${i} done`)
}
@ -48,4 +49,4 @@ try {
cursor.close();
conn.close();
}, 2000);
}
}

View File

@ -1,7 +1,7 @@
const { options, connect } = require("@tdengine/rest");
async function test() {
options.path = "/rest/sqlt";
options.path = "/rest/sql";
options.host = "localhost";
let conn = connect(options);
let cursor = conn.cursor();

View File

@ -0,0 +1,59 @@
import taos
from taos.tmq import *
conn = taos.connect()
# create database
conn.execute("drop database if exists py_tmq")
conn.execute("create database if not exists py_tmq vgroups 2")
# create table and stables
conn.select_db("py_tmq")
conn.execute("create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags(t1 int)")
conn.execute("create table if not exists tb1 using stb1 tags(1)")
conn.execute("create table if not exists tb2 using stb1 tags(2)")
conn.execute("create table if not exists tb3 using stb1 tags(3)")
# create topic
conn.execute("drop topic if exists topic_ctb_column")
conn.execute("create topic if not exists topic_ctb_column as select ts, c1, c2, c3 from stb1")
# set consumer configure options
conf = TaosTmqConf()
conf.set("group.id", "tg2")
conf.set("td.connect.user", "root")
conf.set("td.connect.pass", "taosdata")
conf.set("enable.auto.commit", "true")
conf.set("msg.with.table.name", "true")
def tmq_commit_cb_print(tmq, resp, offset, param=None):
print(f"commit: {resp}, tmq: {tmq}, offset: {offset}, param: {param}")
conf.set_auto_commit_cb(tmq_commit_cb_print, None)
# build consumer
tmq = conf.new_consumer()
# build topic list
topic_list = TaosTmqList()
topic_list.append("topic_ctb_column")
# subscribe consumer
tmq.subscribe(topic_list)
# check subscriptions
sub_list = tmq.subscription()
print("subscribed topics: ",sub_list)
# start subscribe
while 1:
res = tmq.poll(1000)
if res:
topic = res.get_topic_name()
vg = res.get_vgroup_id()
db = res.get_db_name()
print(f"topic: {topic}\nvgroup id: {vg}\ndb: {db}")
for row in res:
print(row)
tb = res.get_table_name()
print(f"from table: {tb}")

View File

@ -4,7 +4,7 @@ sidebar_label: 文档首页
slug: /
---
TDengine是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的时序数据库(Time-Series Database, TSDB), 它专为物联网、工业互联网、金融等场景优化设计。同时它还带有内建的缓存、流式计算、数据订阅等系统功能,能大幅减少系统设计的复杂度,降低研发和运营成本,是一极简的时序数据处理平台。本文档是 TDengine 用户手册,主要是介绍 TDengine 的基本概念、安装、使用、功能、开发接口、运营维护、TDengine 内核设计等等,它主要是面向架构师、开发者与系统管理员的。
TDengine是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的<a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">时序数据库</a><a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>, 它专为物联网、工业互联网、金融等场景优化设计。同时它还带有内建的缓存、流式计算、数据订阅等系统功能,能大幅减少系统设计的复杂度,降低研发和运营成本,是一极简的时序数据处理平台。本文档是 TDengine 用户手册,主要是介绍 TDengine 的基本概念、安装、使用、功能、开发接口、运营维护、TDengine 内核设计等等,它主要是面向架构师、开发者与系统管理员的。
TDengine 充分利用了时序数据的特点提出了“一个数据采集点一张表”与“超级表”的概念设计了创新的存储引擎让数据的写入、查询和存储效率都得到极大的提升。为正确理解并使用TDengine, 无论如何,请您仔细阅读[基本概念](./concept)一章。

View File

@ -3,7 +3,7 @@ title: 产品简介
toc_max_heading_level: 2
---
TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/tdengine/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的时序数据库 (Time-Series Database, TSDB)。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外TDengine 还提供[缓存](../develop/cache/)、[数据订阅](../develop/tmq)、[流式计算](../develop/stream)等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。
TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/tdengine/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的<a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">时序数据库</a><a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>。TDengine 能被广泛运用于物联网、工业互联网、车联网、IT 运维、金融等领域。除核心的时序数据库功能外TDengine 还提供[缓存](../develop/cache/)、[数据订阅](../develop/tmq)、[流式计算](../develop/stream)等功能,是一极简的时序数据处理平台,最大程度的减小系统设计的复杂度,降低研发和运营成本。
本章节介绍TDengine的主要功能、竞争优势、适用场景、与其他数据库的对比测试等等让大家对TDengine有个整体的了解。

View File

@ -2,18 +2,15 @@
sidebar_label: Docker
title: 通过 Docker 快速体验 TDengine
---
:::info
如果您希望为 TDengine 贡献代码或对内部技术实现感兴趣,请参考[TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
:::
本节首先介绍如何通过 Docker 快速体验 TDengine然后介绍如何在 Docker 环境下体验 TDengine 的写入和查询功能。
本节首先介绍如何通过 Docker 快速体验 TDengine然后介绍如何在 Docker 环境下体验 TDengine 的写入和查询功能。如果你不熟悉 Docker请使用[安装包的方式快速体验](../../get-started/package/)。如果您希望为 TDengine 贡献代码或对内部技术实现感兴趣,请参考 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
## 启动 TDengine
如果已经安装了 docker 只需执行下面的命令。
```shell
docker run -d -p 6030:6030 -p 6041/6041 -p 6043-6049/6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
```
注意TDengine 3.0 服务端仅使用 6030 TCP 端口。6041 为 taosAdapter 所使用提供 REST 服务端口。6043-6049 为 taosAdapter 提供第三方应用接入所使用端口,可根据需要选择是否打开。

View File

@ -6,12 +6,7 @@ title: 使用安装包立即开始
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
:::info
如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
:::
在 Linux 系统上TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 及衍生系统rpm 支持 CentOS/RHEL/SUSE 及衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。TDengine 也提供 Windows x64 平台的安装包。
在 Linux 系统上TDengine 开源版本提供 deb 和 rpm 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 deb 支持 Debian/Ubuntu 及衍生系统rpm 支持 CentOS/RHEL/SUSE 及衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。TDengine 也提供 Windows x64 平台的安装包。您也可以[用Docker立即体验](../../get-started/docker/)。如果您希望对 TDengine 贡献代码或对内部实现感兴趣,请参考我们的 [TDengine GitHub 主页](https://github.com/taosdata/TDengine) 下载源码构建和安装.
## 安装
@ -29,6 +24,7 @@ echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-stable stable main" |
如果安装 Beta 版需要安装包仓库
```bash
wget -qO - http://repos.taosdata.com/tdengine.key | sudo apt-key add -
echo "deb [arch=amd64] http://repos.taosdata.com/tdengine-beta beta main" | sudo tee /etc/apt/sources.list.d/tdengine-beta.list
```
@ -46,7 +42,7 @@ apt-get 方式只适用于 Debian 或 Ubuntu 系统
</TabItem>
<TabItem label="Deb 安装" value="debinst">
1. 从官网下载获得 deb 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.deb
1. 从 [发布历史页面](../../releases) 下载获得 deb 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.deb
2. 进入到 TDengine-server-3.0.0.0-Linux-x64.deb 安装包所在目录,执行如下的安装命令:
```bash
@ -57,7 +53,7 @@ sudo dpkg -i TDengine-server-3.0.0.0-Linux-x64.deb
<TabItem label="RPM 安装" value="rpminst">
1. 从官网下载获得 rpm 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.rpm
1. 从 [发布历史页面](../../releases) 下载获得 rpm 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.rpm
2. 进入到 TDengine-server-3.0.0.0-Linux-x64.rpm 安装包所在目录,执行如下的安装命令:
```bash
@ -68,7 +64,7 @@ sudo rpm -ivh TDengine-server-3.0.0.0-Linux-x64.rpm
<TabItem label="tar.gz 安装" value="tarinst">
1. 从官网下载获得 tar.gz 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.tar.gz
1. 从 [发布历史页面](../../releases) 下载获得 tar.gz 安装包,例如 TDengine-server-3.0.0.0-Linux-x64.tar.gz
2. 进入到 TDengine-server-3.0.0.0-Linux-x64.tar.gz 安装包所在目录,先解压文件后,进入子目录,执行其中的 install.sh 安装脚本:
```bash
@ -90,7 +86,7 @@ install.sh 安装脚本在执行过程中,会通过命令行交互界面询问
<TabItem label="Windows 安装" value="windows">
1. 从官网下载获得 exe 安装程序,例如 TDengine-server-3.0.0.0-Windows-x64.exe
1. 从 [发布历史页面](../../releases) 下载获得 exe 安装程序,例如 TDengine-server-3.0.0.0-Windows-x64.exe
2. 运行 TDengine-server-3.0.0.0-Windows-x64.exe 来安装 TDengine。
</TabItem>

View File

@ -43,7 +43,7 @@ Query OK, 2 row(s) in set (0.001100s)
为满足物联网场景的需求TDengine 支持几个特殊的函数,比如 twa(时间加权平均)spread (最大值与最小值的差)last_row(最后一条记录)等,更多与物联网场景相关的函数将添加进来。
具体的查询语法请看 [TAOS SQL 的数据查询](/taos-sql/select) 章节。
具体的查询语法请看 [TAOS SQL 的数据查询](../../taos-sql/select) 章节。
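例如(示意写法,假设已按前文建好超级表 meters 及其子表 d1001

```sql
SELECT TWA(current) FROM d1001 WHERE ts > now - 1h;
SELECT SPREAD(voltage) FROM meters;
SELECT LAST_ROW(*) FROM meters;
```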
## 多表聚合查询
@ -74,7 +74,7 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now -
Query OK, 1 row(s) in set (0.002136s)
```
在 [TAOS SQL 的数据查询](/taos-sql/select) 一章,查询类操作都会注明是否支持超级表。
在 [TAOS SQL 的数据查询](../../taos-sql/select) 一章,查询类操作都会注明是否支持超级表。
## 降采样查询、插值
@ -121,7 +121,7 @@ Query OK, 5 row(s) in set (0.001521s)
如果一个时间间隔里没有采集的数据TDengine 还提供插值计算的功能。
语法规则细节请见 [TAOS SQL 的按时间窗口切分聚合](/taos-sql/interval) 章节。
语法规则细节请见 [TAOS SQL 的按时间窗口切分聚合](../../taos-sql/distinguished) 章节。
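例如(示意写法,将 meters 表的数据按 10 分钟窗口降采样求平均电流,空窗口使用前一个非空值进行插值填充):

```sql
SELECT _wstart, AVG(current) FROM meters
  WHERE ts > now - 1d AND ts <= now
  INTERVAL(10m) FILL(PREV);
```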
## 示例代码

View File

@ -4,8 +4,16 @@ description: "TDengine 流式计算将数据的写入、预处理、复杂分析
title: 流式计算
---
在时序数据的处理中,经常要对原始数据进行清洗、预处理,再使用时序数据库进行长久的储存。用户通常需要在时序数据库之外再搭建 Kafka、Flink、Spark 等流计算处理引擎,增加了用户的开发成本和维护成本。
使用 TDengine 3.0 的流式计算引擎能够最大限度的减少对这些额外中间件的依赖,真正将数据的写入、预处理、长期存储、复杂分析、实时计算、实时报警触发等功能融为一体,并且,所有这些任务只需要使用 SQL 完成,极大降低了用户的学习成本、使用成本。
在时序数据的处理中,经常要对原始数据进行清洗、预处理,再使用时序数据库进行长久的储存。在传统的时序数据解决方案中,常常需要部署 Kafka、Flink 等流处理系统。而流处理系统的复杂性,带来了高昂的开发与运维成本。
TDengine 3.0 的流式计算引擎提供了实时处理写入的数据流的能力,使用 SQL 定义实时流变换,当数据被写入流的源表后,数据会被以定义的方式自动处理,并根据定义的触发模式向目的表推送结果。它提供了替代复杂流处理系统的轻量级解决方案,并能够在高吞吐的数据写入的情况下,提供毫秒级的计算结果延迟。
流式计算可以包含数据过滤标量函数计算含UDF以及窗口聚合支持滑动窗口、会话窗口与状态窗口可以以超级表、子表、普通表为源表写入到目的超级表。在创建流时目的超级表将被自动创建随后新插入的数据会按照流定义的方式处理并写入其中通过 partition by 子句,可以以表名或标签划分 partition不同的 partition 将写入到目的超级表的不同子表。
TDengine 的流式计算能够支持分布在多个 vnode 中的超级表聚合;还能够处理乱序数据的写入:它提供了 watermark 机制以度量容忍数据乱序的程度,并提供了 ignore expired 配置项以决定乱序数据的处理策略——丢弃或者重新计算。
详见 [流式计算](../../taos-sql/stream)
## 流式计算的创建
@ -14,7 +22,7 @@ CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name AS subq
stream_options: {
TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time]
WATERMARK time
IGNORE EXPIRED
IGNORE EXPIRED [0 | 1]
}
```
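例如(示意写法,流名与目标表名仅为示例,沿用下文示例一中的 power 库与 meters 超级表:按 1 分钟窗口计算平均电压,窗口关闭时触发计算,并以 10 秒的 watermark 容忍乱序数据):

```sql
create stream avg_vol_stream trigger window_close watermark 10s into avg_vol_output_stb as
  select _wstart as start, _wend as end, avg(voltage) as avg_voltage from meters interval(1m);
```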
@ -22,107 +30,84 @@ stream_options: {
## 示例一
企业电表的数据经常都是成百上千亿条的,那么想要将这些分散、凌乱的数据清洗或转换都需要比较长的时间,很难做到高效性和实时性,以下例子中,通过流计算可以将过去 12 小时电表电压大于 220V 的数据清洗掉,然后以小时为窗口整合并计算出每个窗口中电流的最大值,并将结果输出到指定的数据表中。
企业电表的数据经常都是成百上千亿条的,那么想要将这些分散、凌乱的数据清洗或转换都需要比较长的时间,很难做到高效性和实时性,以下例子中,通过流计算可以将电表电压大于 220V 的数据清洗掉,然后以 5 秒为窗口整合并计算出每个窗口中电流的最大值,最后将结果输出到指定的数据表中。
### 创建 DB 和原始数据表
首先准备数据,完成建库、建一张超级表和多张子表操作
```sql
drop database if exists stream_db;
create database stream_db;
DROP DATABASE IF EXISTS power;
CREATE DATABASE power;
USE power;
create stable stream_db.meters (ts timestamp, current float, voltage int) TAGS (location varchar(64), groupId int);
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
create table stream_db.d1001 using stream_db.meters tags("beijing", 1);
create table stream_db.d1002 using stream_db.meters tags("guangzhou", 2);
create table stream_db.d1003 using stream_db.meters tags("shanghai", 3);
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
CREATE TABLE d1002 USING meters TAGS ("California.SanFrancisco", 3);
CREATE TABLE d1003 USING meters TAGS ("California.LosAngeles", 2);
CREATE TABLE d1004 USING meters TAGS ("California.LosAngeles", 3);
```
### 创建流
```sql
create stream stream1 into stream_db.stream1_output_stb as select _wstart as start, _wend as end, max(current) as max_current from stream_db.meters where voltage <= 220 and ts > now - 12h interval (1h);
create stream current_stream into current_stream_output_stb as select _wstart as start, _wend as end, max(current) as max_current from meters where voltage <= 220 interval (5s);
```
### 写入数据
```sql
insert into stream_db.d1001 values(now-14h, 10.3, 210);
insert into stream_db.d1001 values(now-13h, 13.5, 216);
insert into stream_db.d1001 values(now-12h, 12.5, 219);
insert into stream_db.d1002 values(now-11h, 14.7, 221);
insert into stream_db.d1002 values(now-10h, 10.5, 218);
insert into stream_db.d1002 values(now-9h, 11.2, 220);
insert into stream_db.d1003 values(now-8h, 11.5, 217);
insert into stream_db.d1003 values(now-7h, 12.3, 227);
insert into stream_db.d1003 values(now-6h, 12.3, 215);
insert into d1001 values("2018-10-03 14:38:05.000", 10.30000, 219, 0.31000);
insert into d1001 values("2018-10-03 14:38:15.000", 12.60000, 218, 0.33000);
insert into d1001 values("2018-10-03 14:38:16.800", 12.30000, 221, 0.31000);
insert into d1002 values("2018-10-03 14:38:16.650", 10.30000, 218, 0.25000);
insert into d1003 values("2018-10-03 14:38:05.500", 11.80000, 221, 0.28000);
insert into d1003 values("2018-10-03 14:38:16.600", 13.40000, 223, 0.29000);
insert into d1004 values("2018-10-03 14:38:05.000", 10.80000, 223, 0.29000);
insert into d1004 values("2018-10-03 14:38:06.500", 11.50000, 221, 0.35000);
```
### 查询以观查结果
### 查询以观察结果
```sql
taos> select * from stream_db.stream1_output_stb;
start | end | max_current | group_id |
===================================================================================================
2022-08-09 14:00:00.000 | 2022-08-09 15:00:00.000 | 10.50000 | 0 |
2022-08-09 15:00:00.000 | 2022-08-09 16:00:00.000 | 11.20000 | 0 |
2022-08-09 16:00:00.000 | 2022-08-09 17:00:00.000 | 11.50000 | 0 |
2022-08-09 18:00:00.000 | 2022-08-09 19:00:00.000 | 12.30000 | 0 |
Query OK, 4 rows in database (0.012033s)
taos> select start, end, max_current from current_stream_output_stb;
start | end | max_current |
===========================================================================
2018-10-03 14:38:05.000 | 2018-10-03 14:38:10.000 | 10.30000 |
2018-10-03 14:38:15.000 | 2018-10-03 14:38:20.000 | 12.60000 |
Query OK, 2 rows in database (0.018762s)
```
## 示例二
某运营商平台要采集机房所有服务器的系统资源指标,包含 cpu、内存、网络延迟等采集后需要对数据进行四舍五入运算将地域和服务器名以下划线拼接然后将结果按时间排序并以服务器名分组输出到新的数据表中。
依然以示例一中的数据为基础,我们已经采集到了每个智能电表的电流和电压数据,现在需要求出有功功率和无功功率,并将地域和电表名以符号 "." 拼接,然后以电表名称分组输出到新的数据表中。
### 创建 DB 和原始数据表
首先准备数据,完成建库、建一张超级表和多张子表操作
```sql
drop database if exists stream_db;
create database stream_db;
create stable stream_db.idc (ts timestamp, cpu float, mem float, latency float) TAGS (location varchar(64), groupId int);
create table stream_db.server01 using stream_db.idc tags("beijing", 1);
create table stream_db.server02 using stream_db.idc tags("shanghai", 2);
create table stream_db.server03 using stream_db.idc tags("beijing", 2);
create table stream_db.server04 using stream_db.idc tags("tianjin", 3);
create table stream_db.server05 using stream_db.idc tags("shanghai", 1);
```
参考示例一 [创建 DB 和原始数据表](#创建-db-和原始数据表)
### 创建流
```sql
create stream stream2 into stream_db.stream2_output_stb as select ts, concat_ws("_", location, tbname) as server_location, round(cpu) as cpu, round(mem) as mem, round(latency) as latency from stream_db.idc partition by tbname order by ts;
create stream power_stream into power_stream_output_stb as select ts, concat_ws(".", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;
```
### 写入数据
```sql
insert into stream_db.server01 values(now-14h, 50.9, 654.8, 23.11);
insert into stream_db.server01 values(now-13h, 13.5, 221.2, 11.22);
insert into stream_db.server02 values(now-12h, 154.7, 218.3, 22.33);
insert into stream_db.server02 values(now-11h, 120.5, 111.5, 5.55);
insert into stream_db.server03 values(now-10h, 101.5, 125.6, 5.99);
insert into stream_db.server03 values(now-9h, 12.3, 165.6, 6.02);
insert into stream_db.server04 values(now-8h, 160.9, 120.7, 43.51);
insert into stream_db.server04 values(now-7h, 240.9, 520.7, 54.55);
insert into stream_db.server05 values(now-6h, 190.9, 320.7, 55.43);
insert into stream_db.server05 values(now-5h, 110.9, 600.7, 35.54);
```
### 查询以观查结果
```sql
taos> select ts, server_location, cpu, mem, latency from stream_db.stream2_output_stb;
ts | server_location | cpu | mem | latency |
================================================================================================================================
2022-08-09 21:24:56.785 | beijing_server01 | 51.00000 | 655.00000 | 23.00000 |
2022-08-09 22:24:56.795 | beijing_server01 | 14.00000 | 221.00000 | 11.00000 |
2022-08-09 23:24:56.806 | shanghai_server02 | 155.00000 | 218.00000 | 22.00000 |
2022-08-10 00:24:56.815 | shanghai_server02 | 121.00000 | 112.00000 | 6.00000 |
2022-08-10 01:24:56.826 | beijing_server03 | 102.00000 | 126.00000 | 6.00000 |
2022-08-10 02:24:56.838 | beijing_server03 | 12.00000 | 166.00000 | 6.00000 |
2022-08-10 03:24:56.846 | tianjin_server04 | 161.00000 | 121.00000 | 44.00000 |
2022-08-10 04:24:56.853 | tianjin_server04 | 241.00000 | 521.00000 | 55.00000 |
2022-08-10 05:24:56.866 | shanghai_server05 | 191.00000 | 321.00000 | 55.00000 |
2022-08-10 06:24:57.301 | shanghai_server05 | 111.00000 | 601.00000 | 36.00000 |
Query OK, 10 rows in database (0.022950s)
```
参考示例一 [写入数据](#写入数据)
### 查询以观察结果
```sql
taos> select ts, meter_location, active_power, reactive_power from power_stream_output_stb;
ts | meter_location | active_power | reactive_power |
===================================================================================================================
2018-10-03 14:38:05.000 | California.LosAngeles.d1004 | 2307.834596289 | 688.687331847 |
2018-10-03 14:38:06.500 | California.LosAngeles.d1004 | 2387.415754896 | 871.474763418 |
2018-10-03 14:38:05.500 | California.LosAngeles.d1003 | 2506.240411679 | 720.680274962 |
2018-10-03 14:38:16.600 | California.LosAngeles.d1003 | 2863.424274422 | 854.482390839 |
2018-10-03 14:38:05.000 | California.SanFrancisco.d1001 | 2148.178871730 | 688.120784090 |
2018-10-03 14:38:15.000 | California.SanFrancisco.d1001 | 2598.589176205 | 890.081451418 |
2018-10-03 14:38:16.800 | California.SanFrancisco.d1001 | 2588.728381186 | 829.240910475 |
2018-10-03 14:38:16.650 | California.SanFrancisco.d1002 | 2175.595991997 | 555.520860397 |
Query OK, 8 rows in database (0.014753s)
```

View File

@ -1,246 +0,0 @@
---
sidebar_label: 数据订阅
description: "数据订阅与推送服务。写入到 TDengine 中的时序数据能够被自动推送到订阅客户端。"
title: 数据订阅
---
为了帮助应用实时获取写入 TDengine 的数据或者以事件到达顺序处理数据TDengine提供了类似消息队列产品的数据订阅、消费接口。这样在很多场景下采用 TDengine 的时序数据处理系统不再需要集成消息队列产品,比如 kafka, 从而简化系统设计的复杂度,降低运营维护成本。
与 kafka 一样,你需要定义 topic, 但 TDengine 的 topic 是基于一个已经存在的超级表、子表或普通表的查询条件,即一个 SELECT 语句。你可以使用 SQL 对标签、表名、列、表达式等条件进行过滤,以及对数据进行标量函数与 UDF 计算(不包括数据聚合)。与其他消息队列软件相比,这是 TDengine 数据订阅功能的最大的优势,它提供了更大的灵活性,数据的颗粒度可以由应用随时调整,而且数据的过滤与预处理交给 TDengine而不是应用完成有效的减少传输的数据量与应用的复杂度。
消费者订阅 topic 后,可以实时获得最新的数据。多个消费者可以组成一个消费者组 (consumer group), 一个消费者组里的多个消费者共享消费进度便于多线程、分布式地消费数据提高消费速度。但不同消费者组中的消费者即使消费同一个topic, 并不共享消费进度。一个消费者可以订阅多个 topic。如果订阅的是超级表数据可能会分布在多个不同的 vnode 上,也就是多个 shard 上这样一个消费组里有多个消费者可以提高消费效率。TDengine 的消息队列提供了消息的ACK机制在宕机、重启等复杂环境下确保 at least once 消费。
为了实现上述功能TDengine 会为 WAL (Write-Ahead-Log) 文件自动创建索引以支持快速随机访问,并提供了灵活可配置的文件切换与保留机制:用户可以按需指定 WAL 文件保留的时间以及大小(详见 create database 语句)。通过以上方式将 WAL 改造成了一个保留事件到达顺序的、可持久化的存储引擎(但由于 TSDB 具有远比 WAL 更高的压缩率,我们不推荐保留太长时间,一般来说,不超过几天)。 对于以 topic 形式创建的查询TDengine 将对接 WAL 而不是 TSDB 作为其存储引擎。在消费时TDengine 根据当前消费进度从 WAL 直接读取数据,并使用统一的查询引擎实现过滤、变换等操作,将数据推送给消费者。
本文档不对消息队列本身的基础知识做介绍,如果需要了解,请自行搜索。
## 主要数据结构和API
TMQ 的 API 中与订阅相关的主要数据结构和API如下
```c
typedef struct tmq_t tmq_t;
typedef struct tmq_conf_t tmq_conf_t;
typedef struct tmq_list_t tmq_list_t;
typedef void(tmq_commit_cb(tmq_t *, int32_t code, void *param));
DLL_EXPORT tmq_list_t *tmq_list_new();
DLL_EXPORT int32_t tmq_list_append(tmq_list_t *, const char *);
DLL_EXPORT void tmq_list_destroy(tmq_list_t *);
DLL_EXPORT tmq_t *tmq_consumer_new(tmq_conf_t *conf, char *errstr, int32_t errstrLen);
DLL_EXPORT const char *tmq_err2str(int32_t code);
DLL_EXPORT int32_t tmq_subscribe(tmq_t *tmq, const tmq_list_t *topic_list);
DLL_EXPORT int32_t tmq_unsubscribe(tmq_t *tmq);
DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout);
DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq);
DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg);
DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param);
enum tmq_conf_res_t {
TMQ_CONF_UNKNOWN = -2,
TMQ_CONF_INVALID = -1,
TMQ_CONF_OK = 0,
};
typedef enum tmq_conf_res_t tmq_conf_res_t;
DLL_EXPORT tmq_conf_t *tmq_conf_new();
DLL_EXPORT tmq_conf_res_t tmq_conf_set(tmq_conf_t *conf, const char *key, const char *value);
DLL_EXPORT void tmq_conf_destroy(tmq_conf_t *conf);
DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_commit_cb *cb, void *param);
```
这些 API 的文档请见 [C/C++ Connector](/reference/connector/cpp),下面介绍一下它们的具体用法(超级表和子表结构请参考“数据建模”一节),完整的示例代码可以在 [tmq.c](https://github.com/taosdata/TDengine/blob/3.0/examples/c/tmq.c) 看到。
## 写入数据
首先完成建库、建一张超级表和多张子表操作,然后就可以写入数据了,比如:
```sql
drop database if exists tmqdb;
create database tmqdb;
create table tmqdb.stb (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16));
create table tmqdb.ctb0 using tmqdb.stb tags(0, "subtable0");
create table tmqdb.ctb1 using tmqdb.stb tags(1, "subtable1");
create table tmqdb.ctb2 using tmqdb.stb tags(2, "subtable2");
create table tmqdb.ctb3 using tmqdb.stb tags(3, "subtable3");
insert into tmqdb.ctb0 values(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00');
insert into tmqdb.ctb1 values(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11');
insert into tmqdb.ctb2 values(now, 2, 2, 'a1')(now+1s, 22, 22, 'a22');
insert into tmqdb.ctb3 values(now, 3, 3, 'a1')(now+1s, 33, 33, 'a33');
```
## 创建topic
```sql
create topic topicName as select ts, c1, c2, c3 from tmqdb.stb where c1 > 1;
```
TMQ支持多种订阅类型
### 列订阅
语法CREATE TOPIC topic_name as subquery
通过select语句订阅包括select *或select ts, c1等指定列描述订阅可以带条件过滤、标量函数计算但不支持聚合函数、不支持时间窗口聚合
- TOPIC一旦创建则schema确定
- 被订阅或用于计算的column和tag不可被删除、修改
- 若发生schema变更新增的column不出现在结果中
### 超级表订阅
语法CREATE TOPIC topic_name AS STABLE stbName
与select * from stbName订阅的区别是
- 不会限制用户的schema变更
- 返回的是非结构化的数据返回数据的schema会随之超级表的schema变化而变化
- 用户对于要处理的每一个数据块都可能有不同的schema因此必须重新获取schema
- 返回数据不带有tag
## 创建 consumer 以及consumer group
对于consumer, 目前支持的config包括
| 参数名称 | 参数值 | 备注 |
| ---------------------------- | ------------------------------ | ------------------------------------------------------ |
| group.id | 最大长度192 | |
| enable.auto.commit | 合法值true, false | |
| auto.commit.interval.ms | | |
| auto.offset.reset | 合法值earliest, latest, none | |
| td.connect.ip | 用于连接同taos_connect的参数 | |
| td.connect.user | 用于连接同taos_connect的参数 | |
| td.connect.pass | 用于连接同taos_connect的参数 | |
| td.connect.port | 用于连接同taos_connect的参数 | |
| enable.heartbeat.background | 合法值true, false | 开启后台心跳即consumer不会因为长时间不poll而认为离线 |
| experimental.snapshot.enable | 合法值true, false | 从 wal 开始消费,还是从 tsdb 开始消费 |
| msg.with.table.name | 合法值true, false | 从消息中能否解析表名 |
```sql
/* 根据需要,设置消费组(group.id)、自动提交(enable.auto.commit)、自动提交时间间隔(auto.commit.interval.ms)、用户名(td.connect.user)、密码(td.connect.pass)等参数 */
tmq_conf_t* conf = tmq_conf_new();
tmq_conf_set(conf, "enable.auto.commit", "true");
tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
tmq_conf_set(conf, "group.id", "cgrpName");
tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "experimental.snapshot.enable", "true");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
return tmq;
```
上述配置中包括consumer group ID如果多个 consumer 指定的 consumer group ID一样则自动形成一个consumer group共享消费进度。
## 创建 topic 列表
单个consumer支持同时订阅多个topic。
```sql
tmq_list_t* topicList = tmq_list_new();
tmq_list_append(topicList, "topicName");
return topicList;
```
## 启动订阅并开始消费
```sql
/* 启动订阅 */
tmq_subscribe(tmq, topicList);
tmq_list_destroy(topicList);
/* 循环poll消息 */
int32_t totalRows = 0;
int32_t msgCnt = 0;
int32_t timeOut = 5000;
while (running) {
TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, timeOut);
if (tmqmsg) {
msgCnt++;
totalRows += msg_process(tmqmsg);
taos_free_result(tmqmsg);
} else {
break;
}
}
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
```
这里是一个 **while** 循环每调用一次tmq_consumer_poll()获取一个消息该消息与普通查询返回的结果集完全相同可以使用相同的解析API完成消息内容的解析
```sql
static int32_t msg_process(TAOS_RES* msg) {
char buf[1024];
int32_t rows = 0;
const char* topicName = tmq_get_topic_name(msg);
const char* dbName = tmq_get_db_name(msg);
int32_t vgroupId = tmq_get_vgroup_id(msg);
printf("topic: %s\n", topicName);
printf("db: %s\n", dbName);
printf("vgroup id: %d\n", vgroupId);
while (1) {
TAOS_ROW row = taos_fetch_row(msg);
if (row == NULL) break;
TAOS_FIELD* fields = taos_fetch_fields(msg);
int32_t numOfFields = taos_field_count(msg);
int32_t* length = taos_fetch_lengths(msg);
int32_t precision = taos_result_precision(msg);
const char* tbName = tmq_get_table_name(msg);
rows++;
taos_print_row(buf, row, fields, numOfFields);
printf("row content from %s: %s\n", (tbName != NULL ? tbName : "null table"), buf);
}
return rows;
}
```
## 结束消费
```sql
/* 取消订阅 */
tmq_unsubscribe(tmq);
/* 关闭消费 */
tmq_consumer_close(tmq);
```
## 删除topic
如果不再需要,可以删除创建的 topic但注意只有没有被订阅的 topic 才能被删除。
```sql
/* 删除topic */
drop topic topicName;
```
## 状态查看
1、topics查询已经创建的topic
```sql
show topics;
```
2、consumers查询consumer的状态及其订阅的topic
```sql
show consumers;
```
3、subscriptions查询consumer与vgroup之间的分配关系
```sql
show subscriptions;
```

View File

@ -0,0 +1,852 @@
---
sidebar_label: 数据订阅
description: "数据订阅与推送服务。写入到 TDengine 中的时序数据能够被自动推送到订阅客户端。"
title: 数据订阅
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import Java from "./_sub_java.mdx";
import Python from "./_sub_python.mdx";
import Go from "./_sub_go.mdx";
import Rust from "./_sub_rust.mdx";
import Node from "./_sub_node.mdx";
import CSharp from "./_sub_cs.mdx";
import CDemo from "./_sub_c.mdx";
为了帮助应用实时获取写入 TDengine 的数据或者以事件到达顺序处理数据TDengine 提供了类似消息队列产品的数据订阅、消费接口。这样在很多场景下,采用 TDengine 的时序数据处理系统不再需要集成消息队列产品,比如 kafka, 从而简化系统设计的复杂度,降低运营维护成本。
与 kafka 一样,你需要定义 *topic*, 但 TDengine 的 *topic* 是基于一个已经存在的超级表、子表或普通表的查询条件,即一个 `SELECT` 语句。你可以使用 SQL 对标签、表名、列、表达式等条件进行过滤,以及对数据进行标量函数与 UDF 计算(不包括数据聚合)。与其他消息队列软件相比,这是 TDengine 数据订阅功能的最大的优势,它提供了更大的灵活性,数据的颗粒度可以由应用随时调整,而且数据的过滤与预处理交给 TDengine而不是应用完成有效的减少传输的数据量与应用的复杂度。
消费者订阅 *topic* 后,可以实时获得最新的数据。多个消费者可以组成一个消费者组 (consumer group), 一个消费者组里的多个消费者共享消费进度,便于多线程、分布式地消费数据,提高消费速度。但不同消费者组中的消费者即使消费同一个 topic, 并不共享消费进度。一个消费者可以订阅多个 topic。如果订阅的是超级表数据可能会分布在多个不同的 vnode 上,也就是多个 shard 上这样一个消费组里有多个消费者可以提高消费效率。TDengine 的消息队列提供了消息的 ACK 机制,在宕机、重启等复杂环境下确保 at least once 消费。
为了实现上述功能TDengine 会为 WAL (Write-Ahead-Log) 文件自动创建索引以支持快速随机访问,并提供了灵活可配置的文件切换与保留机制:用户可以按需指定 WAL 文件保留的时间以及大小(详见 create database 语句)。通过以上方式将 WAL 改造成了一个保留事件到达顺序的、可持久化的存储引擎(但由于 TSDB 具有远比 WAL 更高的压缩率,我们不推荐保留太长时间,一般来说,不超过几天)。 对于以 topic 形式创建的查询TDengine 将对接 WAL 而不是 TSDB 作为其存储引擎。在消费时TDengine 根据当前消费进度从 WAL 直接读取数据,并使用统一的查询引擎实现过滤、变换等操作,将数据推送给消费者。
本文档不对消息队列本身的基础知识做介绍,如果需要了解,请自行搜索。
## 主要数据结构和 API
不同语言下, TMQ 订阅相关的 API 及数据结构如下:
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
typedef struct tmq_t tmq_t;
typedef struct tmq_conf_t tmq_conf_t;
typedef struct tmq_list_t tmq_list_t;
typedef void(tmq_commit_cb(tmq_t *, int32_t code, void *param));
DLL_EXPORT tmq_list_t *tmq_list_new();
DLL_EXPORT int32_t tmq_list_append(tmq_list_t *, const char *);
DLL_EXPORT void tmq_list_destroy(tmq_list_t *);
DLL_EXPORT tmq_t *tmq_consumer_new(tmq_conf_t *conf, char *errstr, int32_t errstrLen);
DLL_EXPORT const char *tmq_err2str(int32_t code);
DLL_EXPORT int32_t tmq_subscribe(tmq_t *tmq, const tmq_list_t *topic_list);
DLL_EXPORT int32_t tmq_unsubscribe(tmq_t *tmq);
DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout);
DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq);
DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg);
DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param);
enum tmq_conf_res_t {
TMQ_CONF_UNKNOWN = -2,
TMQ_CONF_INVALID = -1,
TMQ_CONF_OK = 0,
};
typedef enum tmq_conf_res_t tmq_conf_res_t;
DLL_EXPORT tmq_conf_t *tmq_conf_new();
DLL_EXPORT tmq_conf_res_t tmq_conf_set(tmq_conf_t *conf, const char *key, const char *value);
DLL_EXPORT void tmq_conf_destroy(tmq_conf_t *conf);
DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_commit_cb *cb, void *param);
```
这些 API 的文档请见 [C/C++ Connector](/reference/connector/cpp),下面介绍一下它们的具体用法(超级表和子表结构请参考“数据建模”一节),完整的示例代码请见下面 C 语言的示例代码。
</TabItem>
<TabItem value="java" label="Java">
```java
void subscribe(Collection<String> topics) throws SQLException;
void unsubscribe() throws SQLException;
Set<String> subscription() throws SQLException;
ConsumerRecords<V> poll(Duration timeout) throws SQLException;
void commitAsync();
void commitAsync(OffsetCommitCallback callback);
void commitSync() throws SQLException;
void close() throws SQLException;
```
</TabItem>
<TabItem label="Go" value="Go">
```go
func NewConsumer(conf *Config) (*Consumer, error)
func (c *Consumer) Close() error
func (c *Consumer) Commit(ctx context.Context, message unsafe.Pointer) error
func (c *Consumer) FreeMessage(message unsafe.Pointer)
func (c *Consumer) Poll(timeout time.Duration) (*Result, error)
func (c *Consumer) Subscribe(topics []string) error
func (c *Consumer) Unsubscribe() error
```
</TabItem>
</Tabs>
## 写入数据
首先完成建库、建一张超级表和多张子表操作,然后就可以写入数据了,比如:
```sql
DROP DATABASE IF EXISTS tmqdb;
CREATE DATABASE tmqdb;
CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16));
CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0");
CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1");
INSERT INTO tmqdb.ctb0 VALUES(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00');
INSERT INTO tmqdb.ctb1 VALUES(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11');
```
## 创建 *topic*
TDengine 使用 SQL 创建一个 topic
```sql
CREATE TOPIC topic_name AS SELECT ts, c1, c2, c3 FROM tmqdb.stb WHERE c1 > 1;
```
TMQ 支持多种订阅类型:
### 列订阅
语法:
```sql
CREATE TOPIC topic_name as subquery
```
通过 `SELECT` 语句订阅(包括 `SELECT *`,或 `SELECT ts, c1` 等指定列订阅,可以带条件过滤、标量函数计算,但不支持聚合函数、不支持时间窗口聚合)。需要注意的是:
- 该类型 TOPIC 一旦创建则订阅数据的结构确定。
- 被订阅或用于计算的列或标签不可被删除(`ALTER table DROP`)、修改(`ALTER table MODIFY`)。
- 若发生表结构变更,新增的列不出现在结果中,若发生列删除则会报错。
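结合上述规则,下面给出一个带条件过滤与标量函数计算的列订阅示意(沿用前文 tmqdb.stb 的表结构,假设 pConn 为已通过 taos_connect 建立的连接topic 名称与列选择仅为示例):
```c
// 示意:创建一个列订阅 topic选取指定列、做标量函数计算并带过滤条件
TAOS_RES* pRes = taos_query(pConn,
    "create topic if not exists topic_stb_column as "
    "select ts, c1, abs(c2) from tmqdb.stb where c1 > 1");
if (taos_errno(pRes) != 0) {
  printf("failed to create topic, reason:%s\n", taos_errstr(pRes));
}
taos_free_result(pRes);
```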
### 超级表订阅
语法:
```sql
CREATE TOPIC topic_name AS STABLE stb_name
```
与 `SELECT * from stbName` 订阅的区别是:
- 不会限制用户的表结构变更。
- 返回的是非结构化的数据:返回数据的结构会随着超级表的表结构变化而变化。
- 用户对于要处理的每一个数据块都可能有不同的表结构。
- 返回数据不包含标签。
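由于每个消息(数据块)的结构可能不同,消费超级表订阅时应针对每条消息重新获取列信息,而不要缓存第一条消息的 schema。下面是一段示意代码沿用本文的 C 接口,假设 tmq 为已订阅该 topic 的消费者):
```c
// 示意:超级表订阅的消费端逐消息获取 schema每个数据块的列数与列类型都可能不同
TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, 5000);
if (tmqmsg != NULL) {
  TAOS_ROW row;
  while ((row = taos_fetch_row(tmqmsg)) != NULL) {
    // 每条消息都重新取列信息,不能复用上一条消息的 fields / numOfFields
    TAOS_FIELD* fields = taos_fetch_fields(tmqmsg);
    int32_t numOfFields = taos_field_count(tmqmsg);
    char buf[1024];
    taos_print_row(buf, row, fields, numOfFields);
    printf("row: %s\n", buf);
  }
  taos_free_result(tmqmsg);
}
```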
### 数据库订阅
语法:
```sql
CREATE TOPIC topic_name [WITH META] AS DATABASE db_name;
```
通过该语句可创建一个包含数据库所有表数据的订阅,`WITH META` 可选择将数据库结构变动信息加入到订阅消息流TMQ 将消费当前数据库下所有表结构的变动,包括超级表的创建与删除,列添加、删除或修改,子表的创建、删除及 TAG 变动信息等等。消费者可通过 API 来判断具体的消息类型。这一点也是与 Kafka 不同的地方。
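例如,使用 C 接口时可以通过 `tmq_get_res_type()` 区分数据消息与元数据消息(下面仅为示意,相关 TAOSX 接口在当前版本中仍标注为不稳定接口,元数据的具体解析方式请以 API 文档为准,假设 tmq 为已订阅的消费者、msg_process 为数据消息的处理函数):
```c
// 示意:判断订阅消息类型(普通数据 or 表结构变更元数据)
TAOS_RES* msg = tmq_consumer_poll(tmq, 5000);
if (msg != NULL) {
  tmq_res_t resType = tmq_get_res_type(msg);
  if (resType == TMQ_RES_DATA) {
    msg_process(msg);                           // 普通数据,按结果集方式解析
  } else if (resType == TMQ_RES_TABLE_META) {
    char* jsonMeta = tmq_get_json_meta(msg);    // 元数据以 JSON 形式返回
    printf("meta: %s\n", jsonMeta != NULL ? jsonMeta : "null");
    tmq_free_json_meta(jsonMeta);
  }
  taos_free_result(msg);
}
```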
## 创建消费者 *consumer*
消费者需要通过一系列配置选项创建,基础配置项如下表所示:
| 参数名称 | 类型 | 参数说明 | 备注 |
| :----------------------------: | :-----: | -------------------------------------------------------- | ------------------------------------------- |
| `td.connect.ip` | string | 用于创建连接,同 `taos_connect` | |
| `td.connect.user` | string | 用于创建连接,同 `taos_connect` | |
| `td.connect.pass` | string | 用于创建连接,同 `taos_connect` |
| `td.connect.port` | integer | 用于创建连接,同 `taos_connect` |
| `group.id` | string | 消费组 ID同一消费组共享消费进度 | **必填项**。最大长度192。 |
| `client.id` | string | 客户端 ID | 最大长度192。 |
| `auto.offset.reset` | enum | 消费组订阅的初始位置 | 可选:`earliest`, `latest`, `none`(default) |
| `enable.auto.commit` | boolean | 启用自动提交 | 合法值:`true`, `false`。 |
| `auto.commit.interval.ms` | integer | 以毫秒为单位的自动提交时间间隔 |
| `enable.heartbeat.background` | boolean | 启用后台心跳,启用后即使长时间不 poll 消息也不会造成离线 | |
| `experimental.snapshot.enable` | boolean | 从 WAL 开始消费,还是从 TSDB 开始消费 | |
| `msg.with.table.name` | boolean | 是否允许从消息中解析表名 |
对于不同编程语言,其设置方式如下:
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
/* 根据需要,设置消费组 (group.id)、自动提交 (enable.auto.commit)、
自动提交时间间隔 (auto.commit.interval.ms)、用户名 (td.connect.user)、密码 (td.connect.pass) 等参数 */
tmq_conf_t* conf = tmq_conf_new();
tmq_conf_set(conf, "enable.auto.commit", "true");
tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
tmq_conf_set(conf, "group.id", "cgrpName");
tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "experimental.snapshot.enable", "true");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
```
</TabItem>
<TabItem value="java" label="Java">
对于 Java 程序,使用如下配置项:
| 参数名称 | 类型 | 参数说明 |
| ----------------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------- |
| `bootstrap.servers` | string | 连接地址,如 `localhost:6030` |
| `value.deserializer` | string | 值解析方法,使用此方法应实现 `com.taosdata.jdbc.tmq.Deserializer` 接口或继承 `com.taosdata.jdbc.tmq.ReferenceDeserializer` 类 |
| `value.deserializer.encoding` | string | 指定字符串解析的字符集 | |
需要注意:此处使用 `bootstrap.servers` 替代 `td.connect.ip` 和 `td.connect.port`,以提供与 Kafka 一致的接口。
```java
Properties properties = new Properties();
properties.setProperty("enable.auto.commit", "true");
properties.setProperty("auto.commit.interval.ms", "1000");
properties.setProperty("group.id", "cgrpName");
properties.setProperty("bootstrap.servers", "127.0.0.1:6030");
properties.setProperty("td.connect.user", "root");
properties.setProperty("td.connect.pass", "taosdata");
properties.setProperty("auto.offset.reset", "earliest");
properties.setProperty("msg.with.table.name", "true");
properties.setProperty("value.deserializer", "com.taos.example.MetersDeserializer");
TaosConsumer<Meters> consumer = new TaosConsumer<>(properties);
/* value deserializer definition. */
import com.taosdata.jdbc.tmq.ReferenceDeserializer;
public class MetersDeserializer extends ReferenceDeserializer<Meters> {
}
```
</TabItem>
<TabItem label="Go" value="Go">
```go
config := tmq.NewConfig()
defer config.Destroy()
err = config.SetGroupID("test")
if err != nil {
panic(err)
}
err = config.SetAutoOffsetReset("earliest")
if err != nil {
panic(err)
}
err = config.SetConnectIP("127.0.0.1")
if err != nil {
panic(err)
}
err = config.SetConnectUser("root")
if err != nil {
panic(err)
}
err = config.SetConnectPass("taosdata")
if err != nil {
panic(err)
}
err = config.SetConnectPort("6030")
if err != nil {
panic(err)
}
err = config.SetMsgWithTableName(true)
if err != nil {
panic(err)
}
err = config.EnableHeartBeat()
if err != nil {
panic(err)
}
err = config.EnableAutoCommit(func(result *wrapper.TMQCommitCallbackResult) {
if result.ErrCode != 0 {
errStr := wrapper.TMQErr2Str(result.ErrCode)
err := errors.NewError(int(result.ErrCode), errStr)
panic(err)
}
})
if err != nil {
panic(err)
}
```
</TabItem>
</Tabs>
上述配置中包括 consumer group ID如果多个 consumer 指定的 consumer group ID 一样,则自动形成一个 consumer group共享消费进度。
## 订阅 *topics*
一个 consumer 支持同时订阅多个 topic。
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
// 创建订阅 topics 列表
tmq_list_t* topicList = tmq_list_new();
tmq_list_append(topicList, "topicName");
// 启动订阅
tmq_subscribe(tmq, topicList);
tmq_list_destroy(topicList);
```
</TabItem>
<TabItem value="java" label="Java">
```java
List<String> topics = new ArrayList<>();
topics.add("tmq_topic");
consumer.subscribe(topics);
```
</TabItem>
<TabItem value="Go" label="Go">
```go
consumer, err := tmq.NewConsumer(config)
if err != nil {
panic(err)
}
err = consumer.Subscribe([]string{"example_tmq_topic"})
if err != nil {
panic(err)
}
```
</TabItem>
</Tabs>
## 消费
以下代码展示了不同语言下如何对 TMQ 消息进行消费。
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
// 消费数据
while (running) {
TAOS_RES* msg = tmq_consumer_poll(tmq, timeOut);
msg_process(msg);
}
```
这里是一个 **while** 循环,每调用一次 tmq_consumer_poll(),获取一个消息,该消息与普通查询返回的结果集完全相同,可以使用相同的解析 API 完成消息内容的解析。
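如果关闭了自动提交enable.auto.commit 设为 false可在处理完消息后手动提交消费进度。下面是一段示意代码假设沿用上文的 tmq、timeOut 与 msg_process
```c
// 示意:处理完一条消息后同步提交消费位点(也可改用 tmq_commit_async 异步提交)
TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, timeOut);
if (tmqmsg != NULL) {
  msg_process(tmqmsg);
  int32_t code = tmq_commit_sync(tmq, tmqmsg);   // 提交该消息对应的消费进度
  if (code != 0) {
    fprintf(stderr, "commit failed: %s\n", tmq_err2str(code));
  }
  taos_free_result(tmqmsg);
}
```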
</TabItem>
<TabItem value="java" label="Java">
```java
while(running){
ConsumerRecords<Meters> meters = consumer.poll(Duration.ofMillis(100));
for (Meters meter : meters) {
processMsg(meter);
}
}
```
</TabItem>
<TabItem value="Go" label="Go">
```go
for {
result, err := consumer.Poll(time.Second)
if err != nil {
panic(err)
}
fmt.Println(result)
consumer.Commit(context.Background(), result.Message)
consumer.FreeMessage(result.Message)
}
```
</TabItem>
</Tabs>
## 结束消费
消费结束后,应当取消订阅。
<Tabs defaultValue="java" groupId="lang">
<TabItem value="c" label="C">
```c
/* 取消订阅 */
tmq_unsubscribe(tmq);
/* 关闭消费者对象 */
tmq_consumer_close(tmq);
```
</TabItem>
<TabItem value="java" label="Java">
```java
/* 取消订阅 */
consumer.unsubscribe();
/* 关闭消费 */
consumer.close();
```
</TabItem>
<TabItem value="Go" label="Go">
```go
consumer.Close()
```
</TabItem>
</Tabs>
## 删除 *topic*
如果不再需要订阅数据,可以删除 topic需要注意只有当前未在订阅中的 TOPIC 才能被删除。
```sql
/* 删除 topic */
DROP TOPIC topic_name;
```
## 状态查看
1、*topics*:查询已经创建的 topic
```sql
SHOW TOPICS;
```
2、consumers查询 consumer 的状态及其订阅的 topic
```sql
SHOW CONSUMERS;
```
3、subscriptions查询 consumer 与 vgroup 之间的分配关系
```sql
SHOW SUBSCRIPTIONS;
```
## 示例代码
以下是各语言的完整示例代码。
<Tabs defaultValue="java" groupId="lang">
<TabItem label="C" value="c">
```c
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include "taos.h"
static int running = 1;
static char dbName[64] = "tmqdb";
static char stbName[64] = "stb";
static char topicName[64] = "topicname";
static int32_t msg_process(TAOS_RES* msg) {
char buf[1024];
int32_t rows = 0;
const char* topicName = tmq_get_topic_name(msg);
const char* dbName = tmq_get_db_name(msg);
int32_t vgroupId = tmq_get_vgroup_id(msg);
printf("topic: %s\n", topicName);
printf("db: %s\n", dbName);
printf("vgroup id: %d\n", vgroupId);
while (1) {
TAOS_ROW row = taos_fetch_row(msg);
if (row == NULL) break;
TAOS_FIELD* fields = taos_fetch_fields(msg);
int32_t numOfFields = taos_field_count(msg);
int32_t* length = taos_fetch_lengths(msg);
int32_t precision = taos_result_precision(msg);
const char* tbName = tmq_get_table_name(msg);
rows++;
taos_print_row(buf, row, fields, numOfFields);
printf("row content from %s: %s\n", (tbName != NULL ? tbName : "table null"), buf);
}
return rows;
}
static int32_t init_env() {
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
if (pConn == NULL) {
return -1;
}
TAOS_RES* pRes;
// drop database if exists
printf("create database\n");
pRes = taos_query(pConn, "drop database if exists tmqdb");
if (taos_errno(pRes) != 0) {
printf("error in drop tmqdb, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
// create database
pRes = taos_query(pConn, "create database tmqdb");
if (taos_errno(pRes) != 0) {
printf("error in create tmqdb, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
// create super table
printf("create super table\n");
pRes = taos_query(
pConn, "create table tmqdb.stb (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))");
if (taos_errno(pRes) != 0) {
printf("failed to create super table stb, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
// create sub tables
printf("create sub tables\n");
pRes = taos_query(pConn, "create table tmqdb.ctb0 using tmqdb.stb tags(0, 'subtable0')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb0, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table tmqdb.ctb1 using tmqdb.stb tags(1, 'subtable1')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb1, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table tmqdb.ctb2 using tmqdb.stb tags(2, 'subtable2')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb2, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table tmqdb.ctb3 using tmqdb.stb tags(3, 'subtable3')");
if (taos_errno(pRes) != 0) {
printf("failed to create super table ctb3, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
// insert data
printf("insert data into sub tables\n");
pRes = taos_query(pConn, "insert into tmqdb.ctb0 values(now, 0, 0, 'a0')(now+1s, 0, 0, 'a00')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into tmqdb.ctb1 values(now, 1, 1, 'a1')(now+1s, 11, 11, 'a11')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into tmqdb.ctb2 values(now, 2, 2, 'a1')(now+1s, 22, 22, 'a22')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into tmqdb.ctb3 values(now, 3, 3, 'a1')(now+1s, 33, 33, 'a33')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ctb0, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
taos_close(pConn);
return 0;
}
int32_t create_topic() {
printf("create topic\n");
TAOS_RES* pRes;
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
if (pConn == NULL) {
return -1;
}
pRes = taos_query(pConn, "use tmqdb");
if (taos_errno(pRes) != 0) {
printf("error in use tmqdb, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create topic topicname as select ts, c1, c2, c3 from tmqdb.stb where c1 > 1");
if (taos_errno(pRes) != 0) {
printf("failed to create topic topicname, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
taos_close(pConn);
return 0;
}
void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) {
printf("tmq_commit_cb_print() code: %d, tmq: %p, param: %p\n", code, tmq, param);
}
tmq_t* build_consumer() {
tmq_conf_res_t code;
tmq_conf_t* conf = tmq_conf_new();
code = tmq_conf_set(conf, "enable.auto.commit", "true");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "auto.commit.interval.ms", "1000");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "group.id", "cgrpName");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "client.id", "user defined name");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "td.connect.user", "root");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "td.connect.pass", "taosdata");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "auto.offset.reset", "earliest");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "experimental.snapshot.enable", "true");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "msg.with.table.name", "true");
if (TMQ_CONF_OK != code) return NULL;
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
return tmq;
}
tmq_list_t* build_topic_list() {
tmq_list_t* topicList = tmq_list_new();
int32_t code = tmq_list_append(topicList, "topicname");
if (code) {
return NULL;
}
return topicList;
}
void basic_consume_loop(tmq_t* tmq, tmq_list_t* topicList) {
int32_t code;
if ((code = tmq_subscribe(tmq, topicList))) {
fprintf(stderr, "%% Failed to tmq_subscribe(): %s\n", tmq_err2str(code));
return;
}
int32_t totalRows = 0;
int32_t msgCnt = 0;
int32_t timeout = 5000;
while (running) {
TAOS_RES* tmqmsg = tmq_consumer_poll(tmq, timeout);
if (tmqmsg) {
msgCnt++;
totalRows += msg_process(tmqmsg);
taos_free_result(tmqmsg);
/*} else {*/
/*break;*/
}
}
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
}
int main(int argc, char* argv[]) {
int32_t code;
if (init_env() < 0) {
return -1;
}
if (create_topic() < 0) {
return -1;
}
tmq_t* tmq = build_consumer();
if (NULL == tmq) {
fprintf(stderr, "%% build_consumer() fail!\n");
return -1;
}
tmq_list_t* topic_list = build_topic_list();
if (NULL == topic_list) {
return -1;
}
basic_consume_loop(tmq, topic_list);
code = tmq_unsubscribe(tmq);
if (code) {
fprintf(stderr, "%% Failed to unsubscribe: %s\n", tmq_err2str(code));
} else {
fprintf(stderr, "%% unsubscribe\n");
}
code = tmq_consumer_close(tmq);
if (code) {
fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(code));
} else {
fprintf(stderr, "%% Consumer closed\n");
}
return 0;
}
```
[查看源码](https://github.com/taosdata/TDengine/blob/develop/examples/c/tmq.c)
</TabItem>
<TabItem label="Java" value="java">
<Java />
</TabItem>
<TabItem label="Go" value="Go">
<Go/>
</TabItem>
<TabItem label="Rust" value="Rust">
<Rust />
</TabItem>
<TabItem label="Python" value="Python">
```python
import taos
from taos.tmq import *
conn = taos.connect()
# create database
conn.execute("drop database if exists py_tmq")
conn.execute("create database if not exists py_tmq vgroups 2")
# create table and stables
conn.select_db("py_tmq")
conn.execute("create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 binary(10)) tags(t1 int)")
conn.execute("create table if not exists tb1 using stb1 tags(1)")
conn.execute("create table if not exists tb2 using stb1 tags(2)")
conn.execute("create table if not exists tb3 using stb1 tags(3)")
# create topic
conn.execute("drop topic if exists topic_ctb_column")
conn.execute("create topic if not exists topic_ctb_column as select ts, c1, c2, c3 from stb1")
# set consumer configure options
conf = TaosTmqConf()
conf.set("group.id", "tg2")
conf.set("td.connect.user", "root")
conf.set("td.connect.pass", "taosdata")
conf.set("enable.auto.commit", "true")
conf.set("msg.with.table.name", "true")
def tmq_commit_cb_print(tmq, resp, offset, param=None):
print(f"commit: {resp}, tmq: {tmq}, offset: {offset}, param: {param}")
conf.set_auto_commit_cb(tmq_commit_cb_print, None)
# build consumer
tmq = conf.new_consumer()
# build topic list
topic_list = TaosTmqList()
topic_list.append("topic_ctb_column")
# subscribe consumer
tmq.subscribe(topic_list)
# check subscriptions
sub_list = tmq.subscription()
print("subscribed topics: ",sub_list)
# start subscribe
while 1:
res = tmq.poll(1000)
if res:
topic = res.get_topic_name()
vg = res.get_vgroup_id()
db = res.get_db_name()
print(f"topic: {topic}\nvgroup id: {vg}\ndb: {db}")
for row in res:
print(row)
tb = res.get_table_name()
print(f"from table: {tb}")
```
[查看源码](https://github.com/taosdata/TDengine/blob/develop/docs/examples/python/tmq_example.py)
</TabItem>
<TabItem label="Node.JS" value="Node.JS">
<Node/>
</TabItem>
<TabItem label="C#" value="C#">
<CSharp/>
</TabItem>
</Tabs>

View File

@ -1,7 +1,9 @@
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
```
:::note
目前 Java 接口没有提供异步订阅模式,但用户程序可以通过创建 `TimerTask` 等方式达到同样的效果。
:::
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```

View File

@ -1,3 +1,3 @@
```rs
```rust
{{#include docs/examples/rust/nativeexample/examples/subscribe_demo.rs}}
```
```

View File

@ -7,7 +7,7 @@ title: 开发指南
2. 根据自己的应用场景,确定数据模型。根据数据特征,决定建立一个还是多个库;分清静态标签、采集量,建立正确的超级表,建立子表。
3. 决定插入数据的方式。TDengine 支持使用标准的 SQL 写入,但同时也支持 schemaless 模式写入,这样不用手工建表,可以将数据直接写入。
4. 根据业务要求看需要撰写哪些SQL查询语句。
5. 如果你要基于时序数据做实时统计分析,包括各种监测看板,那么建议你采用TDengine的连续查询功能而不用上线Spark, Flink等复杂的流式计算系统。
5. 如果你要基于时序数据做轻量级的实时统计分析,包括各种监测看板,那么建议你采用 TDengine 3.0 的流式计算功能,而不用额外部署 Spark, Flink 等复杂的流式计算系统。
6. 如果你的应用有模块需要消费插入的数据,希望有新的数据插入时就能获取通知,那么建议你采用 TDengine 提供的数据订阅功能,而无需专门部署 Kafka 或其他消息队列软件。
7. 在很多场景下(如车辆管理),应用需要获取每个数据采集点的最新状态,那么建议你采用 TDengine 的 cache 功能,而不用单独部署 Redis 等缓存软件。
8. 如果你发现 TDengine 的函数无法满足你的要求,那么你可以使用用户自定义函数来解决问题。

View File

@ -34,7 +34,7 @@ CREATE DATABASE db_name PRECISION 'ns';
| 7 | DOUBLE | 8 | 双精度浮点型,有效位数 15-16范围 [-1.7E308, 1.7E308] |
| 8 | BINARY | 自定义 | 记录单字节字符串,建议只用于处理 ASCII 可见字符,中文等多字节字符需使用 nchar。 |
| 9 | SMALLINT | 2 | 短整型, 范围 [-32768, 32767] |
| 10 | SMALLINT UNSIGNED | 2| 无符号短整型,范围 [0, 655357] |
| 10 | SMALLINT UNSIGNED | 2| 无符号短整型,范围 [0, 65535] |
| 11 | TINYINT | 1 | 单字节整型,范围 [-128, 127] |
| 12 | TINYINT UNSIGNED | 1 | 无符号单字节整型,范围 [0, 255] |
| 13 | BOOL | 1 | 布尔型,{true, false} |

View File

@ -103,7 +103,7 @@ SELECT d1001.* FROM d1001,d1003 WHERE d1001.ts = d1003.ts;
在超级表和子表的查询中可以指定 _标签列_,且标签列的值会与普通列的数据一起返回。
```sql
ELECT location, groupid, current FROM d1001 LIMIT 2;
SELECT location, groupid, current FROM d1001 LIMIT 2;
```
### 结果去重

View File

@ -1,6 +1,6 @@
---
sidebar_label: 消息队列
title: 消息队列
sidebar_label: 数据订阅
title: 数据订阅
---
TDengine 3.0.0.0 开始对消息队列做了大幅的优化和增强以简化用户的解决方案。
@ -8,24 +8,17 @@ TDengine 3.0.0.0 开始对消息队列做了大幅的优化和增强以简化用
## 创建订阅主题
```sql
CREATE TOPIC [IF NOT EXISTS] topic_name AS {subquery | DATABASE db_name | STABLE stb_name };
CREATE TOPIC [IF NOT EXISTS] topic_name AS subquery;
```
订阅主题包括三种:列订阅、超级表订阅和数据库订阅。
**列订阅是**用 subquery 描述,支持过滤和标量函数和 UDF 标量函数,不支持 JOIN、GROUP BY、窗口切分子句、聚合函数和 UDF 聚合函数。列订阅规则如下:
TOPIC 支持过滤和标量函数和 UDF 标量函数,不支持 JOIN、GROUP BY、窗口切分子句、聚合函数和 UDF 聚合函数。列订阅规则如下:
1. TOPIC 一旦创建则返回结果的字段确定
2. 被订阅或用于计算的列不可被删除、修改
3. 列可以新增,但新增的列不出现在订阅结果字段中
4. 对于 select \*,则订阅展开为创建时所有的列(子表、普通表为数据列,超级表为数据列加标签列)
**超级表订阅和数据库订阅**规则如下:
1. 被订阅主体的 schema 变更不受限
2. 返回消息中 schema 是块级别的,每块的 schema 可能不一样
3. 列变更后写入的数据若未落盘,将以写入时的 schema 返回
4. 列变更后写入的数据若已落盘,将以落盘时的 schema 返回
## 删除订阅主题

View File

@ -26,10 +26,19 @@ subquery: SELECT [DISTINCT] select_list
[WHERE condition]
[PARTITION BY tag_list]
[window_clause]
[group_by_clause]
```
不支持 order_by、limit、slimit、fill 语句。
支持会话窗口、状态窗口与滑动窗口,其中会话窗口与状态窗口搭配超级表时,必须与 partition by tbname 一起使用。
```sql
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)]
}
```
其中SESSION 是会话窗口tol_val 是时间间隔的最大范围。在 tol_val 时间间隔范围内的数据都属于同一个窗口,如果连续的两条数据的时间超过 tol_val则自动开启下一个窗口。
例如,如下语句创建流式计算,同时自动创建名为 avg_vol 的超级表此流计算以一分钟为时间窗口、30 秒为前向增量统计这些电表的平均电压,并将来自 meters 表的数据的计算结果写入 avg_vol 表,不同 partition 的数据会分别创建子表并写入不同子表。
@ -88,23 +97,3 @@ T = 最新事件时间 - watermark
2. 重新计算:从 TSDB 中重新查找对应窗口的所有数据并重新计算得到最新结果
无论在哪种模式下watermark 都应该被妥善设置,来得到正确结果(直接丢弃模式)或避免频繁触发重算带来的性能开销(重新计算模式)。
## 流式计算与会话窗口session window
```sql
window_clause: {
SESSION(ts_col, tol_val)
| STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [FILL(fill_mod_and_val)]
}
```
其中SESSION 是会话窗口tol_val 是时间间隔的最大范围。在 tol_val 时间间隔范围内的数据都属于同一个窗口,如果连续的两条数据的时间超过 tol_val则自动开启下一个窗口。
## 流式计算的暂停与恢复
```sql
STOP STREAM stream_name;
RESUME STREAM stream_name;
```

View File

@ -0,0 +1,95 @@
---
sidebar_label: 3.0 版本语法变更
title: 3.0 版本语法变更
description: "TDengine 3.0 版本的语法变更说明"
---
## SQL 基本元素变更
| # | **元素** | **<div style={{width: 60}}>差异性</div>** | **说明** |
| - | :------- | :-------- | :------- |
| 1 | VARCHAR | 新增 | BINARY类型的别名。
| 2 | TIMESTAMP字面量 | 新增 | 新增支持 TIMESTAMP 'timestamp format' 语法。
| 3 | _ROWTS伪列 | 新增 | 表示时间戳主键。是_C0伪列的别名。
| 4 | INFORMATION_SCHEMA | 新增 | 包含各种SCHEMA定义的系统数据库。
| 5 | PERFORMANCE_SCHEMA | 新增 | 包含运行信息的系统数据库。
| 6 | 连续查询 | 废除 | 不再支持连续查询。相关的各种语法和接口废除。
| 7 | 混合运算 | 增强 | 查询中的混合运算(标量运算和矢量运算混合)全面增强SELECT 的各个子句均全面支持符合语法语义的混合运算。
| 8 | 标签运算 | 新增 |在查询中,标签列可以像普通列一样参与各种运算,用于各种子句。
| 9 | 时间线子句和时间函数用于超级表查询 | 增强 |没有PARTITION BY时超级表的数据会被合并成一条时间线。
## SQL 语句变更
以下为 3.0 版本 SQL 语句相对于 2.x 版本的变更明细。
| # | **语句** | **<div style={{width: 60}}>差异性</div>** | **说明** |
| - | :------- | :-------- | :------- |
| 1 | ALTER ACCOUNT | 废除 | 2.x中为企业版功能3.0不再支持。语法暂时保留了执行报“This statement is no longer supported”错误。
| 2 | ALTER ALL DNODES | 新增 | 修改所有DNODE的参数。
| 3 | ALTER DATABASE | 调整 | 废除<ul><li>QUORUM写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。3.0.0版本STRICT暂不支持修改。</li><li>BLOCKSVNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。</li><li>UPDATE更新操作的支持模式。3.0版本所有数据库都支持部分列更新。</li><li>CACHELAST缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。</li><li>COMP3.0版本暂不支持修改。<br/>新增</li><li>CACHEMODEL表示是否在内存中缓存子表的最近数据。</li><li>CACHESIZE表示缓存子表最近数据的内存大小。</li><li>WAL_FSYNC_PERIOD代替原FSYNC参数。</li><li>WAL_LEVEL代替原WAL参数。<br/>调整</li><li>REPLICA3.0.0版本暂不支持修改。</li><li>KEEP3.0版本新增支持带单位的设置方式。</li></ul>
| 4 | ALTER STABLE | 调整 | 废除<ul><li>CHANGE TAG修改标签列的名称。3.0版本使用RENAME TAG代替。<br/>新增</li><li>RENAME TAG代替原CHANGE TAG子句。</li><li>COMMENT修改超级表的注释。</li></ul>
| 5 | ALTER TABLE | 调整 | 废除<ul><li>CHANGE TAG修改标签列的名称。3.0版本使用RENAME TAG代替。<br/>新增</li><li>RENAME TAG代替原CHANGE TAG子句。</li><li>COMMENT修改表的注释。</li><li>TTL修改表的生命周期。</li></ul>
| 6 | ALTER USER | 调整 | 废除<ul><li>PRIVILEGE修改用户权限。3.0版本使用GRANT和REVOKE来授予和回收权限。<br/>新增</li><li>ENABLE启用或停用此用户。</li><li>SYSINFO修改用户是否可查看系统信息。</li></ul>
| 7 | COMPACT VNODES | 暂不支持 | 整理指定VNODE的数据。3.0.0版本暂不支持。
| 8 | CREATE ACCOUNT | 废除 | 2.x中为企业版功能3.0不再支持。语法暂时保留了执行报“This statement is no longer supported”错误。
| 9 | CREATE DATABASE | 调整 | 废除<ul><li>BLOCKSVNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。</li><li>CACHEVNODE使用的内存块的大小。3.0版本使用BUFFER来表示VNODE写入内存池的大小。</li><li>CACHELAST缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。</li><li>DAYS数据文件存储数据的时间跨度。3.0版本使用DURATION代替。</li><li>FSYNC当 WAL 设置为 2 时,执行 fsync 的周期。3.0版本使用WAL_FSYNC_PERIOD代替。</li><li>QUORUM写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。</li><li>UPDATE更新操作的支持模式。3.0版本所有数据库都支持部分列更新。</li><li>WALWAL 级别。3.0版本使用WAL_LEVEL代替。<br/>新增</li><li>BUFFER一个 VNODE 写入内存池大小。</li><li>CACHEMODEL表示是否在内存中缓存子表的最近数据。</li><li>CACHESIZE表示缓存子表最近数据的内存大小。</li><li>DURATION代替原DAYS参数。新增支持带单位的设置方式。</li><li>PAGES一个 VNODE 中元数据存储引擎的缓存页个数。</li><li>PAGESIZE一个 VNODE 中元数据存储引擎的页大小。</li><li>RETENTIONS表示数据的聚合周期和保存时长。</li><li>STRICT表示数据同步的一致性要求。</li><li>SINGLE_STABLE表示此数据库中是否只可以创建一个超级表。</li><li>VGROUPS数据库中初始VGROUP的数目。</li><li>WAL_FSYNC_PERIOD代替原FSYNC参数。</li><li>WAL_LEVEL代替原WAL参数。</li><li>WAL_RETENTION_PERIODwal文件的额外保留策略用于数据订阅。</li><li>WAL_RETENTION_SIZEwal文件的额外保留策略用于数据订阅。</li><li>WAL_ROLL_PERIODwal文件切换时长。</li><li>WAL_SEGMENT_SIZEwal单个文件大小。<br/>调整</li><li>KEEP3.0版本新增支持带单位的设置方式。</li></ul>
| 10 | CREATE DNODE | 调整 | 新增主机名和端口号分开指定语法<ul><li>CREATE DNODE dnode_host_name PORT port_val</li></ul>
| 11 | CREATE INDEX | 新增 | 创建SMA索引。
| 12 | CREATE MNODE | 新增 | 创建管理节点。
| 13 | CREATE QNODE | 新增 | 创建查询节点。
| 14 | CREATE STABLE | 调整 | 新增表参数语法<li>COMMENT表注释。</li>
| 15 | CREATE STREAM | 新增 | 创建流。
| 16 | CREATE TABLE | 调整 | 新增表参数语法<ul><li>COMMENT表注释。</li><li>WATERMARK指定窗口的关闭时间。</li><li>MAX_DELAY用于控制推送计算结果的最大延迟。</li><li>ROLLUP指定的聚合函数提供基于多层级的降采样聚合结果。</li><li>SMA提供基于数据块的自定义预计算功能。</li><li>TTL用来指定表的生命周期的参数。</li></ul>
| 17 | CREATE TOPIC | 新增 | 创建订阅主题。
| 18 | DROP ACCOUNT | 废除 | 2.x中为企业版功能3.0不再支持。语法暂时保留了执行报“This statement is no longer supported”错误。
| 19 | DROP CONSUMER GROUP | 新增 | 删除消费组。
| 20 | DROP INDEX | 新增 | 删除索引。
| 21 | DROP MNODE | 新增 | 删除管理节点。
| 22 | DROP QNODE | 新增 | 删除查询节点。
| 23 | DROP STREAM | 新增 | 删除流。
| 24 | DROP TABLE | 调整 | 新增批量删除语法
| 25 | DROP TOPIC | 新增 | 删除订阅主题。
| 26 | EXPLAIN | 新增 | 查看查询语句的执行计划。
| 27 | GRANT | 新增 | 授予用户权限。
| 28 | KILL TRANSACTION | 新增 | 终止管理节点的事务。
| 29 | KILL STREAM | 废除 | 终止连续查询。3.0版本不再支持连续查询,而是用更通用的流计算来代替。
| 30 | MERGE VGROUP | 新增 | 合并VGROUP。
| 31 | REVOKE | 新增 | 回收用户权限。
| 32 | SELECT | 调整 | <ul><li>SELECT关闭隐式结果列输出列均需要由SELECT子句来指定。</li><li>DISTINCT功能全面支持。2.x版本只支持对标签列去重并且不可以和JOIN、GROUP BY等子句混用。</li><li>JOIN功能增强。增加支持JOIN后WHERE条件中有OR条件JOIN后的多表运算JOIN后的多表GROUP BY。</li><li>FROM后子查询功能大幅增强。不限制子查询嵌套层数支持子查询和UNION ALL混合使用移除其他一些之前版本的语法限制。</li><li>WHERE后可以使用任意的标量表达式。</li><li>GROUP BY功能增强。支持任意标量表达式及其组合的分组。</li><li>SESSION可以用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。</li><li>STATE_WINDOW可以用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。</li><li>ORDER BY功能大幅增强。不再必须和GROUP BY子句一起使用不再有排序表达式个数的限制增加支持NULLS FIRST/LAST语法功能支持符合语法语义的任意表达式。</li><li>新增PARTITION BY语法。替代原来的GROUP BY tags。</li></ul>
| 33 | SHOW ACCOUNTS | 废除 | 2.x中为企业版功能3.0不再支持。语法暂时保留了执行报“This statement is no longer supported”错误。
| 34 | SHOW APPS |新增 | 显示接入集群的应用(客户端)信息。
| 35 | SHOW CONSUMERS | 新增 | 显示当前数据库下所有活跃的消费者的信息。
| 36 | SHOW DATABASES | 调整 | 3.0版本只显示数据库名。
| 37 | SHOW FUNCTIONS | 调整 | 3.0版本只显示自定义函数名。
| 38 | SHOW LICENCE | 新增 | 和SHOW GRANTS 命令等效。
| 39 | SHOW INDEXES | 新增 | 显示已创建的索引。
| 40 | SHOW LOCAL VARIABLES | 新增 | 显示当前客户端配置参数的运行值。
| 41 | SHOW MODULES | 废除 | 显示当前系统中所安装的组件的信息。
| 42 | SHOW QNODES | 新增 | 显示当前系统中QNODE的信息。
| 43 | SHOW STABLES | 调整 | 3.0版本只显示超级表名。
| 44 | SHOW STREAMS | 调整 | 2.x版本此命令显示系统中已创建的连续查询的信息。3.0版本废除了连续查询,用流代替。此命令显示已创建的流。
| 45 | SHOW SUBSCRIPTIONS | 新增 | 显示当前数据库下的所有的订阅关系
| 46 | SHOW TABLES | 调整 | 3.0版本只显示表名。
| 47 | SHOW TABLE DISTRIBUTED | 新增 | 显示表的数据分布信息。代替2.x版本中的SELECT _block_dist() FROM { tb_name | stb_name }方式。
| 48 | SHOW TOPICS | 新增 | 显示当前数据库下的所有订阅主题。
| 49 | SHOW TRANSACTIONS | 新增 | 显示当前系统中正在执行的事务的信息。
| 50 | SHOW DNODE VARIABLES | 新增 |显示指定DNODE的配置参数。
| 51 | SHOW VNODES | 暂不支持 | 显示当前系统中VNODE的信息。3.0.0版本暂不支持。
| 52 | SPLIT VGROUP | 新增 | 拆分VGROUP。
| 53 | TRIM DATABASE | 新增 | 删除过期数据,并根据多级存储的配置归整数据。
## SQL 函数变更
| # | **函数** | ** <div style={{width: 60}}>差异性</div> ** | **说明** |
| - | :------- | :-------- | :------- |
| 1 | TWA | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 2 | IRATE | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 3 | LEASTSQUARES | 增强 | 可以用于超级表了。
| 4 | ELAPSED | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 5 | DIFF | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 6 | DERIVATIVE | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 7 | CSUM | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 8 | MAVG | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 9 | SAMPLE | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 10 | STATECOUNT | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 11 | STATEDURATION | 增强 | 可以直接用于超级表了。没有PARTITION BY时超级表的数据会被合并成一条时间线。

View File

@ -3,7 +3,7 @@ title: TAOS SQL
description: "TAOS SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容"
---
本文档说明 TAOS SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容。阅读本文档需要读者具有基本的 SQL 语言的基础。
本文档说明 TAOS SQL 支持的语法规则、主要查询功能、支持的 SQL 查询函数,以及常用技巧等内容。阅读本文档需要读者具有基本的 SQL 语言的基础。TDengine 3.0 版本相比 2.x 版本做了大量改进和优化,特别是查询引擎进行了彻底的重构,因此 SQL 语法相比 2.x 版本有很多变更。详细的变更内容请见 [3.0 版本语法变更](/taos-sql/changes) 章节
TAOS SQL 是用户对 TDengine 进行数据写入和查询的主要工具。TAOS SQL 提供标准的 SQL 语法并针对时序数据和业务的特点优化和新增了许多语法和功能。TAOS SQL 语句的最大长度为 1M。TAOS SQL 不支持关键字的缩写,例如 DELETE 不能缩写为 DEL。

View File

@ -2,7 +2,7 @@
title: REST API
---
为支持各种不同类型平台的开发TDengine 提供符合 REST 设计标准的 API即 REST API。为最大程度降低学习成本不同于其他数据库 REST API 的设计方法TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。
为支持各种不同类型平台的开发TDengine 提供符合 REST 设计标准的 API即 REST API。为最大程度降低学习成本不同于其他数据库 REST API 的设计方法TDengine 直接通过 HTTP POST 请求 BODY 中包含的 SQL 语句来操作数据库,仅需要一个 URL。REST 连接器的使用参见 [视频教程](https://www.taosdata.com/blog/2020/11/11/1965.html)。
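除了命令行 curl应用程序也可以直接通过 HTTP 客户端库调用该接口。下面是一段使用 libcurl 的 C 语言示意(非官方示例,服务端地址与授权码沿用下文 curl 示例中的取值):
```c
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
  CURL* curl = curl_easy_init();
  if (curl == NULL) return 1;

  // 鉴权方式与下文命令行示例一致Basic 认证,值为 root:taosdata 的 Base64 编码
  struct curl_slist* headers = NULL;
  headers = curl_slist_append(headers, "Authorization: Basic cm9vdDp0YW9zZGF0YQ==");

  curl_easy_setopt(curl, CURLOPT_URL, "http://h1.taosdata.com:6041/rest/sql");
  curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
  curl_easy_setopt(curl, CURLOPT_POSTFIELDS,
                   "select name, ntables, status from information_schema.ins_databases;");
  curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);  // 对应命令行中的 -L

  CURLcode res = curl_easy_perform(curl);  // 响应JSON默认写到标准输出
  if (res != CURLE_OK) {
    fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
  }

  curl_slist_free_all(headers);
  curl_easy_cleanup(curl);
  return res == CURLE_OK ? 0 : 1;
}
```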
:::note
与原生连接器的一个区别是RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,所有对表名、超级表名的引用都需要指定数据库名前缀。支持在 RESTful URL 中指定 db_name这时如果 SQL 语句中没有指定数据库名前缀的话,会使用 URL 中指定的这个 db_name。
@ -20,8 +20,10 @@ RESTful 接口不依赖于任何 TDengine 的库,因此客户端不需要安
下面示例是列出所有的数据库,请把 h1.taosdata.com 和 6041缺省值替换为实际运行的 TDengine 服务 FQDN 和端口号:
```html
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" h1.taosdata.com:6041/rest/sql
```bash
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \
-d "select name, ntables, status from information_schema.ins_databases;" \
h1.taosdata.com:6041/rest/sql
```
返回值结果如下表示验证通过:
@ -35,188 +37,27 @@ curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" -d "show databases;" h1.t
"VARCHAR",
64
],
[
"create_time",
"TIMESTAMP",
8
],
[
"vgroups",
"SMALLINT",
2
],
[
"ntables",
"BIGINT",
8
],
[
"replica",
"TINYINT",
1
],
[
"strict",
"VARCHAR",
4
],
[
"duration",
"VARCHAR",
10
],
[
"keep",
"VARCHAR",
32
],
[
"buffer",
"INT",
4
],
[
"pagesize",
"INT",
4
],
[
"pages",
"INT",
4
],
[
"minrows",
"INT",
4
],
[
"maxrows",
"INT",
4
],
[
"comp",
"TINYINT",
1
],
[
"precision",
"VARCHAR",
2
],
[
"status",
"VARCHAR",
10
],
[
"retention",
"VARCHAR",
60
],
[
"single_stable",
"BOOL",
1
],
[
"cachemodel",
"VARCHAR",
11
],
[
"cachesize",
"INT",
4
],
[
"wal_level",
"TINYINT",
1
],
[
"wal_fsync_period",
"INT",
4
],
[
"wal_retention_period",
"INT",
4
],
[
"wal_retention_size",
"BIGINT",
8
],
[
"wal_roll_period",
"INT",
4
],
[
"wal_seg_size",
"BIGINT",
8
]
],
"data": [
[
"information_schema",
null,
null,
14,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
"ready",
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
16,
"ready"
],
[
"performance_schema",
null,
null,
3,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
null,
"ready",
null,
null,
null,
null,
null,
null,
null,
null,
null,
null
9,
"ready"
]
],
"rows": 2
@ -231,21 +72,21 @@ http://<fqdn>:<port>/rest/sql/[db_name]
参数说明:
- fqnd: 集群中的任一台主机 FQDN 或 IP 地址
- port: 配置文件中 httpPort 配置项,缺省为 6041
- fqdn: 集群中的任一台主机 FQDN 或 IP 地址
- port: 配置文件中 httpPort 配置项,缺省为 6041
- db_name: 可选参数,指定本次所执行的 SQL 语句的默认数据库库名。
例如:`http://h1.taos.com:6041/rest/sql/test` 是指向地址为 `h1.taos.com:6041` 的 URL并将默认使用的数据库库名设置为 `test`。
HTTP 请求的 Header 里需带有身份认证信息TDengine 支持 Basic 认证与自定义认证两种机制,后续版本将提供标准安全的数字签名机制来做身份验证。
- [自定义身份认证信息](#自定义授权码)如下所示
- [自定义身份认证信息](#自定义授权码)如下所示
```text
Authorization: Taosd <TOKEN>
```
- Basic 身份认证信息如下所示
- Basic 身份认证信息如下所示
```text
Authorization: Basic <TOKEN>
@ -259,13 +100,13 @@ HTTP 请求的 BODY 里就是一个完整的 SQL 语句SQL 语句中的数据
curl -L -H "Authorization: Basic <TOKEN>" -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
```
或者
或者
```bash
curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
```
其中,`TOKEN` 为 `{username}:{password}` 经过 Base64 编码之后的字符串,例如 `root:taosdata` 编码后为 `cm9vdDp0YW9zZGF0YQ==`
其中,`TOKEN` 为 `{username}:{password}` 经过 Base64 编码之后的字符串,例如 `root:taosdata` 编码后为 `cm9vdDp0YW9zZGF0YQ==`
## HTTP 返回格式
@ -282,27 +123,9 @@ curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
### HTTP body 结构
<table>
<tr>
<th>执行结果</th>
<th>说明</th>
<th>样例</th>
</tr>
<tr>
<td>正确执行</td>
<td>
codeint0 代表成功
<br/>
<br/>
column_meta[][3]any列信息每个列会用三个值来说明分别为列名(string)、列类型(string)、类型长度(int)
<br/>
<br/>
rowsint数据返回行数
<br/>
<br/>
data[][]any具体数据内容
</td>
<td>
#### 正确执行
样例:
```json
{
@ -313,23 +136,16 @@ curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
}
```
</td>
</tr>
<tr>
<td>正确查询</td>
<td>
codeint0 代表成功
<br/>
<br/>
column_meta[][3]any 列信息每个列会用三个值来说明分别为列名string、列类型string、类型长度int
<br/>
<br/>
rowsint数据返回行数
<br/>
<br/>
data[][]any具体数据内容
</td>
<td>
说明:
- code`int`0 代表成功。
- column_meta`[1][3]any`)只返回 `[["affected_rows", "INT", 4]]`。
- rows`int`)只返回 `1`。
- data`[][]any`)返回受影响行数。
#### 正确查询
样例:
```json
{
@ -385,17 +201,35 @@ curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
}
```
</td>
</tr>
<tr>
<td>错误</td>
<td>
codeint错误码
<br/>
<br/>
descstring错误描述
</td>
<td>
说明:
- code`int`0 代表成功。
- column_meta`[][3]any` 列信息每个列会用三个值来说明分别为列名string、列类型string、类型长度int
- rows`int`)数据返回行数。
- data`[][]any`)具体数据内容(时间格式仅支持 RFC3339结果集为 0 时区)。
列类型使用如下字符串:
- "NULL"
- "BOOL"
- "TINYINT"
- "SMALLINT"
- "INT"
- "BIGINT"
- "FLOAT"
- "DOUBLE"
- "VARCHAR"
- "TIMESTAMP"
- "NCHAR"
- "TINYINT UNSIGNED"
- "SMALLINT UNSIGNED"
- "INT UNSIGNED"
- "BIGINT UNSIGNED"
- "JSON"
#### 错误
样例:
```json
{
@ -404,30 +238,10 @@ curl -L -u username:password -d "<SQL>" <ip>:<PORT>/rest/sql/[db_name]
}
```
</td>
</tr>
</table>
说明:
### 说明
- 时间格式仅支持 RFC3339结果集为 0 时区
- 列类型使用如下字符串:
> "NULL"
> "BOOL"
> "TINYINT"
> "SMALLINT"
> "INT"
> "BIGINT"
> "FLOAT"
> "DOUBLE"
> "VARCHAR"
> "TIMESTAMP"
> "NCHAR"
> "TINYINT UNSIGNED"
> "SMALLINT UNSIGNED"
> "INT UNSIGNED"
> "BIGINT UNSIGNED"
> "JSON"
- code`int`)错误码。
- desc`string`)错误描述。
## 自定义授权码
@ -439,11 +253,9 @@ curl http://<fqnd>:<port>/rest/login/<username>/<password>
其中,`fqdn` 是 TDengine 数据库的 FQDN 或 IP 地址,`port` 是 TDengine 服务的端口号,`username` 为数据库用户名,`password` 为数据库密码,返回值为 JSON 格式,各字段含义如下:
- status请求结果的标志位
- code返回值代码
- desc授权码
- status请求结果的标志位。
- code返回值代码。
- desc授权码。
获取授权码示例:

View File

@ -8,18 +8,13 @@ TDengine 提供了丰富的应用程序开发接口,为了便于用户快速
## 支持的平台
目前 TDengine 的原生接口连接器可支持的平台包括X64/X86/ARM64/ARM32/MIPS/Alpha 等硬件平台,以及 Linux/Win64/Win32 等开发环境。对照矩阵如下:
目前 TDengine 的原生接口连接器可支持的平台包括X64/ARM64 等硬件平台,以及 Linux/Win64 等开发环境。对照矩阵如下:
| **CPU** | **OS** | **Java** | **Python** | **Go** | **Node.js** | **C#** | **Rust** | C/C++ |
| -------------- | --------- | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
| **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● |
| **X86 64bit** | **Win32** | ● | ● | ● | ● | ○ | ○ | ● |
| **X86 32bit** | **Win32** | ○ | ○ | ○ | ○ | ○ | ○ | ● |
| **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
| **MIPS 龙芯** | **Linux** | ○ | ○ | ○ | ○ | ○ | ○ | ○ |
| **Alpha 申威** | **Linux** | ○ | ○ | -- | -- | -- | -- | ○ |
| **X86 海光** | **Linux** | ○ | ○ | ○ | -- | -- | -- | ○ |
其中 ● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。

View File

@ -404,47 +404,3 @@ TDengine 的异步 API 均采用非阻塞调用模式。应用程序可以用多
**支持版本**
该功能接口从 2.3.0.0 版本开始支持。
### 订阅和消费 API
订阅 API 目前支持订阅一张或多张表,并通过定期轮询的方式不断获取写入表中的最新数据。
- `TAOS_SUB *taos_subscribe(TAOS* taos, int restart, const char* topic, const char *sql, TAOS_SUBSCRIBE_CALLBACK fp, void *param, int interval)`
该函数负责启动订阅服务,成功时返回订阅对象,失败时返回 `NULL`,其参数为:
- taos已经建立好的数据库连接
- restart如果订阅已经存在是重新开始还是继续之前的订阅
- topic订阅的主题即名称此参数是订阅的唯一标识
- sql订阅的查询语句此语句只能是 `select` 语句,只应查询原始数据,只能按时间正序查询数据
- fp收到查询结果时的回调函数稍后介绍函数原型只在异步调用时使用同步调用时此参数应该传 `NULL`
- param调用回调函数时的附加参数系统 API 将其原样传递到回调函数,不进行任何处理
- interval轮询周期单位为毫秒。异步调用时将根据此参数周期性的调用回调函数为避免对系统性能造成影响不建议将此参数设置的过小同步调用时如两次调用 `taos_consume()` 的间隔小于此周期API 将会阻塞,直到时间间隔超过此周期。
- `typedef void (*TAOS_SUBSCRIBE_CALLBACK)(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code)`
异步模式下,回调函数的原型,其参数为:
- tsub订阅对象
- res查询结果集注意结果集中可能没有记录
- param调用 `taos_subscribe()` 时客户程序提供的附加参数
- code错误码
:::note
在这个回调函数里不可以做耗时过长的处理,尤其是对于返回的结果集中数据较多的情况,否则有可能导致客户端阻塞等异常状态。如果必须进行复杂计算,则建议在另外的线程中进行处理。
:::
- `TAOS_RES *taos_consume(TAOS_SUB *tsub)`
同步模式下,该函数用来获取订阅的结果。 用户应用程序将其置于一个循环之中。 如两次调用 `taos_consume()` 的间隔小于订阅的轮询周期API 将会阻塞,直到时间间隔超过此周期。如果数据库有新记录到达,该 API 将返回该最新的记录,否则返回一个没有记录的空结果集。 如果返回值为 `NULL`,说明系统出错。 异步模式下,用户程序不应调用此 API。
:::note
在调用 `taos_consume()` 之后,用户应用应确保尽快调用 `taos_fetch_row()` 或 `taos_fetch_block()` 来处理订阅结果,否则服务端会持续缓存查询结果数据等待客户端读取,极端情况下会导致服务端内存消耗殆尽,影响服务稳定性。
:::
- `void taos_unsubscribe(TAOS_SUB *tsub, int keepProgress)`
取消订阅。 如参数 `keepProgress` 不为 0API 会保留订阅的进度信息,后续调用 `taos_subscribe()` 时可以基于此进度继续;否则将删除进度信息,后续只能重新开始读取数据。

View File

@ -83,7 +83,7 @@ Maven 项目中,在 pom.xml 中添加以下依赖:
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>2.0.**</version>
<version>3.0.0</version>
</dependency>
```
@ -93,12 +93,12 @@ Maven 项目中,在 pom.xml 中添加以下依赖:
可以通过下载 TDengine 的源码,自己编译最新版本的 Java connector
```shell
git clone https://github.com/taosdata/taos-connector-jdbc.git --branch 2.0
git clone https://github.com/taosdata/taos-connector-jdbc.git
cd taos-connector-jdbc
mvn clean install -Dmaven.test.skip=true
```
编译后,在 target 目录下会产生 taos-jdbcdriver-2.0.XX-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。
编译后,在 target 目录下会产生 taos-jdbcdriver-3.0.*-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。
</TabItem>
</Tabs>
@ -198,7 +198,7 @@ url 中的配置参数如下:
- user登录 TDengine 用户名,默认值 'root'。
- password用户登录密码默认值 'taosdata'。
- batchfetch: true在执行查询时批量拉取结果集false逐行拉取结果集。默认值为false。逐行拉取结果集使用 HTTP 方式进行数据传输。从 taos-jdbcdriver-2.0.38 开始JDBC REST 连接增加批量拉取数据功能。taos-jdbcdriver 与 TDengine 之间通过 WebSocket 连接进行数据传输。相较于 HTTPWebSocket 可以使 JDBC REST 连接支持大数据量查询,并提升查询性能。
- batchfetch: true在执行查询时批量拉取结果集false逐行拉取结果集。默认值为false。逐行拉取结果集使用 HTTP 方式进行数据传输。JDBC REST 连接支持批量拉取数据功能。taos-jdbcdriver 与 TDengine 之间通过 WebSocket 连接进行数据传输。相较于 HTTPWebSocket 可以使 JDBC REST 连接支持大数据量查询,并提升查询性能。
- charset: 当开启批量拉取数据时,指定解析字符串数据的字符集。
- batchErrorIgnoretrue在执行 Statement 的 executeBatch 时,如果中间有一条 SQL 执行失败,继续执行下面的 SQL 了。false不再执行失败 SQL 后的任何语句。默认值为false。
- httpConnectTimeout: 连接超时时间,单位 ms 默认值为 5000。
@ -216,7 +216,7 @@ url 中的配置参数如下:
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
```
- 从 taos-jdbcdriver-2.0.36 开始,如果在 url 中指定了 dbname那么JDBC REST 连接会默认使用/rest/sql/dbname 作为 restful 请求的 url在 SQL 中不需要指定 dbname。例如url 为 jdbc:TAOS-RS://127.0.0.1:6041/test那么可以执行 sqlinsert into t1 using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
- 如果在 url 中指定了 dbname那么JDBC REST 连接会默认使用/rest/sql/dbname 作为 restful 请求的 url在 SQL 中不需要指定 dbname。例如url 为 jdbc:TAOS-RS://127.0.0.1:6041/test那么可以执行 sqlinsert into t1 using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
:::
@ -230,7 +230,7 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFra
**注意**
- 应用中设置的 client parameter 为进程级别的,即如果要更新 client 的参数,需要重启应用。这是因为 client parameter 是全局参数,仅在应用程序的第一次设置生效。
- 以下示例代码基于 taos-jdbcdriver-2.0.36
- 以下示例代码基于 taos-jdbcdriver-3.0.0
```java
public Connection getConn() throws Exception{
@ -367,7 +367,7 @@ TDengine 的 JDBC 原生连接实现大幅改进了参数绑定方式对数据
**注意**
- JDBC REST 连接目前不支持参数绑定
- 以下示例代码基于 taos-jdbcdriver-2.0.36
- 以下示例代码基于 taos-jdbcdriver-3.0.0
- binary 类型数据需要调用 setString 方法nchar 类型数据需要调用 setNString 方法
- setString 和 setNString 都要求用户在 size 参数里声明表定义中对应列的列宽
@ -635,7 +635,7 @@ TDengine 支持无模式写入功能。无模式写入兼容 InfluxDB 的 行协
**注意**
- JDBC REST 连接目前不支持无模式写入
- 以下示例代码基于 taos-jdbcdriver-2.0.36
- 以下示例代码基于 taos-jdbcdriver-3.0.0
```java
public class SchemalessInsertTest {
@ -666,7 +666,7 @@ public class SchemalessInsertTest {
}
```
### 订阅
### 数据订阅
TDengine Java 连接器支持订阅功能,应用 API 如下:
@ -712,14 +712,19 @@ while(true) {
}
```
`poll` 方法返回一个结果集,其中包含从上次 `poll` 到目前为止的所有新数据。请务必按需选择合理的调用 `poll` 的频率(如例子中的 `Duration.ofMillis(100)`),否则会给服务端造成不必要的压力
`poll` 每次调用获取一个消息
#### 关闭订阅
```java
// 取消订阅
consumer.unsubscribe();
// 关闭消费
consumer.close()
```
详情请参考:[数据订阅](../../../develop/tmq)
### 使用示例如下:
```java
@ -734,7 +739,7 @@ public abstract class ConsumerLoop {
config.setProperty("msg.with.table.name", "true");
config.setProperty("enable.auto.commit", "true");
config.setProperty("group.id", "group1");
config.setProperty("value.deserializer", "com.taosdata.jdbc.tmq.ConsumerTest.ResultDeserializer");
config.setProperty("value.deserializer", "com.taosdata.jdbc.tmq.ConsumerTest.ConsumerLoop$ResultDeserializer");
this.consumer = new TaosConsumer<>(config);
this.topics = Collections.singletonList("topic_speed");
@ -754,8 +759,9 @@ public abstract class ConsumerLoop {
process(record);
}
}
consumer.unsubscribe();
} finally {
consumer.close();
consumer.close();
shutdownLatch.countDown();
}
}
@ -765,11 +771,11 @@ public abstract class ConsumerLoop {
shutdownLatch.await();
}
static class ResultDeserializer extends ReferenceDeserializer<ResultBean> {
public static class ResultDeserializer extends ReferenceDeserializer<ResultBean> {
}
static class ResultBean {
public static class ResultBean {
private Timestamp ts;
private int speed;
@ -875,6 +881,7 @@ public static void main(String[] args) throws Exception {
| taos-jdbcdriver 版本 | 主要变化 |
| :------------------: | :----------------------------: |
| 3.0.0 | 支持 TDengine 3.0 |
| 2.0.39 - 2.0.40 | 增加 REST 连接/请求 超时设置 |
| 2.0.38 | JDBC REST 连接增加批量拉取功能 |
| 2.0.37 | 增加对 json tag 支持 |
@ -900,7 +907,13 @@ public static void main(String[] args) throws Exception {
**解决方法**:重新安装 64 位 JDK。
4. 其它问题请参考 [FAQ](../../../train-faq/faq)
4. java.lang.NoSuchMethodError: setByteArray
**原因**taos-jdbcdriver 3.* 版本仅支持 TDengine 3.0 及以上版本。
**解决方法**:使用 taos-jdbcdriver 2.* 版本连接 TDengine 2.* 版本。
其它问题请参考 [FAQ](../../../train-faq/faq)
## API 参考

View File

@ -5,11 +5,11 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
## TDengine 服务端支持的平台列表
| | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **Other Linux** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **华为 EulerOS** |
| ------------ | ----------------- | ---------------- | ---------------- | --------------- | ------------ | ----------------- | ---------------- | ---------------- |
| X64 | ● | ● | ● | | ● | ● | ● | |
| 树莓派 ARM64 | | | | ● | | | | |
| 华为云 ARM64 | | | | | | | | ● |
| | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- |
| X64 | ● | ● | ● | ● | ● | ● | ● |
| 树莓派 ARM64 | | | ● | | | | |
| 华为云 ARM64 | | | | ● | | | |
注: ● 表示经过官方测试验证, ○ 表示非官方测试验证。
@ -19,15 +19,15 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
对照矩阵如下:
| **CPU** | **X64 64bit** | | | **X86 32bit** | **ARM64** | **MIPS 龙芯** | **Alpha 申威** | **X64 海光** |
| ----------- | ------------- | --------- | --------- | ------------- | --------- | ------------- | -------------- | ------------ |
| **OS** | **Linux** | **Win64** | **Win32** | **Win32** | **Linux** | **Linux** | **Linux** | **Linux** |
| **C/C++** | ● | ● | ● | ○ | ● | ● | ● | ● |
| **JDBC** | ● | ● | ● | ○ | ● | ● | ● | ● |
| **Python** | ● | ● | ● | ○ | ● | ● | -- | ● |
| **Go** | ● | ● | ● | ○ | ● | ○ | -- | -- |
| **NodeJs** | ● | ● | ○ | ○ | ● | ○ | -- | -- |
| **C#** | ● | ● | ○ | ○ | ○ | ○ | -- | -- |
| **RESTful** | ● | ● | ● | ● | ● | ● | ● | ● |
| **CPU** | **X64 64bit** | **X64 64bit** | **ARM64** |
| ----------- | ------------- | ------------- | --------- |
| **OS** | **Linux** | **Win64** | **Linux** |
| **C/C++** | ● | ● | ● |
| **JDBC** | ● | ● | ● |
| **Python** | ● | ● | ● |
| **Go** | ● | ● | ● |
| **NodeJs** | ● | ● | ● |
| **C#** | ● | ● | ○ |
| **RESTful** | ● | ● | ● |
注:● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。

View File

@ -647,3 +647,173 @@ charset 的有效值是 UTF-8。
| 含义 | 是否启动 udf 服务 |
| 取值范围 | 0: 不启动1启动 |
| 缺省值 | 1 |
## 2.X 与 3.0 配置参数对比
| # | **参数** | **适用于 2.X 版本** | **适用于 3.0 版本** |
| --- | :-----------------: | --------------- | --------------- |
| 1 | firstEp | 是 | 是 |
| 2 | secondEp | 是 | 是 |
| 3 | fqdn | 是 | 是 |
| 4 | serverPort | 是 | 是 |
| 5 | maxShellConns | 是 | 是 |
| 6 | monitor | 是 | 是 |
| 7 | monitorFqdn | 否 | 是 |
| 8 | monitorPort | 否 | 是 |
| 9 | monitorInterval | 是 | 是 |
| 10 | monitorMaxLogs | 否 | 是 |
| 11 | monitorComp | 否 | 是 |
| 12 | telemetryReporting | 是 | 是 |
| 13 | telemetryInterval | 否 | 是 |
| 14 | telemetryServer | 否 | 是 |
| 15 | telemetryPort | 否 | 是 |
| 16 | queryPolicy | 否 | 是 |
| 17 | querySmaOptimize | 否 | 是 |
| 18 | queryBufferSize | 是 | 是 |
| 19 | maxNumOfDistinctRes | 是 | 是 |
| 20 | minSlidingTime | 是 | 是 |
| 21 | minIntervalTime | 是 | 是 |
| 22 | countAlwaysReturnValue | 是 | 是 |
| 23 | dataDir | 是 | 是 |
| 24 | minimalDataDirGB | 是 | 是 |
| 25 | supportVnodes | 否 | 是 |
| 26 | tempDir | 是 | 是 |
| 27 | minimalTmpDirGB | 是 | 是 |
| 28 | compressMsgSize | 是 | 是 |
| 29 | compressColData | 是 | 是 |
| 30 | smlChildTableName | 是 | 是 |
| 31 | smlTagName | 是 | 是 |
| 32 | smlDataFormat | 否 | 是 |
| 33 | statusInterval | 是 | 是 |
| 34 | shellActivityTimer | 是 | 是 |
| 35 | transPullupInterval | 否 | 是 |
| 36 | mqRebalanceInterval | 否 | 是 |
| 37 | ttlUnit | 否 | 是 |
| 38 | ttlPushInterval | 否 | 是 |
| 39 | numOfTaskQueueThreads | 否 | 是 |
| 40 | numOfRpcThreads | 否 | 是 |
| 41 | numOfCommitThreads | 是 | 是 |
| 42 | numOfMnodeReadThreads | 否 | 是 |
| 43 | numOfVnodeQueryThreads | 否 | 是 |
| 44 | numOfVnodeStreamThreads | 否 | 是 |
| 45 | numOfVnodeFetchThreads | 否 | 是 |
| 46 | numOfVnodeWriteThreads | 否 | 是 |
| 47 | numOfVnodeSyncThreads | 否 | 是 |
| 48 | numOfQnodeQueryThreads | 否 | 是 |
| 49 | numOfQnodeFetchThreads | 否 | 是 |
| 50 | numOfSnodeSharedThreads | 否 | 是 |
| 51 | numOfSnodeUniqueThreads | 否 | 是 |
| 52 | rpcQueueMemoryAllowed | 否 | 是 |
| 53 | logDir | 是 | 是 |
| 54 | minimalLogDirGB | 是 | 是 |
| 55 | numOfLogLines | 是 | 是 |
| 56 | asyncLog | 是 | 是 |
| 57 | logKeepDays | 是 | 是 |
| 58 | debugFlag | 是 | 是 |
| 59 | tmrDebugFlag | 是 | 是 |
| 60 | uDebugFlag | 是 | 是 |
| 61 | rpcDebugFlag | 是 | 是 |
| 62 | jniDebugFlag | 是 | 是 |
| 63 | qDebugFlag | 是 | 是 |
| 64 | cDebugFlag | 是 | 是 |
| 65 | dDebugFlag | 是 | 是 |
| 66 | vDebugFlag | 是 | 是 |
| 67 | mDebugFlag | 是 | 是 |
| 68 | wDebugFlag | 是 | 是 |
| 69 | sDebugFlag | 是 | 是 |
| 70 | tsdbDebugFlag | 是 | 是 |
| 71 | tqDebugFlag | 否 | 是 |
| 72 | fsDebugFlag | 是 | 是 |
| 73 | udfDebugFlag | 否 | 是 |
| 74 | smaDebugFlag | 否 | 是 |
| 75 | idxDebugFlag | 否 | 是 |
| 76 | tdbDebugFlag | 否 | 是 |
| 77 | metaDebugFlag | 否 | 是 |
| 78 | timezone | 是 | 是 |
| 79 | locale | 是 | 是 |
| 80 | charset | 是 | 是 |
| 81 | udf | 是 | 是 |
| 82 | enableCoreFile | 是 | 是 |
| 83 | arbitrator | 是 | 否 |
| 84 | numOfThreadsPerCore | 是 | 否 |
| 85 | numOfMnodes | 是 | 否 |
| 86 | vnodeBak | 是 | 否 |
| 87 | balance | 是 | 否 |
| 88 | balanceInterval | 是 | 否 |
| 89 | offlineThreshold | 是 | 否 |
| 90 | role | 是 | 否 |
| 91 | dnodeNopLoop | 是 | 否 |
| 92 | keepTimeOffset | 是 | 否 |
| 93 | rpcTimer | 是 | 否 |
| 94 | rpcMaxTime | 是 | 否 |
| 95 | rpcForceTcp | 是 | 否 |
| 96 | tcpConnTimeout | 是 | 否 |
| 97 | syncCheckInterval | 是 | 否 |
| 98 | maxTmrCtrl | 是 | 否 |
| 99 | monitorReplica | 是 | 否 |
| 100 | smlTagNullName | 是 | 否 |
| 101 | keepColumnName | 是 | 否 |
| 102 | ratioOfQueryCores | 是 | 否 |
| 103 | maxStreamCompDelay | 是 | 否 |
| 104 | maxFirstStreamCompDelay | 是 | 否 |
| 105 | retryStreamCompDelay | 是 | 否 |
| 106 | streamCompDelayRatio | 是 | 否 |
| 107 | maxVgroupsPerDb | 是 | 否 |
| 108 | maxTablesPerVnode | 是 | 否 |
| 109 | minTablesPerVnode | 是 | 否 |
| 110 | tableIncStepPerVnode | 是 | 否 |
| 111 | cache | 是 | 否 |
| 112 | blocks | 是 | 否 |
| 113 | days | 是 | 否 |
| 114 | keep | 是 | 否 |
| 115 | minRows | 是 | 否 |
| 116 | maxRows | 是 | 否 |
| 117 | quorum | 是 | 否 |
| 118 | comp | 是 | 否 |
| 119 | walLevel | 是 | 否 |
| 120 | fsync | 是 | 否 |
| 121 | replica | 是 | 否 |
| 122 | partitions | 是 | 否 |
| 123 | quorum | 是 | 否 |
| 124 | update | 是 | 否 |
| 125 | cachelast | 是 | 否 |
| 126 | maxSQLLength | 是 | 否 |
| 127 | maxWildCardsLength | 是 | 否 |
| 128 | maxRegexStringLen | 是 | 否 |
| 129 | maxNumOfOrderedRes | 是 | 否 |
| 130 | maxConnections | 是 | 否 |
| 131 | mnodeEqualVnodeNum | 是 | 否 |
| 132 | http | 是 | 否 |
| 133 | httpEnableRecordSql | 是 | 否 |
| 134 | httpMaxThreads | 是 | 否 |
| 135 | restfulRowLimit | 是 | 否 |
| 136 | httpDbNameMandatory | 是 | 否 |
| 137 | httpKeepAlive | 是 | 否 |
| 138 | enableRecordSql | 是 | 否 |
| 139 | maxBinaryDisplayWidth | 是 | 否 |
| 140 | stream | 是 | 否 |
| 141 | retrieveBlockingModel | 是 | 否 |
| 142 | tsdbMetaCompactRatio | 是 | 否 |
| 143 | defaultJSONStrType | 是 | 否 |
| 144 | walFlushSize | 是 | 否 |
| 145 | keepTimeOffset | 是 | 否 |
| 146 | flowctrl | 是 | 否 |
| 147 | slaveQuery | 是 | 否 |
| 148 | adjustMaster | 是 | 否 |
| 149 | topicBinaryLen | 是 | 否 |
| 150 | telegrafUseFieldNum | 是 | 否 |
| 151 | deadLockKillQuery | 是 | 否 |
| 152 | clientMerge | 是 | 否 |
| 153 | sdbDebugFlag | 是 | 否 |
| 154 | odbcDebugFlag | 是 | 否 |
| 155 | httpDebugFlag | 是 | 否 |
| 156 | monDebugFlag | 是 | 否 |
| 157 | cqDebugFlag | 是 | 否 |
| 158 | shortcutFlag | 是 | 否 |
| 159 | probeSeconds | 是 | 否 |
| 160 | probeKillSeconds | 是 | 否 |
| 161 | probeInterval | 是 | 否 |
| 162 | lossyColumns | 是 | 否 |
| 163 | fPrecision | 是 | 否 |
| 164 | dPrecision | 是 | 否 |
| 165 | maxRange | 是 | 否 |
| 166 | range | 是 | 否 |

View File

@ -162,8 +162,6 @@ Vnode 会保持一个数据版本号version对内存数据进行持久
一个 vnode 启动时角色leader、follower是不定的数据是处于未同步状态它需要与虚拟节点组内其他节点建立 TCP 连接,并互相交换 status按照标准的 raft 一致性算法完成选主。
更多的关于数据复制的流程,请见[《TDengine 3.0 数据复制模块设计》](/tdinternal/replica/)。
### 同步复制
对于数据一致性要求更高的场景,异步数据复制提供的最终一致性无法满足要求。因此 TDengine 提供同步复制的机制供用户选择。在创建数据库时,除指定副本数 replica 之外,用户还需要指定新的参数 strict。如果 strict 等于 1它表示每次 leader 转发给副本时,需要等待半数以上副本达成一致后,才能通知应用,数据在 follower 已经写入成功。如果在一定的时间内得不到半数以上副本的确认leader vnode 将返回错误给应用。
@ -241,15 +239,16 @@ dataDir /mnt/data6 2 0
## 数据查询
TDengine 提供了多种多样针对表和超级表的查询处理功能除了常规的聚合查询之外还提供针对时序数据的窗口查询、统计聚合等功能。TDengine 的查询处理需要客户端、vnode、mnode 节点协同完成。
TDengine 提供了多种多样针对表和超级表的查询处理功能除了常规的聚合查询之外还提供针对时序数据的窗口查询、统计聚合等功能。TDengine 的查询处理需要客户端、vnode、qnode、mnode 节点协同完成,一个复杂的超级表聚合查询可能需要多个 vnode 和 qnode 节点共同分担查询和计算任务。
### 单表查询
### 查询基本流程
SQL 语句的解析和校验工作在客户端完成。解析 SQL 语句并生成抽象语法树Abstract Syntax TreeAST然后对其进行校验和检查。以及向管理节点mnode请求查询中指定表的元数据信息table metadata
根据元数据信息中的 End Point 信息将查询请求序列化后发送到该表所在的数据节点dnode。dnode 接收到查询请求后识别出该查询请求指向的虚拟节点vnode将消息转发到 vnode 的查询执行队列。vnode 的查询执行线程建立基础的查询执行环境,并立即返回该查询请求,同时开始执行该查询。
客户端在获取查询结果的时候dnode 的查询执行队列中的工作线程会等待 vnode 执行线程执行完成,才能将查询结果返回到请求的客户端。
1. 客户端解析输入 SQL 语句并生成抽象语法树Abstract Syntax TreeAST然后根据元数据信息对其进行校验和检查。在此期间元数据管理模块Catalog会向管理节点mnode或 vnode 请求查询中指定库和表的元数据信息table metadata
2. 在通过校验检查后,客户端将生成分布式的查询计划并对查询计划进行优化处理。
3. 客户端根据配置的查询策略进行任务调度处理,一个查询子任务会根据其数据亲缘关系或负载信息调度到某个 vnode 或 qnode 所属的数据节点dnode进行处理。
4. dnode 接收到查询请求后识别出该查询请求指向的虚拟节点vnode或查询节点qnode将消息转发到 vnode 或 qnode 的查询执行队列。
5. vnode 或 qnode 的查询执行线程建立基础的查询执行环境,并立即执行该查询,在得到部分可获取查询结果后通知客户端。
6. 客户端将启动下级查询任务或直接获取查询结果。
### 按时间轴聚合、降采样、插值
@ -279,12 +278,14 @@ TDengine 对每个数据采集点单独建表,但在实际应用中经常需
<center> 图 5 多表聚合查询原理图 </center>
1. 应用将一个查询条件发往系统;
2. taosc 将超级表的名字发往 meta node管理节点
3. 管理节点将超级表所拥有的 vnode 列表发回 taosc
4. taosc 将计算的请求连同标签过滤条件发往这些 vnode 对应的多个数据节点;
5. 每个 vnode 先在内存里查找出自己节点里符合标签过滤条件的表的集合,然后扫描存储的时序数据,完成相应的聚合计算,将结果返回给 taosc
6. taosc 将多个数据节点返回的结果做最后的聚合,将其返回给应用。
1. 客户端从 mnode 获取库和表的元数据信息;
2. mnode 返回请求的元数据信息;
3. 客户端向超级表所属的每个 vnode 发送查询请求;
4. vnode 启动本地查询,在获得查询结果后返回查询响应;
5. 客户端向聚合节点 (在本例中为 qnode发送查询请求
6. qnode 向每个 vnode 节点发送数据请求消息来拉取数据;
7. vnode 返回本节点的查询计算结果;
8. qnode 完成多节点数据聚合后将最终查询结果返回给客户端;
由于 TDengine 在 vnode 内将标签数据与时序数据分离存储,通过在内存里过滤标签数据,先找到需要参与聚合操作的表的集合,将需要扫描的数据集大幅减少,大幅提升聚合计算速度。同时,由于数据分布在多个 vnode/dnode聚合计算操作在多个 vnode 里并发进行,又进一步提升了聚合的速度。 对普通表的聚合函数以及绝大部分操作都适用于超级表,语法完全一样,细节请看 TAOS SQL。

Binary file not shown.

Before

Width:  |  Height:  |  Size: 15 KiB

After

Width:  |  Height:  |  Size: 192 KiB

View File

@ -3,7 +3,7 @@ sidebar_label: 发布历史
title: 发布历史
---
import Release from "/components/Release";
import Release from "/components/ReleaseV3";
<Release versionPrefix="3.0" />

View File

@ -45,10 +45,9 @@ static int32_t msg_process(TAOS_RES* msg) {
int32_t numOfFields = taos_field_count(msg);
int32_t* length = taos_fetch_lengths(msg);
int32_t precision = taos_result_precision(msg);
const char* tbName = tmq_get_table_name(msg);
rows++;
taos_print_row(buf, row, fields, numOfFields);
printf("row content from %s: %s\n", (tbName != NULL ? tbName : "table null"), buf);
printf("row content: %s\n", buf);
}
return rows;
@ -167,7 +166,7 @@ int32_t create_topic() {
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create topic topicname as select ts, c1, c2, c3 from tmqdb.stb where c1 > 1");
pRes = taos_query(pConn, "create topic topicname as select ts, c1, c2, c3, tbname from tmqdb.stb where c1 > 1");
if (taos_errno(pRes) != 0) {
printf("failed to create topic topicname, reason:%s\n", taos_errstr(pRes));
return -1;
@ -199,9 +198,7 @@ tmq_t* build_consumer() {
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "auto.offset.reset", "earliest");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "experimental.snapshot.enable", "true");
if (TMQ_CONF_OK != code) return NULL;
code = tmq_conf_set(conf, "msg.with.table.name", "true");
code = tmq_conf_set(conf, "experimental.snapshot.enable", "false");
if (TMQ_CONF_OK != code) return NULL;
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
@ -220,14 +217,7 @@ tmq_list_t* build_topic_list() {
return topicList;
}
void basic_consume_loop(tmq_t* tmq, tmq_list_t* topicList) {
int32_t code;
if ((code = tmq_subscribe(tmq, topicList))) {
fprintf(stderr, "%% Failed to tmq_subscribe(): %s\n", tmq_err2str(code));
return;
}
void basic_consume_loop(tmq_t* tmq) {
int32_t totalRows = 0;
int32_t msgCnt = 0;
int32_t timeout = 5000;
@ -237,8 +227,8 @@ void basic_consume_loop(tmq_t* tmq, tmq_list_t* topicList) {
msgCnt++;
totalRows += msg_process(tmqmsg);
taos_free_result(tmqmsg);
/*} else {*/
/*break;*/
} else {
break;
}
}
@ -267,14 +257,12 @@ int main(int argc, char* argv[]) {
return -1;
}
basic_consume_loop(tmq, topic_list);
code = tmq_unsubscribe(tmq);
if (code) {
fprintf(stderr, "%% Failed to unsubscribe: %s\n", tmq_err2str(code));
} else {
fprintf(stderr, "%% unsubscribe\n");
if ((code = tmq_subscribe(tmq, topic_list))) {
fprintf(stderr, "%% Failed to tmq_subscribe(): %s\n", tmq_err2str(code));
}
tmq_list_destroy(topic_list);
basic_consume_loop(tmq);
code = tmq_consumer_close(tmq);
if (code) {

View File

@ -131,10 +131,10 @@ DLL_EXPORT int taos_options(TSDB_OPTION option, const void *arg, ...);
DLL_EXPORT setConfRet taos_set_config(const char *config);
DLL_EXPORT int taos_init(void);
DLL_EXPORT TAOS *taos_connect(const char *ip, const char *user, const char *pass, const char *db, uint16_t port);
DLL_EXPORT TAOS *taos_connect_auth(const char *ip, const char *user, const char *auth, const char *db, uint16_t port);
DLL_EXPORT void taos_close(TAOS *taos);
DLL_EXPORT TAOS *taos_connect_auth(const char *ip, const char *user, const char *auth, const char *db, uint16_t port);
DLL_EXPORT void taos_close(TAOS *taos);
const char *taos_data_type(int type);
const char *taos_data_type(int type);
DLL_EXPORT TAOS_STMT *taos_stmt_init(TAOS *taos);
DLL_EXPORT int taos_stmt_prepare(TAOS_STMT *stmt, const char *sql, unsigned long length);
@ -244,33 +244,37 @@ DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_comm
/* -------------------------TMQ MSG HANDLE INTERFACE---------------------- */
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
/* ------------------------------ TAOSX -----------------------------------*/
// note: following apis are unstable
enum tmq_res_t {
TMQ_RES_INVALID = -1,
TMQ_RES_DATA = 1,
TMQ_RES_TABLE_META = 2,
};
typedef struct tmq_raw_data{
void* raw;
typedef struct tmq_raw_data {
void *raw;
uint32_t raw_len;
uint16_t raw_type;
} tmq_raw_data;
typedef enum tmq_res_t tmq_res_t;
DLL_EXPORT tmq_res_t tmq_get_res_type(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_raw(TAOS_RES *res, tmq_raw_data *raw);
DLL_EXPORT int32_t tmq_write_raw(TAOS *taos, tmq_raw_data raw);
DLL_EXPORT int taos_write_raw_block(TAOS *taos, int numOfRows, char *pData, const char* tbname);
DLL_EXPORT void tmq_free_raw(tmq_raw_data raw);
DLL_EXPORT char *tmq_get_json_meta(TAOS_RES *res); // Returning null means error. Returned result need to be freed by tmq_free_json_meta
DLL_EXPORT void tmq_free_json_meta(char* jsonMeta);
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_table_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_table_name(TAOS_RES *res);
DLL_EXPORT tmq_res_t tmq_get_res_type(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_raw(TAOS_RES *res, tmq_raw_data *raw);
DLL_EXPORT int32_t tmq_write_raw(TAOS *taos, tmq_raw_data raw);
DLL_EXPORT int taos_write_raw_block(TAOS *taos, int numOfRows, char *pData, const char *tbname);
DLL_EXPORT void tmq_free_raw(tmq_raw_data raw);
// Returning null means error. Returned result need to be freed by tmq_free_json_meta
DLL_EXPORT char *tmq_get_json_meta(TAOS_RES *res);
DLL_EXPORT void tmq_free_json_meta(char *jsonMeta);
/* ------------------------------ TMQ END -------------------------------- */
/* ---------------------------- TAOSX END -------------------------------- */
typedef enum {
TSDB_SRV_STATUS_UNAVAILABLE = 0,

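A hedged sketch of how the reorganized TAOSX helpers above might be used on a polled message follows; only functions declared in this header are called, while handle_meta_msg and the printf text are invented for illustration.

```c
#include <stdio.h>
#include "taos.h"

// Illustrative only: branch on the message type and dump JSON meta for
// table-meta messages; data messages would be handled elsewhere.
static void handle_meta_msg(TAOS_RES *msg) {
  if (tmq_get_res_type(msg) == TMQ_RES_TABLE_META) {
    char *jsonMeta = tmq_get_json_meta(msg);  // NULL means error
    if (jsonMeta != NULL) {
      printf("meta from topic %s, db %s, vgroup %d: %s\n", tmq_get_topic_name(msg),
             tmq_get_db_name(msg), tmq_get_vgroup_id(msg), jsonMeta);
      tmq_free_json_meta(jsonMeta);  // must be freed by the caller, per the comment above
    }
  }
}
```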
View File

@ -1678,9 +1678,10 @@ typedef struct {
int32_t code;
} STaskDropRsp;
#define STREAM_TRIGGER_AT_ONCE 1
#define STREAM_TRIGGER_WINDOW_CLOSE 2
#define STREAM_TRIGGER_MAX_DELAY 3
#define STREAM_TRIGGER_AT_ONCE 1
#define STREAM_TRIGGER_WINDOW_CLOSE 2
#define STREAM_TRIGGER_MAX_DELAY 3
#define STREAM_DEFAULT_IGNORE_EXPIRED 0
typedef struct {
char name[TSDB_STREAM_FNAME_LEN];

View File

@ -359,7 +359,7 @@ typedef struct SStreamOptions {
int8_t triggerType;
SNode* pDelay;
SNode* pWatermark;
bool ignoreExpired;
int8_t ignoreExpired;
} SStreamOptions;
typedef struct SCreateStreamStmt {

View File

@ -213,6 +213,8 @@ typedef struct SWindowLogicNode {
typedef struct SFillLogicNode {
SLogicNode node;
EFillMode mode;
SNodeList* pFillExprs;
SNodeList* pNotFillExprs;
SNode* pWStartTs;
SNode* pValues; // SNodeListNode
STimeWindow timeRange;
@ -440,9 +442,10 @@ typedef SIntervalPhysiNode SStreamSemiIntervalPhysiNode;
typedef struct SFillPhysiNode {
SPhysiNode node;
EFillMode mode;
SNodeList* pFillExprs;
SNodeList* pNotFillExprs;
SNode* pWStartTs; // SColumnNode
SNode* pValues; // SNodeListNode
SNodeList* pTargets;
STimeWindow timeRange;
EOrder inputTsOrder;
} SFillPhysiNode;

View File

@ -53,7 +53,13 @@ typedef struct SExprNode {
bool orderAlias;
} SExprNode;
typedef enum EColumnType { COLUMN_TYPE_COLUMN = 1, COLUMN_TYPE_TAG, COLUMN_TYPE_TBNAME } EColumnType;
typedef enum EColumnType {
COLUMN_TYPE_COLUMN = 1,
COLUMN_TYPE_TAG,
COLUMN_TYPE_TBNAME,
COLUMN_TYPE_WINDOW_PC,
COLUMN_TYPE_GROUP_KEY
} EColumnType;
typedef struct SColumnNode {
SExprNode node; // QUERY_NODE_COLUMN
@ -293,6 +299,7 @@ typedef enum ESqlClause {
SQL_CLAUSE_WHERE,
SQL_CLAUSE_PARTITION_BY,
SQL_CLAUSE_WINDOW,
SQL_CLAUSE_FILL,
SQL_CLAUSE_GROUP_BY,
SQL_CLAUSE_HAVING,
SQL_CLAUSE_DISTINCT,

View File

@ -53,6 +53,8 @@ typedef struct SParseContext {
int8_t schemalessType;
const char* svrVer;
bool nodeOffline;
SArray* pTableMetaPos; // sql table pos => catalog data pos
SArray* pTableVgroupPos; // sql table pos => catalog data pos
} SParseContext;
int32_t qParseSql(SParseContext* pCxt, SQuery** pQuery);
@ -84,8 +86,8 @@ int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBu
int32_t rowNum);
int32_t qBuildStmtColFields(void* pDataBlock, int32_t* fieldNum, TAOS_FIELD_E** fields);
int32_t qBuildStmtTagFields(void* pBlock, void* boundTags, int32_t* fieldNum, TAOS_FIELD_E** fields);
int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, const char* sTableName, char* tName, TAOS_MULTI_BIND* bind,
char* msgBuf, int32_t msgBufLen);
int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, const char* sTableName, char* tName,
TAOS_MULTI_BIND* bind, char* msgBuf, int32_t msgBufLen);
void destroyBoundColumnInfo(void* pBoundInfo);
int32_t qCreateSName(SName* pName, const char* pTableName, int32_t acctId, char* dbName, char* msgBuf,
int32_t msgBufLen);

View File

@ -275,12 +275,8 @@ typedef struct SStreamTask {
int32_t nodeId;
SEpSet epSet;
// used for task source and sink,
// while task agg should have processedVer for each child
int64_t recoverSnapVer;
int64_t startVer;
int64_t checkpointVer;
int64_t processedVer;
// children info
SArray* childEpInfo; // SArray<SStreamChildEpInfo*>

View File

@ -25,33 +25,34 @@ extern "C" {
#endif
typedef struct SUpdateInfo {
SArray *pTsBuckets;
uint64_t numBuckets;
SArray *pTsSBFs;
uint64_t numSBFs;
int64_t interval;
int64_t watermark;
TSKEY minTS;
SScalableBf* pCloseWinSBF;
SHashObj* pMap;
STimeWindow scanWindow;
uint64_t scanGroupId;
uint64_t maxVersion;
SArray *pTsBuckets;
uint64_t numBuckets;
SArray *pTsSBFs;
uint64_t numSBFs;
int64_t interval;
int64_t watermark;
TSKEY minTS;
SScalableBf *pCloseWinSBF;
SHashObj *pMap;
STimeWindow scanWindow;
uint64_t scanGroupId;
uint64_t maxVersion;
} SUpdateInfo;
SUpdateInfo *updateInfoInitP(SInterval* pInterval, int64_t watermark);
SUpdateInfo *updateInfoInitP(SInterval *pInterval, int64_t watermark);
SUpdateInfo *updateInfoInit(int64_t interval, int32_t precision, int64_t watermark);
bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts);
void updateInfoSetScanRange(SUpdateInfo *pInfo, STimeWindow* pWin, uint64_t groupId, uint64_t version);
bool updateInfoIgnore(SUpdateInfo *pInfo, STimeWindow* pWin, uint64_t groupId, uint64_t version);
void updateInfoDestroy(SUpdateInfo *pInfo);
void updateInfoAddCloseWindowSBF(SUpdateInfo *pInfo);
void updateInfoDestoryColseWinSBF(SUpdateInfo *pInfo);
int32_t updateInfoSerialize(void *buf, int32_t bufLen, const SUpdateInfo *pInfo);
int32_t updateInfoDeserialize(void *buf, int32_t bufLen, SUpdateInfo *pInfo);
bool updateInfoIsUpdated(SUpdateInfo *pInfo, uint64_t tableId, TSKEY ts);
bool updateInfoIsTableInserted(SUpdateInfo *pInfo, int64_t tbUid);
void updateInfoSetScanRange(SUpdateInfo *pInfo, STimeWindow *pWin, uint64_t groupId, uint64_t version);
bool updateInfoIgnore(SUpdateInfo *pInfo, STimeWindow *pWin, uint64_t groupId, uint64_t version);
void updateInfoDestroy(SUpdateInfo *pInfo);
void updateInfoAddCloseWindowSBF(SUpdateInfo *pInfo);
void updateInfoDestoryColseWinSBF(SUpdateInfo *pInfo);
int32_t updateInfoSerialize(void *buf, int32_t bufLen, const SUpdateInfo *pInfo);
int32_t updateInfoDeserialize(void *buf, int32_t bufLen, SUpdateInfo *pInfo);
#ifdef __cplusplus
}
#endif
#endif /* ifndef _TSTREAMUPDATE_H_ */
#endif /* ifndef _TSTREAMUPDATE_H_ */

View File

@ -30,6 +30,7 @@ extern bool gRaftDetailLog;
#define SYNC_SPEED_UP_HB_TIMER 400
#define SYNC_SPEED_UP_AFTER_MS (1000 * 20)
#define SYNC_SLOW_DOWN_RANGE 100
#define SYNC_MAX_READ_RANGE 10
#define SYNC_MAX_BATCH_SIZE 1
#define SYNC_INDEX_BEGIN 0
@ -210,9 +211,12 @@ void syncStop(int64_t rid);
int32_t syncSetStandby(int64_t rid);
ESyncState syncGetMyRole(int64_t rid);
bool syncIsReady(int64_t rid);
bool syncIsReadyForRead(int64_t rid);
const char* syncGetMyRoleStr(int64_t rid);
bool syncRestoreFinish(int64_t rid);
SyncTerm syncGetMyTerm(int64_t rid);
SyncIndex syncGetLastIndex(int64_t rid);
SyncIndex syncGetCommitIndex(int64_t rid);
SyncGroupId syncGetVgId(int64_t rid);
void syncGetEpSet(int64_t rid, SEpSet* pEpSet);
void syncGetRetryEpSet(int64_t rid, SEpSet* pEpSet);

View File

@ -622,6 +622,7 @@ int32_t* taosGetErrno();
//tmq
#define TSDB_CODE_TMQ_INVALID_MSG TAOS_DEF_ERROR_CODE(0, 0x4000)
#define TSDB_CODE_TMQ_CONSUMER_MISMATCH TAOS_DEF_ERROR_CODE(0, 0x4001)
#define TSDB_CODE_TMQ_CONSUMER_CLOSED TAOS_DEF_ERROR_CODE(0, 0x4002)
#ifdef __cplusplus
}

View File

@ -359,7 +359,7 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF
#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD (24 * 60 * 60 * 2)
#define TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD (24 * 60 * 60 * 4)
#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_SIZE -1
#define TSDB_DB_MIN_WAL_ROLL_PERIOD 0

View File

@ -29,11 +29,11 @@ int32_t taosOpenRef(int32_t max, void (*fp)(void *));
// close the reference set, refId is the return value by taosOpenRef
// return 0 if success. On error, -1 is returned, and terrno is set appropriately
int32_t taosCloseRef(int32_t refId);
int32_t taosCloseRef(int32_t rsetId);
// add ref, p is the pointer to resource or pointer ID
// return Reference ID(rid) allocated. On error, -1 is returned, and terrno is set appropriately
int64_t taosAddRef(int32_t refId, void *p);
int64_t taosAddRef(int32_t rsetId, void *p);
// remove ref, rid is the reference ID returned by taosAddRef
// return 0 if success. On error, -1 is returned, and terrno is set appropriately

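As a quick illustration of the reference-set API documented above: a hypothetical sketch, assuming this header is tref.h and using taosMemory* helpers from os.h; the payload and its destructor are invented.

```c
#include "os.h"
#include "tref.h"

static void freePayload(void *p) { taosMemoryFree(p); }

void refSetSketch(void) {
  int32_t rsetId  = taosOpenRef(1024, freePayload);  // open a set of up to 1024 refs
  void   *payload = taosMemoryMalloc(128);
  int64_t rid     = taosAddRef(rsetId, payload);     // returns the reference ID, -1 on error
  (void)rid;  // the rid is what the acquire/release/remove helpers in this header take
  taosCloseRef(rsetId);                              // close the set when done
}
```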
View File

@ -689,11 +689,11 @@ int32_t scheduleQuery(SRequestObj* pRequest, SQueryPlan* pDag, SArray* pNodeList
TDMT_VND_CREATE_TABLE == pRequest->type) {
pRequest->body.resInfo.numOfRows = res.numOfRows;
if (TDMT_VND_SUBMIT == pRequest->type) {
STscObj *pTscObj = pRequest->pTscObj;
SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t *)&pActivity->numOfInsertRows, res.numOfRows);
STscObj* pTscObj = pRequest->pTscObj;
SAppClusterSummary* pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t*)&pActivity->numOfInsertRows, res.numOfRows);
}
schedulerFreeJob(&pRequest->body.queryJob, 0);
}
@ -800,8 +800,8 @@ int32_t handleQueryExecRsp(SRequestObj* pRequest) {
break;
}
case TDMT_VND_SUBMIT: {
atomic_add_fetch_64((int64_t *)&pAppInfo->summary.insertBytes, pRes->numOfBytes);
atomic_add_fetch_64((int64_t*)&pAppInfo->summary.insertBytes, pRes->numOfBytes);
code = handleSubmitExecRes(pRequest, pRes->res, pCatalog, &epset);
break;
}
@ -832,9 +832,9 @@ void schedulerExecCb(SExecResult* pResult, void* param, int32_t code) {
if (pResult) {
pRequest->body.resInfo.numOfRows = pResult->numOfRows;
if (TDMT_VND_SUBMIT == pRequest->type) {
STscObj *pTscObj = pRequest->pTscObj;
SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t *)&pActivity->numOfInsertRows, pResult->numOfRows);
STscObj* pTscObj = pRequest->pTscObj;
SAppClusterSummary* pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t*)&pActivity->numOfInsertRows, pResult->numOfRows);
}
}
@ -877,14 +877,14 @@ SRequestObj* launchQueryImpl(SRequestObj* pRequest, SQuery* pQuery, bool keepQue
if (pQuery->pRoot) {
pRequest->stmtType = pQuery->pRoot->type;
}
if (pQuery->pRoot && !pRequest->inRetry) {
STscObj *pTscObj = pRequest->pTscObj;
SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary;
STscObj* pTscObj = pRequest->pTscObj;
SAppClusterSummary* pActivity = &pTscObj->pAppInfo->summary;
if (QUERY_NODE_VNODE_MODIF_STMT == pQuery->pRoot->type) {
atomic_add_fetch_64((int64_t *)&pActivity->numOfInsertsReq, 1);
atomic_add_fetch_64((int64_t*)&pActivity->numOfInsertsReq, 1);
} else if (QUERY_NODE_SELECT_STMT == pQuery->pRoot->type) {
atomic_add_fetch_64((int64_t *)&pActivity->numOfQueryReq, 1);
atomic_add_fetch_64((int64_t*)&pActivity->numOfQueryReq, 1);
}
}
@ -1467,9 +1467,9 @@ void* doFetchRows(SRequestObj* pRequest, bool setupOneRowPtr, bool convertUcs4)
tscDebug("0x%" PRIx64 " fetch results, numOfRows:%d total Rows:%" PRId64 ", complete:%d, reqId:0x%" PRIx64,
pRequest->self, pResInfo->numOfRows, pResInfo->totalRows, pResInfo->completed, pRequest->requestId);
STscObj *pTscObj = pRequest->pTscObj;
SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t *)&pActivity->fetchBytes, pRequest->body.resInfo.payloadLen);
STscObj* pTscObj = pRequest->pTscObj;
SAppClusterSummary* pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t*)&pActivity->fetchBytes, pRequest->body.resInfo.payloadLen);
if (pResultInfo->numOfRows == 0) {
return NULL;
@ -2006,7 +2006,7 @@ int32_t transferTableNameList(const char* tbList, int32_t acctId, char* dbName,
bool inEscape = false;
int32_t code = 0;
void *pIter = NULL;
void* pIter = NULL;
int32_t vIdx = 0;
int32_t vPos[2];

View File

@ -192,6 +192,7 @@ void taos_free_result(TAOS_RES *res) {
if (pRsp->rsp.withSchema) taosArrayDestroyP(pRsp->rsp.blockSchema, (FDelete)tDeleteSSchemaWrapper);
pRsp->resInfo.pRspMsg = NULL;
doFreeReqResultInfo(&pRsp->resInfo);
taosMemoryFree(pRsp);
} else if (TD_RES_TMQ_META(res)) {
SMqMetaRspObj *pRspObj = (SMqMetaRspObj *)res;
taosMemoryFree(pRspObj->metaRsp.metaRsp);

source/client/src/taosx.c (new file, 1628 lines): file diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@ -8,7 +8,7 @@ AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} SOURCE_LIST)
ADD_EXECUTABLE(clientTest clientTests.cpp)
TARGET_LINK_LIBRARIES(
clientTest
PUBLIC os util common transport parser catalog scheduler function gtest taos_static qcom executor
os util common transport parser catalog scheduler gtest taos_static qcom executor function
)
ADD_EXECUTABLE(tmqTest tmqTest.cpp)

View File

@ -87,10 +87,11 @@ typedef struct {
typedef struct {
tsem_t syncSem;
int64_t sync;
bool standby;
SReplica replica;
int32_t errCode;
int32_t transId;
SRWLatch lock;
int8_t standby;
int8_t leaderTransferFinish;
} SSyncMgmt;

View File

@ -742,7 +742,9 @@ static int32_t mndProcessAlterMnodeReq(SRpcMsg *pReq) {
return code;
} else {
pMgmt->errCode = 0;
taosWLockLatch(&pMgmt->lock);
pMgmt->transId = -1;
taosWUnLockLatch(&pMgmt->lock);
tsem_wait(&pMgmt->syncSem);
mInfo("alter mnode sync result:0x%x %s", pMgmt->errCode, tstrerror(pMgmt->errCode));
terrno = pMgmt->errCode;

View File

@ -238,7 +238,7 @@ static int32_t mndProcessRetrieveSysTableReq(SRpcMsg *pReq) {
} else {
memcpy(pReq->info.conn.user, TSDB_DEFAULT_USER, strlen(TSDB_DEFAULT_USER) + 1);
}
if (mndCheckShowPrivilege(pMnode, pReq->info.conn.user, pShow->type, retrieveReq.db) != 0) {
if (retrieveReq.db[0] && mndCheckShowPrivilege(pMnode, pReq->info.conn.user, pShow->type, retrieveReq.db) != 0) {
return -1;
}

View File

@ -60,18 +60,24 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
sdbSetApplyInfo(pMnode->pSdb, cbMeta.index, cbMeta.term, cbMeta.lastConfigIndex);
}
taosWLockLatch(&pMgmt->lock);
if (transId <= 0) {
taosWUnLockLatch(&pMgmt->lock);
mError("trans:%d, invalid commit msg", transId);
} else if (transId == pMgmt->transId) {
if (pMgmt->errCode != 0) {
mError("trans:%d, failed to propose since %s", transId, tstrerror(pMgmt->errCode));
mError("trans:%d, failed to propose since %s, post sem", transId, tstrerror(pMgmt->errCode));
} else {
mInfo("trans:%d, is proposed and post sem", transId, tstrerror(pMgmt->errCode));
}
pMgmt->transId = 0;
taosWUnLockLatch(&pMgmt->lock);
tsem_post(&pMgmt->syncSem);
} else {
taosWUnLockLatch(&pMgmt->lock);
STrans *pTrans = mndAcquireTrans(pMnode, transId);
if (pTrans != NULL) {
mDebug("trans:%d, execute in mnode which not leader", transId);
mInfo("trans:%d, execute in mnode which not leader", transId);
mndTransExecute(pMnode, pTrans);
mndReleaseTrans(pMnode, pTrans);
// sdbWriteFile(pMnode->pSdb, SDB_WRITE_DELTA);
@ -115,13 +121,18 @@ void mndReConfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReConfigCbMeta cbM
mInfo("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64, pMgmt->transId,
cbMeta.code, cbMeta.index, cbMeta.term);
taosWLockLatch(&pMgmt->lock);
if (pMgmt->transId == -1) {
if (pMgmt->errCode != 0) {
mError("trans:-1, failed to propose sync reconfig since %s", tstrerror(pMgmt->errCode));
mError("trans:-1, failed to propose sync reconfig since %s, post sem", tstrerror(pMgmt->errCode));
} else {
mInfo("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64 " post sem",
pMgmt->transId, cbMeta.code, cbMeta.index, cbMeta.term);
}
pMgmt->transId = 0;
tsem_post(&pMgmt->syncSem);
}
taosWUnLockLatch(&pMgmt->lock);
}
int32_t mndSnapshotStartRead(struct SSyncFSM *pFsm, void *pParam, void **ppReader) {
@ -168,14 +179,19 @@ void mndLeaderTransfer(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cb
static void mndBecomeFollower(struct SSyncFSM *pFsm) {
SMnode *pMnode = pFsm->data;
mDebug("vgId:1, become follower");
mDebug("vgId:1, become follower and post sem");
// clear old leader resource
taosWLockLatch(&pMnode->syncMgmt.lock);
if (pMnode->syncMgmt.transId != 0) {
pMnode->syncMgmt.transId = 0;
tsem_post(&pMnode->syncMgmt.syncSem);
}
taosWUnLockLatch(&pMnode->syncMgmt.lock);
}
static void mndBecomeLeader(struct SSyncFSM *pFsm) {
SMnode *pMnode = pFsm->data;
mDebug("vgId:1, become leader");
SMnode *pMnode = pFsm->data;
}
SSyncFSM *mndSyncMakeFsm(SMnode *pMnode) {
@ -202,6 +218,8 @@ SSyncFSM *mndSyncMakeFsm(SMnode *pMnode) {
int32_t mndInitSync(SMnode *pMnode) {
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
taosInitRWLatch(&pMgmt->lock);
pMgmt->transId = 0;
SSyncInfo syncInfo = {.vgId = 1, .FpSendMsg = mndSyncSendMsg, .FpEqMsg = mndSyncEqMsg};
snprintf(syncInfo.path, sizeof(syncInfo.path), "%s%ssync", pMnode->path, TD_DIRSEP);
@ -230,8 +248,12 @@ int32_t mndInitSync(SMnode *pMnode) {
// decrease election timer
setPingTimerMS(pMgmt->sync, 5000);
setElectTimerMS(pMgmt->sync, 600);
setHeartbeatTimerMS(pMgmt->sync, 300);
setElectTimerMS(pMgmt->sync, 3000);
setHeartbeatTimerMS(pMgmt->sync, 500);
/*
setElectTimerMS(pMgmt->sync, 600);
setHeartbeatTimerMS(pMgmt->sync, 300);
*/
mDebug("mnode-sync is opened, id:%" PRId64, pMgmt->sync);
return 0;
@ -254,11 +276,21 @@ int32_t mndSyncPropose(SMnode *pMnode, SSdbRaw *pRaw, int32_t transId) {
memcpy(req.pCont, pRaw, req.contLen);
pMgmt->errCode = 0;
pMgmt->transId = transId;
mTrace("trans:%d, will be proposed", pMgmt->transId);
taosWLockLatch(&pMgmt->lock);
if (pMgmt->transId != 0) {
mError("trans:%d, can't be proposed since trans:%d already waiting for confirm", transId, pMgmt->transId);
taosWUnLockLatch(&pMgmt->lock);
terrno = TSDB_CODE_APP_NOT_READY;
return -1;
} else {
pMgmt->transId = transId;
mDebug("trans:%d, will be proposed", pMgmt->transId);
taosWUnLockLatch(&pMgmt->lock);
}
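// Note (added for clarity): pMgmt->transId doubles as a "propose in flight" flag here;
// a non-zero value means an earlier proposal is still waiting on syncSem, so this one
// is rejected with TSDB_CODE_APP_NOT_READY instead of queueing behind it.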
const bool isWeak = false;
int32_t code = syncPropose(pMgmt->sync, &req, isWeak);
if (code == 0) {
tsem_wait(&pMgmt->syncSem);
} else if (code == -1 && terrno == TSDB_CODE_SYN_NOT_LEADER) {
@ -286,10 +318,12 @@ void mndSyncStart(SMnode *pMnode) {
}
void mndSyncStop(SMnode *pMnode) {
taosWLockLatch(&pMnode->syncMgmt.lock);
if (pMnode->syncMgmt.transId != 0) {
pMnode->syncMgmt.transId = 0;
tsem_post(&pMnode->syncMgmt.syncSem);
}
taosWUnLockLatch(&pMnode->syncMgmt.lock);
}
bool mndIsMaster(SMnode *pMnode) {

View File

@ -129,19 +129,19 @@ typedef struct STsdbReader STsdbReader;
#define LASTROW_RETRIEVE_TYPE_ALL 0x1
#define LASTROW_RETRIEVE_TYPE_SINGLE 0x2
int32_t tsdbSetTableId(STsdbReader *pReader, int64_t uid);
int32_t tsdbReaderOpen(SVnode *pVnode, SQueryTableDataCond *pCond, SArray *pTableList, STsdbReader **ppReader,
const char *idstr);
void tsdbReaderClose(STsdbReader *pReader);
bool tsdbNextDataBlock(STsdbReader *pReader);
void tsdbRetrieveDataBlockInfo(STsdbReader *pReader, SDataBlockInfo *pDataBlockInfo);
int32_t tsdbRetrieveDatablockSMA(STsdbReader *pReader, SColumnDataAgg ***pBlockStatis, bool *allHave);
SArray *tsdbRetrieveDataBlock(STsdbReader *pTsdbReadHandle, SArray *pColumnIdList);
int32_t tsdbReaderReset(STsdbReader *pReader, SQueryTableDataCond *pCond);
int32_t tsdbGetFileBlocksDistInfo(STsdbReader *pReader, STableBlockDistInfo *pTableBlockInfo);
int64_t tsdbGetNumOfRowsInMemTable(STsdbReader *pHandle);
void *tsdbGetIdx(SMeta *pMeta);
void *tsdbGetIvtIdx(SMeta *pMeta);
int32_t tsdbSetTableId(STsdbReader *pReader, int64_t uid);
int32_t tsdbReaderOpen(SVnode *pVnode, SQueryTableDataCond *pCond, SArray *pTableList, STsdbReader **ppReader,
const char *idstr);
void tsdbReaderClose(STsdbReader *pReader);
bool tsdbNextDataBlock(STsdbReader *pReader);
void tsdbRetrieveDataBlockInfo(STsdbReader *pReader, SDataBlockInfo *pDataBlockInfo);
int32_t tsdbRetrieveDatablockSMA(STsdbReader *pReader, SColumnDataAgg ***pBlockStatis, bool *allHave);
SArray *tsdbRetrieveDataBlock(STsdbReader *pTsdbReadHandle, SArray *pColumnIdList);
int32_t tsdbReaderReset(STsdbReader *pReader, SQueryTableDataCond *pCond);
int32_t tsdbGetFileBlocksDistInfo(STsdbReader *pReader, STableBlockDistInfo *pTableBlockInfo);
int64_t tsdbGetNumOfRowsInMemTable(STsdbReader *pHandle);
void *tsdbGetIdx(SMeta *pMeta);
void *tsdbGetIvtIdx(SMeta *pMeta);
uint64_t getReaderMaxVersion(STsdbReader *pReader);
int32_t tsdbLastRowReaderOpen(void *pVnode, int32_t type, SArray *pTableIdList, int32_t numOfCols, void **pReader);

View File

@ -80,7 +80,7 @@ int32_t vnodeQueryOpen(SVnode* pVnode);
void vnodeQueryClose(SVnode* pVnode);
int32_t vnodeGetTableMeta(SVnode* pVnode, SRpcMsg* pMsg, bool direct);
int vnodeGetTableCfg(SVnode* pVnode, SRpcMsg* pMsg, bool direct);
int32_t vnodeGetBatchMeta(SVnode *pVnode, SRpcMsg *pMsg);
int32_t vnodeGetBatchMeta(SVnode* pVnode, SRpcMsg* pMsg);
// vnodeCommit.c
int32_t vnodeBegin(SVnode* pVnode);
@ -98,6 +98,8 @@ void vnodeSyncStart(SVnode* pVnode);
void vnodeSyncClose(SVnode* pVnode);
void vnodeRedirectRpcMsg(SVnode* pVnode, SRpcMsg* pMsg);
bool vnodeIsLeader(SVnode* pVnode);
bool vnodeIsReadyForRead(SVnode* pVnode);
bool vnodeIsRoleLeader(SVnode* pVnode);
#ifdef __cplusplus
}

View File

@ -144,6 +144,7 @@ int32_t tsdbDeleteTableData(STsdb* pTsdb, int64_t version, tb_uid_t suid, tb
STsdbReader tsdbQueryCacheLastT(STsdb* tsdb, SQueryTableDataCond* pCond, STableListInfo* tableList, uint64_t qId,
void* pMemRef);
int32_t tsdbSetKeepCfg(STsdb* pTsdb, STsdbCfg* pCfg);
int32_t tsdbGetStbIdList(SMeta* pMeta, int64_t suid, SArray* list);
// tq
int tqInit();
@ -169,10 +170,9 @@ int32_t tqProcessTaskDispatchRsp(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessTaskRecoverRsp(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessTaskRetrieveReq(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessTaskRetrieveRsp(STQ* pTq, SRpcMsg* pMsg);
int32_t tsdbGetStbIdList(SMeta* pMeta, int64_t suid, SArray* list);
SSubmitReq* tdBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchema* pSchema, bool createTb, int64_t suid,
const char* stbFullName, int32_t vgId, SBatchDeleteReq* pDeleteReq);
SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchema* pSchema, bool createTb, int64_t suid,
const char* stbFullName, SBatchDeleteReq* pDeleteReq);
// sma
int32_t smaInit();
@ -308,7 +308,8 @@ struct SVnode {
SSink* pSink;
tsem_t canCommit;
int64_t sync;
int32_t blockCount;
TdThreadMutex lock;
bool blocked;
bool restored;
tsem_t syncSem;
SQHandle* pQuery;

View File

@ -298,14 +298,14 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
tdbTbcClose(pUidIdxc);
terrno = TSDB_CODE_TDB_STB_NOT_EXIST;
// ASSERT(0);
return -1;
}
ret = tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData);
if (ret < 0) {
tdbTbcClose(pUidIdxc);
terrno = TSDB_CODE_TDB_STB_NOT_EXIST;
// ASSERT(0);
return -1;
}

View File

@ -201,9 +201,8 @@ int32_t tdProcessTSmaInsertImpl(SSma *pSma, int64_t indexUid, const char *msg) {
}
SBatchDeleteReq deleteReq;
SSubmitReq *pSubmitReq =
tdBlockToSubmit(pSma->pVnode, (const SArray *)msg, pTsmaStat->pTSchema, true, pTsmaStat->pTSma->dstTbUid,
pTsmaStat->pTSma->dstTbName, pTsmaStat->pTSma->dstVgId, &deleteReq);
SSubmitReq *pSubmitReq = tqBlockToSubmit(pSma->pVnode, (const SArray *)msg, pTsmaStat->pTSchema, true,
pTsmaStat->pTSma->dstTbUid, pTsmaStat->pTSma->dstTbName, &deleteReq);
if (!pSubmitReq) {
smaError("vgId:%d, failed to gen submit blk while tsma insert for smaIndex %" PRIi64 " since %s", SMA_VID(pSma),

View File

@ -14,6 +14,7 @@
*/
#include "tq.h"
#include "vnd.h"
#if 0
void tqTmrRspFunc(void* param, void* tmrId) {
@ -212,9 +213,7 @@ int32_t tqPushMsgNew(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_
#endif
int tqPushMsg(STQ* pTq, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver) {
walApplyVer(pTq->pVnode->pWal, ver);
if (msgType == TDMT_VND_SUBMIT) {
if (vnodeIsRoleLeader(pTq->pVnode) && msgType == TDMT_VND_SUBMIT) {
if (taosHashGetSize(pTq->pStreamMeta->pTasks) == 0) return 0;
void* data = taosMemoryMalloc(msgLen);

View File

@ -25,8 +25,7 @@ int32_t tdBuildDeleteReq(SVnode* pVnode, const char* stbFullName, const SSDataBl
SColumnInfoData* pGidCol = taosArrayGet(pDataBlock->pDataBlock, GROUPID_COLUMN_INDEX);
for (int32_t row = 0; row < totRow; row++) {
int64_t ts = *(int64_t*)colDataGetData(pTsCol, row);
/*int64_t groupId = *(int64_t*)colDataGetData(pGidCol, row);*/
int64_t groupId = 0;
int64_t groupId = *(int64_t*)colDataGetData(pGidCol, row);
char* name = buildCtbNameByGroupId(stbFullName, groupId);
tqDebug("stream delete msg: groupId :%ld, name: %s", groupId, name);
SMetaReader mr = {0};
@ -49,8 +48,8 @@ int32_t tdBuildDeleteReq(SVnode* pVnode, const char* stbFullName, const SSDataBl
return 0;
}
SSubmitReq* tdBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchema* pTSchema, bool createTb,
int64_t suid, const char* stbFullName, int32_t vgId, SBatchDeleteReq* pDeleteReq) {
SSubmitReq* tqBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchema* pTSchema, bool createTb,
int64_t suid, const char* stbFullName, SBatchDeleteReq* pDeleteReq) {
SSubmitReq* ret = NULL;
SArray* schemaReqs = NULL;
SArray* schemaReqSz = NULL;
@ -153,7 +152,7 @@ SSubmitReq* tdBlockToSubmit(SVnode* pVnode, const SArray* pBlocks, const STSchem
// assign data
// TODO
ret = rpcMallocCont(cap);
ret->header.vgId = vgId;
ret->header.vgId = pVnode->config.vgId;
ret->length = sizeof(SSubmitReq);
ret->numOfBlocks = htonl(sz);
@ -234,8 +233,8 @@ void tqTableSink(SStreamTask* pTask, void* vnode, int64_t ver, void* data) {
ASSERT(pTask->tbSink.pTSchema);
deleteReq.deleteReqs = taosArrayInit(0, sizeof(SSingleDeleteReq));
SSubmitReq* pReq = tdBlockToSubmit(pVnode, pRes, pTask->tbSink.pTSchema, true, pTask->tbSink.stbUid,
pTask->tbSink.stbFullName, pVnode->config.vgId, &deleteReq);
SSubmitReq* pReq = tqBlockToSubmit(pVnode, pRes, pTask->tbSink.pTSchema, true, pTask->tbSink.stbUid,
pTask->tbSink.stbFullName, &deleteReq);
tqDebug("vgId:%d, task %d convert blocks over, put into write-queue", TD_VID(pVnode), pTask->taskId);

View File

@ -85,7 +85,8 @@ SVnode *vnodeOpen(const char *path, STfs *pTfs, SMsgCb msgCb) {
pVnode->state.commitTerm = info.state.commitTerm;
pVnode->pTfs = pTfs;
pVnode->msgCb = msgCb;
pVnode->blockCount = 0;
taosThreadMutexInit(&pVnode->lock, NULL);
pVnode->blocked = false;
tsem_init(&pVnode->syncSem, 0, 0);
tsem_init(&(pVnode->canCommit), 0, 1);
@ -199,6 +200,7 @@ void vnodeClose(SVnode *pVnode) {
tsem_destroy(&pVnode->syncSem);
taosThreadCondDestroy(&pVnode->poolNotEmpty);
taosThreadMutexDestroy(&pVnode->mutex);
taosThreadMutexDestroy(&pVnode->lock);
taosMemoryFree(pVnode);
}
}

View File

@ -247,6 +247,8 @@ int32_t vnodeProcessWriteMsg(SVnode *pVnode, SRpcMsg *pMsg, int64_t version, SRp
vTrace("vgId:%d, process %s request success, index:%" PRId64, TD_VID(pVnode), TMSG_INFO(pMsg->msgType), version);
walApplyVer(pVnode->pWal, version);
if (tqPushMsg(pVnode->pTq, pMsg->pCont, pMsg->contLen, pMsg->msgType, version) < 0) {
vError("vgId:%d, failed to push msg to TQ since %s", TD_VID(pVnode), tstrerror(terrno));
return -1;
@ -281,7 +283,7 @@ int32_t vnodePreprocessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg) {
int32_t vnodeProcessQueryMsg(SVnode *pVnode, SRpcMsg *pMsg) {
vTrace("message in vnode query queue is processing");
if ((pMsg->msgType == TDMT_SCH_QUERY) && !vnodeIsLeader(pVnode)) {
if ((pMsg->msgType == TDMT_SCH_QUERY) && !vnodeIsReadyForRead(pVnode)) {
vnodeRedirectRpcMsg(pVnode, pMsg);
return 0;
}
@ -305,7 +307,7 @@ int32_t vnodeProcessFetchMsg(SVnode *pVnode, SRpcMsg *pMsg, SQueueInfo *pInfo) {
vTrace("vgId:%d, msg:%p in fetch queue is processing", pVnode->config.vgId, pMsg);
if ((pMsg->msgType == TDMT_SCH_FETCH || pMsg->msgType == TDMT_VND_TABLE_META || pMsg->msgType == TDMT_VND_TABLE_CFG ||
pMsg->msgType == TDMT_VND_BATCH_META) &&
!vnodeIsLeader(pVnode)) {
!vnodeIsReadyForRead(pVnode)) {
vnodeRedirectRpcMsg(pVnode, pMsg);
return 0;
}

View File

@ -28,20 +28,28 @@ static inline bool vnodeIsMsgWeak(tmsg_t type) { return false; }
static inline void vnodeWaitBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
if (vnodeIsMsgBlock(pMsg->msgType)) {
const STraceId *trace = &pMsg->info.traceId;
vGTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
pVnode->blockCount = 1;
tsem_wait(&pVnode->syncSem);
taosThreadMutexLock(&pVnode->lock);
if (!pVnode->blocked) {
vGTrace("vgId:%d, msg:%p wait block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
pVnode->blocked = true;
taosThreadMutexUnlock(&pVnode->lock);
tsem_wait(&pVnode->syncSem);
} else {
taosThreadMutexUnlock(&pVnode->lock);
}
}
}
static inline void vnodePostBlockMsg(SVnode *pVnode, const SRpcMsg *pMsg) {
if (vnodeIsMsgBlock(pMsg->msgType)) {
const STraceId *trace = &pMsg->info.traceId;
if (pVnode->blockCount) {
taosThreadMutexLock(&pVnode->lock);
if (pVnode->blocked) {
vGTrace("vgId:%d, msg:%p post block, type:%s", pVnode->config.vgId, pMsg, TMSG_INFO(pMsg->msgType));
pVnode->blockCount = 0;
pVnode->blocked = false;
tsem_post(&pVnode->syncSem);
}
taosThreadMutexUnlock(&pVnode->lock);
}
}
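// Note (added for clarity): vnodeWaitBlockMsg parks the caller on syncSem only when the
// blocked flag was clear; vnodePostBlockMsg (and vnodeBecomeFollower below) clear the flag
// under pVnode->lock before posting, so the semaphore is posted at most once per block.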
@ -677,11 +685,25 @@ static void vnodeBecomeFollower(struct SSyncFSM *pFsm) {
vDebug("vgId:%d, become follower", pVnode->config.vgId);
// clear old leader resource
taosThreadMutexLock(&pVnode->lock);
if (pVnode->blocked) {
pVnode->blocked = false;
vDebug("vgId:%d, become follower and post block", pVnode->config.vgId);
tsem_post(&pVnode->syncSem);
}
taosThreadMutexUnlock(&pVnode->lock);
}
static void vnodeBecomeLeader(struct SSyncFSM *pFsm) {
SVnode *pVnode = pFsm->data;
vDebug("vgId:%d, become leader", pVnode->config.vgId);
// taosThreadMutexLock(&pVnode->lock);
// if (pVnode->blocked) {
// pVnode->blocked = false;
// tsem_post(&pVnode->syncSem);
// }
// taosThreadMutexUnlock(&pVnode->lock);
}
static SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
@ -742,6 +764,8 @@ void vnodeSyncStart(SVnode *pVnode) {
void vnodeSyncClose(SVnode *pVnode) { syncStop(pVnode->sync); }
bool vnodeIsRoleLeader(SVnode *pVnode) { return syncGetMyRole(pVnode->sync) == TAOS_SYNC_STATE_LEADER; }
bool vnodeIsLeader(SVnode *pVnode) {
if (!syncIsReady(pVnode->sync)) {
vDebug("vgId:%d, vnode not ready, state:%s, restore:%d", pVnode->config.vgId, syncGetMyRoleStr(pVnode->sync),
@ -757,3 +781,17 @@ bool vnodeIsLeader(SVnode *pVnode) {
return true;
}
bool vnodeIsReadyForRead(SVnode *pVnode) {
if (syncIsReady(pVnode->sync)) {
return true;
}
if (syncIsReadyForRead(pVnode->sync)) {
return true;
}
vDebug("vgId:%d, vnode not ready for read, state:%s, last:%ld, cmt:%ld", pVnode->config.vgId,
syncGetMyRoleStr(pVnode->sync), syncGetLastIndex(pVnode->sync), syncGetCommitIndex(pVnode->sync));
return false;
}

View File

@ -1105,6 +1105,7 @@ int32_t ctgHandleGetTbMetasRsp(SCtgTaskReq* tReq, int32_t reqType, const SDataBu
SName* pName = ctgGetFetchName(ctx->pNames, pFetch);
int32_t flag = pFetch->flag;
int32_t* vgId = &pFetch->vgId;
bool taskDone = false;
CTG_ERR_JRET(ctgProcessRspMsg(pMsgCtx->out, reqType, pMsg->pData, pMsg->len, rspCode, pMsgCtx->target));
@ -1250,6 +1251,7 @@ int32_t ctgHandleGetTbMetasRsp(SCtgTaskReq* tReq, int32_t reqType, const SDataBu
pOut->tbMeta = NULL;
if (0 == atomic_sub_fetch_32(&ctx->fetchNum, 1)) {
TSWAP(pTask->res, ctx->pResList);
taskDone = true;
}
_return:
@ -1264,10 +1266,11 @@ _return:
pRes->pRes = NULL;
if (0 == atomic_sub_fetch_32(&ctx->fetchNum, 1)) {
TSWAP(pTask->res, ctx->pResList);
taskDone = true;
}
}
if (pTask->res) {
if (pTask->res && taskDone) {
ctgHandleTaskEnd(pTask, code);
}
@ -1354,6 +1357,7 @@ int32_t ctgHandleGetTbHashsRsp(SCtgTaskReq* tReq, int32_t reqType, const SDataBu
SCatalog* pCtg = pTask->pJob->pCtg;
SCtgMsgCtx* pMsgCtx = CTG_GET_TASK_MSGCTX(pTask, tReq->msgIdx);
SCtgFetch* pFetch = taosArrayGet(ctx->pFetchs, tReq->msgIdx);
bool taskDone = false;
CTG_ERR_JRET(ctgProcessRspMsg(pMsgCtx->out, reqType, pMsg->pData, pMsg->len, rspCode, pMsgCtx->target));
@ -1377,6 +1381,7 @@ int32_t ctgHandleGetTbHashsRsp(SCtgTaskReq* tReq, int32_t reqType, const SDataBu
if (0 == atomic_sub_fetch_32(&ctx->fetchNum, 1)) {
TSWAP(pTask->res, ctx->pResList);
taskDone = true;
}
_return:
@ -1392,10 +1397,11 @@ _return:
if (0 == atomic_sub_fetch_32(&ctx->fetchNum, 1)) {
TSWAP(pTask->res, ctx->pResList);
taskDone = true;
}
}
if (pTask->res) {
if (pTask->res && taskDone) {
ctgHandleTaskEnd(pTask, code);
}

View File

@ -581,6 +581,20 @@ _return:
}
int32_t ctgChkAuthFromCache(SCatalog* pCtg, char* user, char* dbFName, AUTH_TYPE type, bool *inCache, bool *pass) {
char *p = strchr(dbFName, '.');
if (p) {
++p;
} else {
p = dbFName;
}
if (IS_SYS_DBNAME(p)) {
*inCache = true;
*pass = true;
ctgDebug("sysdb %s, pass", dbFName);
return TSDB_CODE_SUCCESS;
}
SCtgUserAuth *pUser = (SCtgUserAuth *)taosHashGet(pCtg->userCache, user, strlen(user));
if (NULL == pUser) {
ctgDebug("user not in cache, user:%s", user);

View File

@ -619,15 +619,20 @@ typedef struct SIndefOperatorInfo {
typedef struct SFillOperatorInfo {
struct SFillInfo* pFillInfo;
SSDataBlock* pRes;
SSDataBlock* pFinalRes;
int64_t totalInputRows;
void** p;
SSDataBlock* existNewGroupBlock;
bool multigroupResult;
STimeWindow win;
SNode* pCondition;
SArray* pColMatchColInfo;
int32_t primaryTsCol;
int32_t primarySrcSlotId;
uint64_t curGroupId; // current handled group id
SExprInfo* pExprInfo;
int32_t numOfExpr;
SExprInfo* pNotFillExprInfo;
int32_t numOfNotFillExpr;
} SFillOperatorInfo;
typedef struct SGroupbyOperatorInfo {

View File

@ -28,8 +28,7 @@ struct SSDataBlock;
typedef struct SFillColInfo {
SExprInfo *pExpr;
int16_t flag; // column flag: TAG COLUMN|NORMAL COLUMN
int16_t tagIndex; // index of current tag in SFillTagColInfo array list
bool notFillCol; // denote if this column needs fill operation
SVariant fillVal;
} SFillColInfo;
@ -46,25 +45,27 @@ typedef struct {
char* tagVal;
} SFillTagColInfo;
typedef struct {
int64_t key;
SArray* pRowVal;
} SRowVal;
typedef struct SFillInfo {
TSKEY start; // start timestamp
TSKEY end; // endKey for fill
TSKEY currentKey; // current active timestamp, the value may be changed during the fill procedure.
int32_t tsSlotId; // primary time stamp slot id
int32_t srcTsSlotId; // timestamp column id in the source data block.
int32_t order; // order [TSDB_ORDER_ASC|TSDB_ORDER_DESC]
int32_t type; // fill type
int32_t numOfRows; // number of rows in the input data block
int32_t index; // active row index
int32_t numOfTotal; // number of filled rows in one round
int32_t numOfCurrent; // number of filled rows in current results
int32_t numOfTags; // number of tags
int32_t numOfCols; // number of columns, including the tags columns
int32_t rowSize; // size of each row
SInterval interval;
SArray *prev;
SArray *next;
SRowVal prev;
SRowVal next;
SSDataBlock *pSrcBlock;
int32_t alloc; // data buffer size in rows
@ -79,10 +80,10 @@ int64_t getNumOfResultsAfterFillGap(SFillInfo* pFillInfo, int64_t ekey, int32_t
void taosFillSetStartInfo(struct SFillInfo* pFillInfo, int32_t numOfRows, TSKEY endKey);
void taosResetFillInfo(struct SFillInfo* pFillInfo, TSKEY startTimestamp);
void taosFillSetInputDataBlock(struct SFillInfo* pFillInfo, const struct SSDataBlock* pInput);
struct SFillColInfo* createFillColInfo(SExprInfo* pExpr, int32_t numOfOutput, const struct SNodeListNode* val);
struct SFillColInfo* createFillColInfo(SExprInfo* pExpr, int32_t numOfFillExpr, SExprInfo* pNotFillExpr, int32_t numOfNotFillCols, const struct SNodeListNode* val);
bool taosFillHasMoreResults(struct SFillInfo* pFillInfo);
SFillInfo* taosCreateFillInfo(TSKEY skey, int32_t numOfTags, int32_t capacity, int32_t numOfCols,
SFillInfo* taosCreateFillInfo(TSKEY skey, int32_t numOfFillCols, int32_t numOfNotFillCols, int32_t capacity,
SInterval* pInterval, int32_t fillType, struct SFillColInfo* pCol, int32_t slotId,
int32_t order, const char* id);

View File

@ -716,7 +716,7 @@ int32_t projectApplyFunctions(SExprInfo* pExpr, SSDataBlock* pResult, SSDataBloc
taosArrayDestroy(pBlockList);
}
} else {
ASSERT(0);
return TSDB_CODE_OPS_NOT_SUPPORT;
}
}
@ -3216,36 +3216,71 @@ int32_t handleLimitOffset(SOperatorInfo* pOperator, SLimitInfo* pLimitInfo, SSDa
}
}
static void doHandleRemainBlockForNewGroupImpl(SFillOperatorInfo* pInfo, SResultInfo* pResultInfo,
static void doApplyScalarCalculation(SOperatorInfo* pOperator, SSDataBlock* pBlock, int32_t order, int32_t scanFlag);
static void doHandleRemainBlockForNewGroupImpl(SOperatorInfo *pOperator, SFillOperatorInfo* pInfo, SResultInfo* pResultInfo,
SExecTaskInfo* pTaskInfo) {
pInfo->totalInputRows = pInfo->existNewGroupBlock->info.rows;
SSDataBlock* pResBlock = pInfo->pFinalRes;
int32_t order = TSDB_ORDER_ASC;
int32_t scanFlag = MAIN_SCAN;
getTableScanInfo(pOperator, &order, &scanFlag);
int64_t ekey =
Q_STATUS_EQUAL(pTaskInfo->status, TASK_COMPLETED) ? pInfo->win.ekey : pInfo->existNewGroupBlock->info.window.ekey;
taosResetFillInfo(pInfo->pFillInfo, getFillInfoStart(pInfo->pFillInfo));
taosFillSetStartInfo(pInfo->pFillInfo, pInfo->existNewGroupBlock->info.rows, ekey);
taosFillSetInputDataBlock(pInfo->pFillInfo, pInfo->existNewGroupBlock);
doApplyScalarCalculation(pOperator, pInfo->existNewGroupBlock, order, scanFlag);
int32_t numOfResultRows = pResultInfo->capacity - pInfo->pRes->info.rows;
taosFillResultDataBlock(pInfo->pFillInfo, pInfo->pRes, numOfResultRows);
taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, ekey);
taosFillSetInputDataBlock(pInfo->pFillInfo, pInfo->pRes);
int32_t numOfResultRows = pResultInfo->capacity - pResBlock->info.rows;
taosFillResultDataBlock(pInfo->pFillInfo, pResBlock, numOfResultRows);
pInfo->curGroupId = pInfo->existNewGroupBlock->info.groupId;
pInfo->existNewGroupBlock = NULL;
}
static void doHandleRemainBlockFromNewGroup(SFillOperatorInfo* pInfo, SResultInfo* pResultInfo,
static void doHandleRemainBlockFromNewGroup(SOperatorInfo* pOperator, SFillOperatorInfo* pInfo, SResultInfo* pResultInfo,
SExecTaskInfo* pTaskInfo) {
if (taosFillHasMoreResults(pInfo->pFillInfo)) {
int32_t numOfResultRows = pResultInfo->capacity - pInfo->pRes->info.rows;
taosFillResultDataBlock(pInfo->pFillInfo, pInfo->pRes, numOfResultRows);
int32_t numOfResultRows = pResultInfo->capacity - pInfo->pFinalRes->info.rows;
taosFillResultDataBlock(pInfo->pFillInfo, pInfo->pFinalRes, numOfResultRows);
pInfo->pRes->info.groupId = pInfo->curGroupId;
return;
}
// handle the cached new group data block
if (pInfo->existNewGroupBlock) {
doHandleRemainBlockForNewGroupImpl(pInfo, pResultInfo, pTaskInfo);
doHandleRemainBlockForNewGroupImpl(pOperator, pInfo, pResultInfo, pTaskInfo);
}
}
static void doApplyScalarCalculation(SOperatorInfo* pOperator, SSDataBlock* pBlock, int32_t order, int32_t scanFlag) {
SFillOperatorInfo* pInfo = pOperator->info;
SExprSupp* pSup = &pOperator->exprSupp;
SSDataBlock* pResBlock = pInfo->pFinalRes;
setInputDataBlock(pOperator, pSup->pCtx, pBlock, order, scanFlag, false);
projectApplyFunctions(pSup->pExprInfo, pInfo->pRes, pBlock, pSup->pCtx, pSup->numOfExprs, NULL);
pInfo->pRes->info.groupId = pBlock->info.groupId;
SColumnInfoData* pDst = taosArrayGet(pInfo->pRes->pDataBlock, pInfo->primaryTsCol);
SColumnInfoData* pSrc = taosArrayGet(pBlock->pDataBlock, pInfo->primarySrcSlotId);
colDataAssign(pDst, pSrc, pInfo->pRes->info.rows, &pResBlock->info);
for(int32_t i = 0; i < pInfo->numOfNotFillExpr; ++i) {
SFillColInfo* pCol = &pInfo->pFillInfo->pFillCol[i + pInfo->numOfExpr];
ASSERT(pCol->notFillCol);
SExprInfo* pExpr = pCol->pExpr;
int32_t srcSlotId = pExpr->base.pParam[0].pCol->slotId;
int32_t dstSlotId = pExpr->base.resSchema.slotId;
SColumnInfoData* pDst1 = taosArrayGet(pInfo->pRes->pDataBlock, dstSlotId);
SColumnInfoData* pSrc1 = taosArrayGet(pBlock->pDataBlock, srcSlotId);
colDataAssign(pDst1, pSrc1, pInfo->pRes->info.rows, &pResBlock->info);
}
}
@ -3254,11 +3289,16 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SResultInfo* pResultInfo = &pOperator->resultInfo;
SSDataBlock* pResBlock = pInfo->pRes;
SSDataBlock* pResBlock = pInfo->pFinalRes;
blockDataCleanup(pResBlock);
blockDataCleanup(pInfo->pRes);
doHandleRemainBlockFromNewGroup(pInfo, pResultInfo, pTaskInfo);
int32_t order = TSDB_ORDER_ASC;
int32_t scanFlag = MAIN_SCAN;
getTableScanInfo(pOperator, &order, &scanFlag);
doHandleRemainBlockFromNewGroup(pOperator, pInfo, pResultInfo, pTaskInfo);
if (pResBlock->info.rows > 0) {
pResBlock->info.groupId = pInfo->curGroupId;
return pResBlock;
@ -3269,21 +3309,21 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
SSDataBlock* pBlock = pDownstream->fpSet.getNextFn(pDownstream);
if (pBlock == NULL) {
if (pInfo->totalInputRows == 0) {
pOperator->status = OP_EXEC_DONE;
doSetOperatorCompleted(pOperator);
return NULL;
}
taosFillSetStartInfo(pInfo->pFillInfo, 0, pInfo->win.ekey);
} else {
blockDataUpdateTsWindow(pBlock, pInfo->primaryTsCol);
doApplyScalarCalculation(pOperator, pBlock, order, scanFlag);
if (pInfo->curGroupId == 0 || pInfo->curGroupId == pBlock->info.groupId) {
pInfo->curGroupId = pBlock->info.groupId; // the first data block
if (pInfo->curGroupId == 0 || pInfo->curGroupId == pInfo->pRes->info.groupId) {
pInfo->curGroupId = pInfo->pRes->info.groupId; // the first data block
pInfo->totalInputRows += pInfo->pRes->info.rows;
pInfo->totalInputRows += pBlock->info.rows;
taosFillSetStartInfo(pInfo->pFillInfo, pBlock->info.rows, pBlock->info.window.ekey);
taosFillSetInputDataBlock(pInfo->pFillInfo, pBlock);
taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, pBlock->info.window.ekey);
taosFillSetInputDataBlock(pInfo->pFillInfo, pInfo->pRes);
} else if (pInfo->curGroupId != pBlock->info.groupId) { // the new group data block
pInfo->existNewGroupBlock = pBlock;
@ -3293,8 +3333,6 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
}
}
blockDataEnsureCapacity(pResBlock, pOperator->resultInfo.capacity);
int32_t numOfResultRows = pOperator->resultInfo.capacity - pResBlock->info.rows;
taosFillResultDataBlock(pInfo->pFillInfo, pResBlock, numOfResultRows);
@ -3307,14 +3345,18 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
return pResBlock;
}
doHandleRemainBlockFromNewGroup(pInfo, pResultInfo, pTaskInfo);
doHandleRemainBlockFromNewGroup(pOperator, pInfo, pResultInfo, pTaskInfo);
if (pResBlock->info.rows >= pOperator->resultInfo.threshold || pBlock == NULL) {
pResBlock->info.groupId = pInfo->curGroupId;
return pResBlock;
}
} else if (pInfo->existNewGroupBlock) { // try next group
assert(pBlock != NULL);
doHandleRemainBlockForNewGroupImpl(pInfo, pResultInfo, pTaskInfo);
blockDataCleanup(pResBlock);
blockDataCleanup(pInfo->pRes);
doHandleRemainBlockForNewGroupImpl(pOperator, pInfo, pResultInfo, pTaskInfo);
if (pResBlock->info.rows > pResultInfo->threshold) {
pResBlock->info.groupId = pInfo->curGroupId;
return pResBlock;
@ -3605,6 +3647,13 @@ void destroyFillOperatorInfo(void* param, int32_t numOfOutput) {
SFillOperatorInfo* pInfo = (SFillOperatorInfo*)param;
pInfo->pFillInfo = taosDestroyFillInfo(pInfo->pFillInfo);
pInfo->pRes = blockDataDestroy(pInfo->pRes);
pInfo->pFinalRes = blockDataDestroy(pInfo->pFinalRes);
if (pInfo->pNotFillExprInfo != NULL) {
destroyExprInfo(pInfo->pNotFillExprInfo, pInfo->numOfNotFillExpr);
taosMemoryFree(pInfo->pNotFillExprInfo);
}
taosMemoryFreeClear(pInfo->p);
taosArrayDestroy(pInfo->pColMatchColInfo);
taosMemoryFreeClear(param);
@ -3637,16 +3686,16 @@ void doDestroyExchangeOperatorInfo(void* param) {
taosMemoryFreeClear(param);
}
static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t numOfCols, SNodeListNode* pValNode,
STimeWindow win, int32_t capacity, const char* id, SInterval* pInterval, int32_t fillType,
int32_t order) {
SFillColInfo* pColInfo = createFillColInfo(pExpr, numOfCols, pValNode);
static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t numOfCols, SExprInfo* pNotFillExpr,
int32_t numOfNotFillCols, SNodeListNode* pValNode, STimeWindow win, int32_t capacity,
const char* id, SInterval* pInterval, int32_t fillType, int32_t order) {
SFillColInfo* pColInfo = createFillColInfo(pExpr, numOfCols, pNotFillExpr, numOfNotFillCols, pValNode);
STimeWindow w = getAlignQueryTimeWindow(pInterval, pInterval->precision, win.skey);
w = getFirstQualifiedTimeWindow(win.skey, &w, pInterval, TSDB_ORDER_ASC);
pInfo->pFillInfo =
taosCreateFillInfo(w.skey, 0, capacity, numOfCols, pInterval, fillType, pColInfo, pInfo->primaryTsCol, order, id);
taosCreateFillInfo(w.skey, numOfCols, numOfNotFillCols, capacity, pInterval, fillType, pColInfo, pInfo->primaryTsCol, order, id);
pInfo->win = win;
pInfo->p = taosMemoryCalloc(numOfCols, POINTER_BYTES);
@ -3668,9 +3717,10 @@ SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode*
goto _error;
}
int32_t num = 0;
SSDataBlock* pResBlock = createResDataBlock(pPhyFillNode->node.pOutputDataBlockDesc);
SExprInfo* pExprInfo = createExprInfo(pPhyFillNode->pTargets, NULL, &num);
SExprInfo* pExprInfo = createExprInfo(pPhyFillNode->pFillExprs, NULL, &pInfo->numOfExpr);
pInfo->pNotFillExprInfo = createExprInfo(pPhyFillNode->pNotFillExprs, NULL, &pInfo->numOfNotFillExpr);
SInterval* pInterval =
QUERY_NODE_PHYSICAL_PLAN_MERGE_ALIGNED_INTERVAL == downstream->operatorType
? &((SMergeAlignedIntervalAggOperatorInfo*)downstream->info)->intervalAggOperatorInfo->interval
@ -3681,19 +3731,27 @@ SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode*
SResultInfo* pResultInfo = &pOperator->resultInfo;
initResultSizeInfo(&pOperator->resultInfo, 4096);
pInfo->primaryTsCol = ((SColumnNode*)pPhyFillNode->pWStartTs)->slotId;
blockDataEnsureCapacity(pResBlock, pOperator->resultInfo.capacity);
initExprSupp(&pOperator->exprSupp, pExprInfo, pInfo->numOfExpr);
pInfo->primaryTsCol = ((STargetNode*)pPhyFillNode->pWStartTs)->slotId;
pInfo->primarySrcSlotId = ((SColumnNode*)((STargetNode*)pPhyFillNode->pWStartTs)->pExpr)->slotId;
int32_t numOfOutputCols = 0;
SArray* pColMatchColInfo = extractColMatchInfo(pPhyFillNode->pTargets, pPhyFillNode->node.pOutputDataBlockDesc,
SArray* pColMatchColInfo = extractColMatchInfo(pPhyFillNode->pFillExprs, pPhyFillNode->node.pOutputDataBlockDesc,
&numOfOutputCols, COL_MATCH_FROM_SLOT_ID);
int32_t code = initFillInfo(pInfo, pExprInfo, num, (SNodeListNode*)pPhyFillNode->pValues, pPhyFillNode->timeRange,
pResultInfo->capacity, pTaskInfo->id.str, pInterval, type, order);
int32_t code =
initFillInfo(pInfo, pExprInfo, pInfo->numOfExpr, pInfo->pNotFillExprInfo, pInfo->numOfNotFillExpr, (SNodeListNode*)pPhyFillNode->pValues,
pPhyFillNode->timeRange, pResultInfo->capacity, pTaskInfo->id.str, pInterval, type, order);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
pInfo->pRes = pResBlock;
pInfo->pFinalRes = createOneDataBlock(pResBlock, false);
blockDataEnsureCapacity(pInfo->pFinalRes, pOperator->resultInfo.capacity);
pInfo->pCondition = pPhyFillNode->node.pConditions;
pInfo->pColMatchColInfo = pColMatchColInfo;
pOperator->name = "FillOperator";
@ -3701,7 +3759,7 @@ SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode*
pOperator->status = OP_NOT_OPENED;
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_FILL;
pOperator->exprSupp.pExprInfo = pExprInfo;
pOperator->exprSupp.numOfExprs = num;
pOperator->exprSupp.numOfExprs = pInfo->numOfExpr;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
@ -4254,8 +4312,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
ASSERT(0);
}
taosMemoryFree(ops);
pOptr->resultDataBlockId = pPhyNode->pOutputDataBlockDesc->dataBlockId;
if (pOptr) pOptr->resultDataBlockId = pPhyNode->pOutputDataBlockDesc->dataBlockId;
return pOptr;
}

View File

@ -174,7 +174,8 @@ static int32_t doIngroupLimitOffset(SLimitInfo* pLimitInfo, uint64_t groupId, SS
if (pLimitInfo->limit.limit >= 0 && pLimitInfo->numOfOutputRows + pBlock->info.rows >= pLimitInfo->limit.limit) {
int32_t keepRows = (int32_t)(pLimitInfo->limit.limit - pLimitInfo->numOfOutputRows);
blockDataKeepFirstNRows(pBlock, keepRows);
if (pLimitInfo->slimit.limit > 0 && pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups) {
//TODO: optimize it later when partition by + limit
if ((pLimitInfo->slimit.limit == -1 && pLimitInfo->currentGroupId == 0) || pLimitInfo->slimit.limit > 0 && pLimitInfo->slimit.limit <= pLimitInfo->numOfOutputGroups) {
doSetOperatorCompleted(pOperator);
}
}
@ -240,7 +241,8 @@ SSDataBlock* doProjectOperation(SOperatorInfo* pOperator) {
}
// for stream interval
if (pBlock->info.type == STREAM_RETRIEVE) {
if (pBlock->info.type == STREAM_RETRIEVE || pBlock->info.type == STREAM_DELETE_RESULT ||
pBlock->info.type == STREAM_DELETE_DATA) {
// printDataBlock1(pBlock, "project1");
return pBlock;
}

View File

@ -13,11 +13,11 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "os.h"
#include "executorimpl.h"
#include "filter.h"
#include "function.h"
#include "functionMgt.h"
#include "os.h"
#include "querynodes.h"
#include "systable.h"
#include "tname.h"
@ -128,7 +128,7 @@ static bool overlapWithTimeWindow(SInterval* pInterval, SDataBlockInfo* pBlockIn
w = getAlignQueryTimeWindow(pInterval, pInterval->precision, pBlockInfo->window.skey);
assert(w.ekey >= pBlockInfo->window.skey);
if (w.ekey < pBlockInfo->window.ekey) {
if (TMAX(w.skey, pBlockInfo->window.skey) <= TMIN(w.ekey, pBlockInfo->window.ekey)) {
return true;
}
@ -178,8 +178,8 @@ static SResultRow* getTableGroupOutputBuf(SOperatorInfo* pOperator, uint64_t gro
STableScanInfo* pTableScanInfo = pOperator->info;
SResultRowPosition* p1 =
(SResultRowPosition*)taosHashGet(pTableScanInfo->pdInfo.pAggSup->pResultRowHashTable, buf, GET_RES_WINDOW_KEY_LEN(sizeof(groupId)));
SResultRowPosition* p1 = (SResultRowPosition*)taosHashGet(pTableScanInfo->pdInfo.pAggSup->pResultRowHashTable, buf,
GET_RES_WINDOW_KEY_LEN(sizeof(groupId)));
if (p1 == NULL) {
return NULL;
@ -238,7 +238,7 @@ static FORCE_INLINE bool doFilterByBlockSMA(const SNode* pFilterNode, SColumnDat
// todo move to the initialization function
int32_t code = filterInitFromNode((SNode*)pFilterNode, &filter, 0);
bool keep = filterRangeExecute(filter, pColsAgg, numOfCols, numOfRows);
bool keep = filterRangeExecute(filter, pColsAgg, numOfCols, numOfRows);
filterFreeInfo(filter);
return keep;
@ -312,9 +312,9 @@ static int32_t loadDataBlock(SOperatorInfo* pOperator, STableScanInfo* pTableSca
return TSDB_CODE_SUCCESS;
} else if (*status == FUNC_DATA_REQUIRED_STATIS_LOAD) {
pCost->loadBlockStatis += 1;
loadSMA = true; // mark the operation of load sma;
loadSMA = true; // mark the operation of load sma;
bool success = doLoadBlockSMA(pTableScanInfo, pBlock, pTaskInfo);
if (success) { // failed to load the block sma data, data block statistics does not exist, load data block instead
if (success) { // failed to load the block sma data, data block statistics does not exist, load data block instead
qDebug("%s data block SMA loaded, brange:%" PRId64 "-%" PRId64 ", rows:%d", GET_TASKID(pTaskInfo),
pBlockInfo->window.skey, pBlockInfo->window.ekey, pBlockInfo->rows);
return TSDB_CODE_SUCCESS;
@ -453,7 +453,7 @@ int32_t addTagPseudoColumnData(SReadHandle* pHandle, SExprInfo* pPseudoExpr, int
colDataAppendNNULL(pColInfoData, 0, pBlock->info.rows);
} else if (pColInfoData->info.type != TSDB_DATA_TYPE_JSON) {
colDataAppendNItems(pColInfoData, 0, data, pBlock->info.rows);
} else { // todo opt for json tag
} else { // todo opt for json tag
for (int32_t i = 0; i < pBlock->info.rows; ++i) {
colDataAppend(pColInfoData, i, data, false);
}
@ -570,7 +570,10 @@ static SSDataBlock* doTableScanGroup(SOperatorInfo* pOperator) {
if (pTableScanInfo->scanTimes < pTableScanInfo->scanInfo.numOfAsc) {
setTaskStatus(pTaskInfo, TASK_NOT_COMPLETED);
pTableScanInfo->scanFlag = REPEAT_SCAN;
qDebug("%s start to repeat ascending order scan data SELECT last_row(*),hostname from cpu group by hostname;blocks due to query func required", GET_TASKID(pTaskInfo));
qDebug(
"%s start to repeat ascending order scan data SELECT last_row(*),hostname from cpu group by hostname;blocks "
"due to query func required",
GET_TASKID(pTaskInfo));
// do prepare for the next round table scan operation
tsdbReaderReset(pTableScanInfo->dataReader, &pTableScanInfo->cond);
@ -1174,16 +1177,18 @@ static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock
for (int32_t rowId = 0; rowId < pBlock->info.rows; rowId++) {
SResultRowInfo dumyInfo;
dumyInfo.cur.pageId = -1;
bool isClosed = false;
bool isClosed = false;
STimeWindow win = {.skey = INT64_MIN, .ekey = INT64_MAX};
if (isOverdue(tsCol[rowId], &pInfo->twAggSup)) {
win = getActiveTimeWindow(NULL, &dumyInfo, tsCol[rowId], &pInfo->interval, TSDB_ORDER_ASC);
isClosed = isCloseWindow(&win, &pInfo->twAggSup);
}
bool inserted = updateInfoIsTableInserted(pInfo->pUpdateInfo, pBlock->info.uid);
// must check update info first.
bool update = updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.uid, tsCol[rowId]);
if ((update || (isSignleIntervalWindow(pInfo) && isClosed &&
isDeletedWindow(&win, pBlock->info.groupId, pInfo->sessionSup.pIntervalAggSup))) && out) {
bool closedWin = isClosed && inserted && isSignleIntervalWindow(pInfo) &&
isDeletedWindow(&win, pBlock->info.groupId, pInfo->sessionSup.pIntervalAggSup);
if ((update || closedWin) && out) {
appendOneRow(pInfo->pUpdateDataRes, tsCol + rowId, tsCol + rowId, &pBlock->info.uid);
}
}
@ -1390,8 +1395,8 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
SSDataBlock* pSDB = doRangeScan(pInfo, pInfo->pUpdateRes, pInfo->primaryTsIndex, &pInfo->updateResIndex);
if (pSDB) {
STableScanInfo* pTableScanInfo = pInfo->pTableScanOp->info;
uint64_t version = getReaderMaxVersion(pTableScanInfo->dataReader);
updateInfoSetScanRange(pInfo->pUpdateInfo, &pTableScanInfo->cond.twindows, pInfo->groupId,version);
uint64_t version = getReaderMaxVersion(pTableScanInfo->dataReader);
updateInfoSetScanRange(pInfo->pUpdateInfo, &pTableScanInfo->cond.twindows, pInfo->groupId, version);
pSDB->info.type = pInfo->scanMode == STREAM_SCAN_FROM_DATAREADER_RANGE ? STREAM_NORMAL : STREAM_PULL_DATA;
checkUpdateData(pInfo, true, pSDB, false);
return pSDB;
@ -1445,7 +1450,8 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
setBlockIntoRes(pInfo, &block);
if (updateInfoIgnore(pInfo->pUpdateInfo, &pInfo->pRes->info.window, pInfo->pRes->info.groupId, pInfo->pRes->info.version)) {
if (updateInfoIgnore(pInfo->pUpdateInfo, &pInfo->pRes->info.window, pInfo->pRes->info.groupId,
pInfo->pRes->info.version)) {
printDataBlock(pInfo->pRes, "stream scan ignore");
blockDataCleanup(pInfo->pRes);
continue;
@ -2248,7 +2254,7 @@ static SSDataBlock* doSysTableScan(SOperatorInfo* pOperator) {
// build message and send to mnode to fetch the content of system tables.
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SSysTableScanInfo* pInfo = pOperator->info;
char dbName[TSDB_DB_NAME_LEN] = {0};
char dbName[TSDB_DB_NAME_LEN] = {0};
const char* name = tNameGetTableName(&pInfo->name);
if (pInfo->showRewrite) {
@ -2260,8 +2266,8 @@ static SSDataBlock* doSysTableScan(SOperatorInfo* pOperator) {
return sysTableScanUserTables(pOperator);
} else if (strncasecmp(name, TSDB_INS_TABLE_TAGS, TSDB_TABLE_FNAME_LEN) == 0) {
return sysTableScanUserTags(pOperator);
} else if (strncasecmp(name, TSDB_INS_TABLE_STABLES, TSDB_TABLE_FNAME_LEN) == 0 &&
pInfo->showRewrite && IS_SYS_DBNAME(dbName)) {
} else if (strncasecmp(name, TSDB_INS_TABLE_STABLES, TSDB_TABLE_FNAME_LEN) == 0 && pInfo->showRewrite &&
IS_SYS_DBNAME(dbName)) {
return sysTableScanUserSTables(pOperator);
} else { // load the meta from mnode of the given epset
if (pOperator->status == OP_EXEC_DONE) {
@ -2541,7 +2547,7 @@ static void destroyTagScanOperatorInfo(void* param, int32_t numOfOutput) {
pInfo->pRes = blockDataDestroy(pInfo->pRes);
taosArrayDestroy(pInfo->pColMatchInfo);
taosMemoryFreeClear(param);
}
@ -2597,7 +2603,6 @@ _error:
int32_t createScanTableListInfo(SScanPhysiNode* pScanNode, SNodeList* pGroupTags, bool groupSort, SReadHandle* pHandle,
STableListInfo* pTableListInfo, SNode* pTagCond, SNode* pTagIndexCond,
const char* idStr) {
int64_t st = taosGetTimestampUs();
int32_t code = getTableList(pHandle->meta, pHandle->vnode, pScanNode, pTagCond, pTagIndexCond, pTableListInfo);
@ -2606,7 +2611,7 @@ int32_t createScanTableListInfo(SScanPhysiNode* pScanNode, SNodeList* pGroupTags
}
int64_t st1 = taosGetTimestampUs();
qDebug("generate queried table list completed, elapsed time:%.2f ms %s", (st1-st)/1000.0, idStr);
qDebug("generate queried table list completed, elapsed time:%.2f ms %s", (st1 - st) / 1000.0, idStr);
if (taosArrayGetSize(pTableListInfo->pTableList) == 0) {
qDebug("no table qualified for query, %s" PRIx64, idStr);
@ -2620,7 +2625,7 @@ int32_t createScanTableListInfo(SScanPhysiNode* pScanNode, SNodeList* pGroupTags
}
int64_t st2 = taosGetTimestampUs();
qDebug("generate group id map completed, elapsed time:%.2f ms %s", (st2-st1)/1000.0, idStr);
qDebug("generate group id map completed, elapsed time:%.2f ms %s", (st2 - st1) / 1000.0, idStr);
return TSDB_CODE_SUCCESS;
}

View File

@ -33,39 +33,29 @@
#define DO_INTERPOLATION(_v1, _v2, _k1, _k2, _k) \
((_v1) + ((_v2) - (_v1)) * (((double)(_k)) - ((double)(_k1))) / (((double)(_k2)) - ((double)(_k1))))
static void setTagsValue(SFillInfo* pFillInfo, void** data, int32_t genRows) {
for (int32_t j = 0; j < pFillInfo->numOfCols; ++j) {
SFillColInfo* pCol = &pFillInfo->pFillCol[j];
if (TSDB_COL_IS_NORMAL_COL(pCol->flag) || TSDB_COL_IS_UD_COL(pCol->flag)) {
continue;
}
SResSchema* pSchema = &pCol->pExpr->base.resSchema;
char* val1 = elePtrAt(data[j], pSchema->bytes, genRows);
assert(pCol->tagIndex >= 0 && pCol->tagIndex < pFillInfo->numOfTags);
SFillTagColInfo* pTag = &pFillInfo->pTags[pCol->tagIndex];
assignVal(val1, pTag->tagVal, pSchema->bytes, pSchema->type);
}
}
static void setNullRow(SSDataBlock* pBlock, int64_t ts, int32_t rowIndex) {
// the first are always the timestamp column, so start from the second column.
for (int32_t i = 0; i < taosArrayGetSize(pBlock->pDataBlock); ++i) {
SColumnInfoData* p = taosArrayGet(pBlock->pDataBlock, i);
if (p->info.type == TSDB_DATA_TYPE_TIMESTAMP) { // handle timestamp
colDataAppend(p, rowIndex, (const char*)&ts, false);
} else {
colDataAppendNULL(p, rowIndex);
}
}
}
#define GET_DEST_SLOT_ID(_p) ((_p)->pExpr->base.resSchema.slotId)
#define GET_SRC_SLOT_ID(_p) ((_p)->pExpr->base.pParam[0].pCol->slotId)
static void doSetVal(SColumnInfoData* pDstColInfoData, int32_t rowIndex, const SGroupKeys* pKey);
static void setNullRow(SSDataBlock* pBlock, SFillInfo* pFillInfo, int32_t rowIndex) {
for(int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
int32_t dstSlotId = GET_DEST_SLOT_ID(pCol);
SColumnInfoData* pDstColInfo = taosArrayGet(pBlock->pDataBlock, dstSlotId);
if (pCol->notFillCol) {
if (pDstColInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDstColInfo, rowIndex, (const char*)&pFillInfo->currentKey, false);
} else {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDstColInfo, rowIndex, pKey);
}
} else {
colDataAppendNULL(pDstColInfo, rowIndex);
}
}
}
static void doSetUserSpecifiedValue(SColumnInfoData* pDst, SVariant* pVar, int32_t rowIndex, int64_t currentKey) {
if (pDst->info.type == TSDB_DATA_TYPE_FLOAT) {
float v = 0;
@ -96,13 +86,10 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
// set the other values
if (pFillInfo->type == TSDB_FILL_PREV) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev : pFillInfo->next;
SArray* p = FILL_IS_ASC_FILL(pFillInfo)? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
if (TSDB_COL_IS_TAG(pCol->flag)) {
continue;
}
SColumnInfoData* pDstColInfoData = taosArrayGet(pBlock->pDataBlock, GET_DEST_SLOT_ID(pCol));
@ -114,14 +101,10 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
}
}
} else if (pFillInfo->type == TSDB_FILL_NEXT) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next : pFillInfo->prev;
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next.pRowVal : pFillInfo->prev.pRowVal;
// todo refactor: start from 0 not 1
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
if (TSDB_COL_IS_TAG(pCol->flag)) {
continue;
}
SColumnInfoData* pDstColInfoData = taosArrayGet(pBlock->pDataBlock, GET_DEST_SLOT_ID(pCol));
if (pDstColInfoData->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
@ -134,59 +117,70 @@ static void doFillOneRow(SFillInfo* pFillInfo, SSDataBlock* pBlock, SSDataBlock*
} else if (pFillInfo->type == TSDB_FILL_LINEAR) {
// TODO : linear interpolation supports NULL value
if (outOfBound) {
setNullRow(pBlock, pFillInfo->currentKey, index);
setNullRow(pBlock, pFillInfo, index);
} else {
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
if (TSDB_COL_IS_TAG(pCol->flag)) {
continue;
}
int32_t dstSlotId = GET_DEST_SLOT_ID(pCol);
SColumnInfoData* pDstCol = taosArrayGet(pBlock->pDataBlock, dstSlotId);
int16_t type = pDstCol->info.type;
int16_t type = pDstCol->info.type;
if (type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDstCol, index, (const char*)&pFillInfo->currentKey, false);
continue;
if (pCol->notFillCol) {
if (type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDstCol, index, (const char*)&pFillInfo->currentKey, false);
} else {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDstCol, index, pKey);
}
} else {
SGroupKeys* pKey = taosArrayGet(pFillInfo->prev.pRowVal, i);
if (IS_VAR_DATA_TYPE(type) || type == TSDB_DATA_TYPE_BOOL || pKey->isNull) {
colDataAppendNULL(pDstCol, index);
continue;
}
SGroupKeys* pKey1 = taosArrayGet(pFillInfo->prev.pRowVal, pFillInfo->tsSlotId);
int64_t prevTs = *(int64_t*)pKey1->pData;
int32_t srcSlotId = GET_DEST_SLOT_ID(pCol);
SColumnInfoData* pSrcCol = taosArrayGet(pSrcBlock->pDataBlock, srcSlotId);
char* data = colDataGetData(pSrcCol, pFillInfo->index);
point1 = (SPoint){.key = prevTs, .val = pKey->pData};
point2 = (SPoint){.key = ts, .val = data};
int64_t out = 0;
point = (SPoint){.key = pFillInfo->currentKey, .val = &out};
taosGetLinearInterpolationVal(&point, type, &point1, &point2, type);
colDataAppend(pDstCol, index, (const char*)&out, false);
}
SGroupKeys* pKey = taosArrayGet(pFillInfo->prev, i);
if (IS_VAR_DATA_TYPE(type) || type == TSDB_DATA_TYPE_BOOL || pKey->isNull) {
colDataAppendNULL(pDstCol, index);
continue;
}
SGroupKeys* pKey1 = taosArrayGet(pFillInfo->prev, pFillInfo->tsSlotId);
int64_t prevTs = *(int64_t*)pKey1->pData;
int32_t srcSlotId = GET_SRC_SLOT_ID(pCol);
SColumnInfoData* pSrcCol = taosArrayGet(pSrcBlock->pDataBlock, srcSlotId);
char* data = colDataGetData(pSrcCol, pFillInfo->index);
point1 = (SPoint){.key = prevTs, .val = pKey->pData};
point2 = (SPoint){.key = ts, .val = data};
int64_t out = 0;
point = (SPoint){.key = pFillInfo->currentKey, .val = &out};
taosGetLinearInterpolationVal(&point, type, &point1, &point2, type);
colDataAppend(pDstCol, index, (const char*)&out, false);
}
}
} else if (pFillInfo->type == TSDB_FILL_NULL) { // fill with NULL
setNullRow(pBlock, pFillInfo->currentKey, index);
setNullRow(pBlock, pFillInfo, index);
} else { // fill with user specified value for each column
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
if (TSDB_COL_IS_TAG(pCol->flag)) {
continue;
}
SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
SColumnInfoData* pDst = taosArrayGet(pBlock->pDataBlock, i);
doSetUserSpecifiedValue(pDst, pVar, index, pFillInfo->currentKey);
int32_t slotId = GET_DEST_SLOT_ID(pCol);
SColumnInfoData* pDst = taosArrayGet(pBlock->pDataBlock, slotId);
if (pCol->notFillCol) {
if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
} else {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDst, index, pKey);
}
} else {
SVariant* pVar = &pFillInfo->pFillCol[i].fillVal;
doSetUserSpecifiedValue(pDst, pVar, index, pFillInfo->currentKey);
}
}
}
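For reference, the TSDB_FILL_LINEAR branch above evaluates the same expression as the DO_INTERPOLATION macro at the top of this file. A minimal standalone sketch follows; the helper name is illustrative and not part of the codebase:

```c
#include <stdint.h>

// Interpolate the value at fill key k between the previous point (k1, v1) and the
// current source point (k2, v2); this mirrors the DO_INTERPOLATION macro above.
static double linearFillValue(int64_t k, int64_t k1, double v1, int64_t k2, double v2) {
  return v1 + (v2 - v1) * (((double)k - (double)k1) / ((double)k2 - (double)k1));
}
```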
@ -207,7 +201,7 @@ void doSetVal(SColumnInfoData* pDstCol, int32_t rowIndex, const SGroupKeys* pKey
}
static void initBeforeAfterDataBuf(SFillInfo* pFillInfo) {
if (taosArrayGetSize(pFillInfo->next) > 0) {
if (taosArrayGetSize(pFillInfo->next.pRowVal) > 0) {
return;
}
@ -221,10 +215,10 @@ static void initBeforeAfterDataBuf(SFillInfo* pFillInfo) {
key.bytes = pSchema->bytes;
key.type = pSchema->type;
taosArrayPush(pFillInfo->next, &key);
taosArrayPush(pFillInfo->next.pRowVal, &key);
key.pData = taosMemoryMalloc(pSchema->bytes);
taosArrayPush(pFillInfo->prev, &key);
taosArrayPush(pFillInfo->prev.pRowVal, &key);
}
}
@ -232,20 +226,31 @@ static void saveColData(SArray* rowBuf, int32_t columnIndex, const char* src, bo
static void copyCurrentRowIntoBuf(SFillInfo* pFillInfo, int32_t rowIndex, SArray* pRow) {
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
int32_t srcSlotId = GET_SRC_SLOT_ID(&pFillInfo->pFillCol[i]);
int32_t type = pFillInfo->pFillCol[i].pExpr->pExpr->nodeType;
if (type == QUERY_NODE_COLUMN) {
int32_t srcSlotId = GET_DEST_SLOT_ID(&pFillInfo->pFillCol[i]);
SColumnInfoData* pSrcCol = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, srcSlotId);
SColumnInfoData* pSrcCol = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, srcSlotId);
bool isNull = colDataIsNull_s(pSrcCol, rowIndex);
char* p = colDataGetData(pSrcCol, rowIndex);
saveColData(pRow, i, p, isNull);
bool isNull = colDataIsNull_s(pSrcCol, rowIndex);
char* p = colDataGetData(pSrcCol, rowIndex);
saveColData(pRow, i, p, isNull);
} else if (type == QUERY_NODE_OPERATOR) {
SColumnInfoData* pSrcCol = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, i);
bool isNull = colDataIsNull_s(pSrcCol, rowIndex);
char* p = colDataGetData(pSrcCol, rowIndex);
saveColData(pRow, i, p, isNull);
} else {
ASSERT(0);
}
}
}
static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t outputRows) {
pFillInfo->numOfCurrent = 0;
SColumnInfoData* pTsCol = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, pFillInfo->tsSlotId);
SColumnInfoData* pTsCol = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, pFillInfo->srcTsSlotId);
int32_t step = GET_FORWARD_DIRECTION_FACTOR(pFillInfo->order);
bool ascFill = FILL_IS_ASC_FILL(pFillInfo);
@ -259,7 +264,7 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
// set the next value for interpolation
if ((pFillInfo->currentKey < ts && ascFill) || (pFillInfo->currentKey > ts && !ascFill)) {
copyCurrentRowIntoBuf(pFillInfo, pFillInfo->index, pFillInfo->next);
copyCurrentRowIntoBuf(pFillInfo, pFillInfo->index, pFillInfo->next.pRowVal);
}
if (((pFillInfo->currentKey < ts && ascFill) || (pFillInfo->currentKey > ts && !ascFill)) &&
@ -281,43 +286,38 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
if (pFillInfo->type == TSDB_FILL_NEXT && (pFillInfo->index + 1) < pFillInfo->numOfRows) {
int32_t nextRowIndex = pFillInfo->index + 1;
copyCurrentRowIntoBuf(pFillInfo, nextRowIndex, pFillInfo->next);
copyCurrentRowIntoBuf(pFillInfo, nextRowIndex, pFillInfo->next.pRowVal);
}
// assign rows to dst buffer
// copy rows to dst buffer
for (int32_t i = 0; i < pFillInfo->numOfCols; ++i) {
SFillColInfo* pCol = &pFillInfo->pFillCol[i];
if (TSDB_COL_IS_TAG(pCol->flag) /* || IS_VAR_DATA_TYPE(pCol->schema.type)*/) {
continue;
}
int32_t srcSlotId = GET_SRC_SLOT_ID(pCol);
int32_t dstSlotId = GET_DEST_SLOT_ID(pCol);
SColumnInfoData* pDst = taosArrayGet(pBlock->pDataBlock, dstSlotId);
SColumnInfoData* pSrc = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, srcSlotId);
SColumnInfoData* pSrc = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, dstSlotId);
char* src = colDataGetData(pSrc, pFillInfo->index);
if (/*i == 0 || (*/ !colDataIsNull_s(pSrc, pFillInfo->index)) {
bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
colDataAppend(pDst, index, src, isNull);
saveColData(pFillInfo->prev, i, src, isNull);
} else {
if (!colDataIsNull_s(pSrc, pFillInfo->index)) {
colDataAppend(pDst, index, src, false);
saveColData(pFillInfo->prev.pRowVal, i, src, false);
} else { // the value is null
if (pDst->info.type == TSDB_DATA_TYPE_TIMESTAMP) {
colDataAppend(pDst, index, (const char*)&pFillInfo->currentKey, false);
} else { // i > 0 and data is null , do interpolation
if (pFillInfo->type == TSDB_FILL_PREV) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev : pFillInfo->next;
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->prev.pRowVal : pFillInfo->next.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDst, index, pKey);
} else if (pFillInfo->type == TSDB_FILL_LINEAR) {
bool isNull = colDataIsNull_s(pSrc, pFillInfo->index);
colDataAppend(pDst, index, src, isNull);
saveColData(pFillInfo->prev, i, src, isNull); // todo:
saveColData(pFillInfo->prev.pRowVal, i, src, isNull); // todo:
} else if (pFillInfo->type == TSDB_FILL_NULL) {
colDataAppendNULL(pDst, index);
} else if (pFillInfo->type == TSDB_FILL_NEXT) {
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next : pFillInfo->prev;
SArray* p = FILL_IS_ASC_FILL(pFillInfo) ? pFillInfo->next.pRowVal : pFillInfo->prev.pRowVal;
SGroupKeys* pKey = taosArrayGet(p, i);
doSetVal(pDst, index, pKey);
} else {
@ -340,10 +340,6 @@ static int32_t fillResultImpl(SFillInfo* pFillInfo, SSDataBlock* pBlock, int32_t
}
if (pFillInfo->index >= pFillInfo->numOfRows || pFillInfo->numOfCurrent >= outputRows) {
/* the raw data block is exhausted; the next value does not exist */

// if (pFillInfo->index >= pFillInfo->numOfRows) {
// taosMemoryFreeClear(*next);
// }
pFillInfo->numOfTotal += pFillInfo->numOfCurrent;
return pFillInfo->numOfCurrent;
}
@ -357,7 +353,11 @@ static void saveColData(SArray* rowBuf, int32_t columnIndex, const char* src, bo
if (isNull) {
pKey->isNull = true;
} else {
memcpy(pKey->pData, src, pKey->bytes);
if (IS_VAR_DATA_TYPE(pKey->type)) {
memcpy(pKey->pData, src, varDataTLen(src));
} else {
memcpy(pKey->pData, src, pKey->bytes);
}
pKey->isNull = false;
}
}
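The switch to varDataTLen above matters because variable-length values carry their own length prefix, so copying the column's declared byte width would copy the wrong amount. A sketch of the assumed layout (TDengine-style 2-byte length header; the macro names are spelled out for illustration and may not match the real headers exactly):

```c
#include <stdint.h>
#include <string.h>

typedef uint16_t VarDataLenT;

// Total stored length of a var-type value: 2-byte length prefix + payload bytes.
#define VAR_DATA_LEN(p)  (*(const VarDataLenT*)(p))
#define VAR_DATA_TLEN(p) ((int32_t)(sizeof(VarDataLenT) + VAR_DATA_LEN(p)))

static void copyVarValue(char* dst, const char* src) {
  memcpy(dst, src, VAR_DATA_TLEN(src));  // copy header + payload, not the max column width
}
```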
@ -378,53 +378,6 @@ static int64_t appendFilledResult(SFillInfo* pFillInfo, SSDataBlock* pBlock, int
return resultCapacity;
}
// there are no duplicated tags in the SFillTagColInfo list
static int32_t setTagColumnInfo(SFillInfo* pFillInfo, int32_t numOfCols, int32_t capacity) {
int32_t rowsize = 0;
int32_t numOfTags = 0;
int32_t k = 0;
for (int32_t i = 0; i < numOfCols; ++i) {
SFillColInfo* pColInfo = &pFillInfo->pFillCol[i];
SResSchema* pSchema = &pColInfo->pExpr->base.resSchema;
if (TSDB_COL_IS_TAG(pColInfo->flag) || pSchema->type == TSDB_DATA_TYPE_BINARY) {
numOfTags += 1;
bool exists = false;
int32_t index = -1;
for (int32_t j = 0; j < k; ++j) {
if (pFillInfo->pTags[j].col.colId == pSchema->slotId) {
exists = true;
index = j;
break;
}
}
if (!exists) {
SSchema* pSchema1 = &pFillInfo->pTags[k].col;
pSchema1->colId = pSchema->slotId;
pSchema1->type = pSchema->type;
pSchema1->bytes = pSchema->bytes;
pFillInfo->pTags[k].tagVal = taosMemoryCalloc(1, pSchema->bytes);
pColInfo->tagIndex = k;
k += 1;
} else {
pColInfo->tagIndex = index;
}
}
rowsize += pSchema->bytes;
}
pFillInfo->numOfTags = numOfTags;
assert(k <= pFillInfo->numOfTags);
return rowsize;
}
static int32_t taosNumOfRemainRows(SFillInfo* pFillInfo) {
if (pFillInfo->numOfRows == 0 || (pFillInfo->numOfRows > 0 && pFillInfo->index >= pFillInfo->numOfRows)) {
return 0;
@ -433,7 +386,7 @@ static int32_t taosNumOfRemainRows(SFillInfo* pFillInfo) {
return pFillInfo->numOfRows - pFillInfo->index;
}
struct SFillInfo* taosCreateFillInfo(TSKEY skey, int32_t numOfTags, int32_t capacity, int32_t numOfCols,
struct SFillInfo* taosCreateFillInfo(TSKEY skey, int32_t numOfFillCols, int32_t numOfNotFillCols, int32_t capacity,
SInterval* pInterval, int32_t fillType, struct SFillColInfo* pCol,
int32_t primaryTsSlotId, int32_t order, const char* id) {
if (fillType == TSDB_FILL_NONE) {
@ -447,7 +400,17 @@ struct SFillInfo* taosCreateFillInfo(TSKEY skey, int32_t numOfTags, int32_t capa
}
pFillInfo->order = order;
pFillInfo->tsSlotId = primaryTsSlotId;
pFillInfo->srcTsSlotId = primaryTsSlotId;
for(int32_t i = 0; i < numOfNotFillCols; ++i) {
SFillColInfo* p = &pCol[i + numOfFillCols];
int32_t srcSlotId = GET_DEST_SLOT_ID(p);
if (srcSlotId == primaryTsSlotId) {
pFillInfo->tsSlotId = i + numOfFillCols;
break;
}
}
taosResetFillInfo(pFillInfo, skey);
switch (fillType) {
@ -476,26 +439,15 @@ struct SFillInfo* taosCreateFillInfo(TSKEY skey, int32_t numOfTags, int32_t capa
pFillInfo->type = fillType;
pFillInfo->pFillCol = pCol;
pFillInfo->numOfTags = numOfTags;
pFillInfo->numOfCols = numOfCols;
pFillInfo->numOfCols = numOfFillCols + numOfNotFillCols;
pFillInfo->alloc = capacity;
pFillInfo->id = id;
pFillInfo->interval = *pInterval;
// if (numOfTags > 0) {
pFillInfo->pTags = taosMemoryCalloc(numOfCols, sizeof(SFillTagColInfo));
for (int32_t i = 0; i < numOfCols; ++i) {
pFillInfo->pTags[i].col.colId = -2; // TODO
}
// }
pFillInfo->next = taosArrayInit(numOfCols, sizeof(SGroupKeys));
pFillInfo->prev = taosArrayInit(numOfCols, sizeof(SGroupKeys));
pFillInfo->next.pRowVal = taosArrayInit(pFillInfo->numOfCols, sizeof(SGroupKeys));
pFillInfo->prev.pRowVal = taosArrayInit(pFillInfo->numOfCols, sizeof(SGroupKeys));
initBeforeAfterDataBuf(pFillInfo);
pFillInfo->rowSize = setTagColumnInfo(pFillInfo, pFillInfo->numOfCols, pFillInfo->alloc);
assert(pFillInfo->rowSize > 0);
return pFillInfo;
}
@ -513,20 +465,20 @@ void* taosDestroyFillInfo(SFillInfo* pFillInfo) {
if (pFillInfo == NULL) {
return NULL;
}
for (int32_t i = 0; i < taosArrayGetSize(pFillInfo->prev); ++i) {
SGroupKeys* pKey = taosArrayGet(pFillInfo->prev, i);
for (int32_t i = 0; i < taosArrayGetSize(pFillInfo->prev.pRowVal); ++i) {
SGroupKeys* pKey = taosArrayGet(pFillInfo->prev.pRowVal, i);
taosMemoryFree(pKey->pData);
}
taosArrayDestroy(pFillInfo->prev);
for (int32_t i = 0; i < taosArrayGetSize(pFillInfo->next); ++i) {
SGroupKeys* pKey = taosArrayGet(pFillInfo->next, i);
taosArrayDestroy(pFillInfo->prev.pRowVal);
for (int32_t i = 0; i < taosArrayGetSize(pFillInfo->next.pRowVal); ++i) {
SGroupKeys* pKey = taosArrayGet(pFillInfo->next.pRowVal, i);
taosMemoryFree(pKey->pData);
}
taosArrayDestroy(pFillInfo->next);
taosArrayDestroy(pFillInfo->next.pRowVal);
for (int32_t i = 0; i < pFillInfo->numOfTags; ++i) {
taosMemoryFreeClear(pFillInfo->pTags[i].tagVal);
}
// for (int32_t i = 0; i < pFillInfo->numOfTags; ++i) {
// taosMemoryFreeClear(pFillInfo->pTags[i].tagVal);
// }
taosMemoryFreeClear(pFillInfo->pTags);
taosMemoryFreeClear(pFillInfo->pFillCol);
@ -576,7 +528,7 @@ bool taosFillHasMoreResults(SFillInfo* pFillInfo) {
}
int64_t getNumOfResultsAfterFillGap(SFillInfo* pFillInfo, TSKEY ekey, int32_t maxNumOfRows) {
SColumnInfoData* pCol = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, 0);
SColumnInfoData* pCol = taosArrayGet(pFillInfo->pSrcBlock->pDataBlock, pFillInfo->srcTsSlotId);
int64_t* tsList = (int64_t*)pCol->pData;
int32_t numOfRows = taosNumOfRemainRows(pFillInfo);
@ -642,17 +594,18 @@ int64_t taosFillResultDataBlock(SFillInfo* pFillInfo, SSDataBlock* p, int32_t ca
int64_t getFillInfoStart(struct SFillInfo* pFillInfo) { return pFillInfo->start; }
SFillColInfo* createFillColInfo(SExprInfo* pExpr, int32_t numOfOutput, const struct SNodeListNode* pValNode) {
SFillColInfo* pFillCol = taosMemoryCalloc(numOfOutput, sizeof(SFillColInfo));
SFillColInfo* createFillColInfo(SExprInfo* pExpr, int32_t numOfFillExpr, SExprInfo* pNotFillExpr,
int32_t numOfNotFillExpr, const struct SNodeListNode* pValNode) {
SFillColInfo* pFillCol = taosMemoryCalloc(numOfFillExpr + numOfNotFillExpr, sizeof(SFillColInfo));
if (pFillCol == NULL) {
return NULL;
}
size_t len = (pValNode != NULL) ? LIST_LENGTH(pValNode->pNodeList) : 0;
for (int32_t i = 0; i < numOfOutput; ++i) {
for (int32_t i = 0; i < numOfFillExpr; ++i) {
SExprInfo* pExprInfo = &pExpr[i];
pFillCol[i].pExpr = pExprInfo;
pFillCol[i].tagIndex = -2;
pFillCol[i].notFillCol = false;
// todo refactor
if (len > 0) {
@ -662,10 +615,12 @@ SFillColInfo* createFillColInfo(SExprInfo* pExpr, int32_t numOfOutput, const str
SValueNode* pv = (SValueNode*)nodesListGetNode(pValNode->pNodeList, index);
nodesValueNodeToVariant(pv, &pFillCol[i].fillVal);
}
}
if (pExprInfo->base.numOfParams > 0) {
pFillCol[i].flag = pExprInfo->base.pParam[0].pCol->flag; // always be the normal column for table query
}
for(int32_t i = 0; i < numOfNotFillExpr; ++i) {
SExprInfo* pExprInfo = &pNotFillExpr[i];
pFillCol[i + numOfFillExpr].pExpr = pExprInfo;
pFillCol[i + numOfFillExpr].notFillCol = true;
}
return pFillCol;

View File

@ -2675,10 +2675,8 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode
pInfo->fillType = convertFillType(pInterpPhyNode->fillMode);
initResultSizeInfo(&pOperator->resultInfo, 4096);
pInfo->pPrevRow = NULL;
pInfo->pNextRow = NULL;
pInfo->pFillColInfo = createFillColInfo(pExprInfo, numOfExprs, NULL, 0, (SNodeListNode*)pInterpPhyNode->pFillValues);
pInfo->pLinearInfo = NULL;
pInfo->pFillColInfo = createFillColInfo(pExprInfo, numOfExprs, (SNodeListNode*)pInterpPhyNode->pFillValues);
pInfo->pRes = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
pInfo->win = pInterpPhyNode->timeRange;
pInfo->interval.interval = pInterpPhyNode->interval;

View File

@ -2176,7 +2176,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "top",
.type = FUNCTION_TYPE_TOP,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_FILL_FUNC,
.translateFunc = translateTopBot,
.getEnvFunc = getTopBotFuncEnv,
.initFunc = topBotFunctionSetup,
@ -2191,7 +2191,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "bottom",
.type = FUNCTION_TYPE_BOTTOM,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_FILL_FUNC,
.translateFunc = translateTopBot,
.getEnvFunc = getTopBotFuncEnv,
.initFunc = topBotFunctionSetup,
@ -2590,7 +2590,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
{
.name = "sample",
.type = FUNCTION_TYPE_SAMPLE,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_KEEP_ORDER_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_FILL_FUNC,
.translateFunc = translateSample,
.getEnvFunc = getSampleFuncEnv,
.initFunc = sampleFunctionSetup,

View File

@ -451,6 +451,8 @@ static int32_t logicWindowCopy(const SWindowLogicNode* pSrc, SWindowLogicNode* p
static int32_t logicFillCopy(const SFillLogicNode* pSrc, SFillLogicNode* pDst) {
COPY_BASE_OBJECT_FIELD(node, logicNodeCopy);
COPY_SCALAR_FIELD(mode);
CLONE_NODE_LIST_FIELD(pFillExprs);
CLONE_NODE_LIST_FIELD(pNotFillExprs);
CLONE_NODE_FIELD(pWStartTs);
CLONE_NODE_FIELD(pValues);
COPY_OBJECT_FIELD(timeRange, sizeof(STimeWindow));

View File

@ -2086,9 +2086,10 @@ static int32_t jsonToPhysiIntervalNode(const SJson* pJson, void* pObj) {
}
static const char* jkFillPhysiPlanMode = "Mode";
static const char* jkFillPhysiPlanFillExprs = "FillExprs";
static const char* jkFillPhysiPlanNotFillExprs = "NotFillExprs";
static const char* jkFillPhysiPlanWStartTs = "WStartTs";
static const char* jkFillPhysiPlanValues = "Values";
static const char* jkFillPhysiPlanTargets = "Targets";
static const char* jkFillPhysiPlanStartTime = "StartTime";
static const char* jkFillPhysiPlanEndTime = "EndTime";
static const char* jkFillPhysiPlanInputTsOrder = "inputTsOrder";
@ -2100,15 +2101,18 @@ static int32_t physiFillNodeToJson(const void* pObj, SJson* pJson) {
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkFillPhysiPlanMode, pNode->mode);
}
if (TSDB_CODE_SUCCESS == code) {
code = nodeListToJson(pJson, jkFillPhysiPlanFillExprs, pNode->pFillExprs);
}
if (TSDB_CODE_SUCCESS == code) {
code = nodeListToJson(pJson, jkFillPhysiPlanNotFillExprs, pNode->pNotFillExprs);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkFillPhysiPlanWStartTs, nodeToJson, pNode->pWStartTs);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkFillPhysiPlanValues, nodeToJson, pNode->pValues);
}
if (TSDB_CODE_SUCCESS == code) {
code = nodeListToJson(pJson, jkFillPhysiPlanTargets, pNode->pTargets);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkFillPhysiPlanStartTime, pNode->timeRange.skey);
}
@ -2128,7 +2132,12 @@ static int32_t jsonToPhysiFillNode(const SJson* pJson, void* pObj) {
int32_t code = jsonToPhysicPlanNode(pJson, pObj);
if (TSDB_CODE_SUCCESS == code) {
tjsonGetNumberValue(pJson, jkFillPhysiPlanMode, pNode->mode, code);
;
}
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeList(pJson, jkFillPhysiPlanFillExprs, &pNode->pFillExprs);
}
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeList(pJson, jkFillPhysiPlanNotFillExprs, &pNode->pNotFillExprs);
}
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkFillPhysiPlanWStartTs, &pNode->pWStartTs);
@ -2136,9 +2145,6 @@ static int32_t jsonToPhysiFillNode(const SJson* pJson, void* pObj) {
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkFillPhysiPlanValues, &pNode->pValues);
}
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeList(pJson, jkFillPhysiPlanTargets, &pNode->pTargets);
}
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkFillPhysiPlanStartTime, &pNode->timeRange.skey);
}

View File

@ -346,6 +346,7 @@ void nodesWalkSelectStmt(SSelectStmt* pSelect, ESqlClause clause, FNodeWalker wa
if (NULL != pSelect->pWindow && QUERY_NODE_INTERVAL_WINDOW == nodeType(pSelect->pWindow)) {
nodesWalkExpr(((SIntervalWindowNode*)pSelect->pWindow)->pFill, walker, pContext);
}
case SQL_CLAUSE_FILL:
nodesWalkExprs(pSelect->pGroupByList, walker, pContext);
case SQL_CLAUSE_GROUP_BY:
nodesWalkExpr(pSelect->pHaving, walker, pContext);
@ -379,6 +380,7 @@ void nodesRewriteSelectStmt(SSelectStmt* pSelect, ESqlClause clause, FNodeRewrit
if (NULL != pSelect->pWindow && QUERY_NODE_INTERVAL_WINDOW == nodeType(pSelect->pWindow)) {
nodesRewriteExpr(&(((SIntervalWindowNode*)pSelect->pWindow)->pFill), rewriter, pContext);
}
case SQL_CLAUSE_FILL:
nodesRewriteExprs(pSelect->pGroupByList, rewriter, pContext);
case SQL_CLAUSE_GROUP_BY:
nodesRewriteExpr(&(pSelect->pHaving), rewriter, pContext);

View File

@ -931,9 +931,10 @@ void nodesDestroyNode(SNode* pNode) {
case QUERY_NODE_PHYSICAL_PLAN_FILL: {
SFillPhysiNode* pPhyNode = (SFillPhysiNode*)pNode;
destroyPhysiNode((SPhysiNode*)pPhyNode);
nodesDestroyList(pPhyNode->pFillExprs);
nodesDestroyList(pPhyNode->pNotFillExprs);
nodesDestroyNode(pPhyNode->pWStartTs);
nodesDestroyNode(pPhyNode->pValues);
nodesDestroyList(pPhyNode->pTargets);
break;
}
case QUERY_NODE_PHYSICAL_PLAN_MERGE_SESSION:

View File

@ -22,6 +22,7 @@ extern "C" {
#include "catalog.h"
#include "os.h"
#include "parser.h"
#include "query.h"
#define parserFatal(param, ...) qFatal("PARSER: " param, ##__VA_ARGS__)
@ -44,18 +45,37 @@ typedef struct SParseTablesMetaReq {
SHashObj* pTables;
} SParseTablesMetaReq;
typedef enum ECatalogReqType {
CATALOG_REQ_TYPE_META = 1,
CATALOG_REQ_TYPE_VGROUP,
CATALOG_REQ_TYPE_BOTH
} ECatalogReqType;
typedef struct SInsertTablesMetaReq {
char dbFName[TSDB_DB_FNAME_LEN];
SArray* pTableMetaPos;
SArray* pTableMetaReq; // element is SName
SArray* pTableVgroupPos;
SArray* pTableVgroupReq; // element is SName
} SInsertTablesMetaReq;
typedef struct SParseMetaCache {
SHashObj* pTableMeta; // key is tbFName, element is STableMeta*
SHashObj* pDbVgroup; // key is dbFName, element is SArray<SVgroupInfo>*
SHashObj* pTableVgroup; // key is tbFName, element is SVgroupInfo*
SHashObj* pDbCfg; // key is tbFName, element is SDbCfgInfo*
SHashObj* pDbInfo; // key is tbFName, element is SDbInfo*
SHashObj* pUserAuth; // key is SUserAuthInfo serialized string, element is bool indicating whether or not to pass
SHashObj* pUdf; // key is funcName, element is SFuncInfo*
SHashObj* pTableIndex; // key is tbFName, element is SArray<STableIndexInfo>*
SHashObj* pTableCfg; // key is tbFName, element is STableCfg*
SArray* pDnodes; // element is SEpSet
bool dnodeRequired;
SHashObj* pTableMeta; // key is tbFName, element is STableMeta*
SHashObj* pDbVgroup; // key is dbFName, element is SArray<SVgroupInfo>*
SHashObj* pTableVgroup; // key is tbFName, element is SVgroupInfo*
SHashObj* pDbCfg; // key is tbFName, element is SDbCfgInfo*
SHashObj* pDbInfo; // key is tbFName, element is SDbInfo*
SHashObj* pUserAuth; // key is SUserAuthInfo serialized string, element is bool indicating whether or not to pass
SHashObj* pUdf; // key is funcName, element is SFuncInfo*
SHashObj* pTableIndex; // key is tbFName, element is SArray<STableIndexInfo>*
SHashObj* pTableCfg; // key is tbFName, element is STableCfg*
SArray* pDnodes; // element is SEpSet
bool dnodeRequired;
SHashObj* pInsertTables; // key is dbName, element is SInsertTablesMetaReq*, for insert
const char* pUser;
const SArray* pTableMetaData; // pRes = STableMeta*
const SArray* pTableVgroupData; // pRes = SVgroupInfo*
int32_t sqlTableNum;
} SParseMetaCache;
int32_t generateSyntaxErrMsg(SMsgBuf* pBuf, int32_t errCode, ...);
@ -72,8 +92,9 @@ STableMeta* tableMetaDup(const STableMeta* pTableMeta);
int32_t trimString(const char* src, int32_t len, char* dst, int32_t dlen);
int32_t buildCatalogReq(const SParseMetaCache* pMetaCache, SCatalogReq* pCatalogReq);
int32_t putMetaDataToCache(const SCatalogReq* pCatalogReq, const SMetaData* pMetaData, SParseMetaCache* pMetaCache);
int32_t buildCatalogReq(SParseContext* pCxt, const SParseMetaCache* pMetaCache, SCatalogReq* pCatalogReq);
int32_t putMetaDataToCache(const SCatalogReq* pCatalogReq, const SMetaData* pMetaData, SParseMetaCache* pMetaCache,
bool insertValuesStmt);
int32_t reserveTableMetaInCache(int32_t acctId, const char* pDb, const char* pTable, SParseMetaCache* pMetaCache);
int32_t reserveTableMetaInCacheExt(const SName* pName, SParseMetaCache* pMetaCache);
int32_t reserveDbVgInfoInCache(int32_t acctId, const char* pDb, SParseMetaCache* pMetaCache);
@ -100,6 +121,12 @@ int32_t getUdfInfoFromCache(SParseMetaCache* pMetaCache, const char* pFunc, SFun
int32_t getTableIndexFromCache(SParseMetaCache* pMetaCache, const SName* pName, SArray** pIndexes);
int32_t getTableCfgFromCache(SParseMetaCache* pMetaCache, const SName* pName, STableCfg** pOutput);
int32_t getDnodeListFromCache(SParseMetaCache* pMetaCache, SArray** pDnodes);
int32_t reserveTableMetaInCacheForInsert(const SName* pName, ECatalogReqType reqType, int32_t tableNo,
SParseMetaCache* pMetaCache);
int32_t getTableMetaFromCacheForInsert(SArray* pTableMetaPos, SParseMetaCache* pMetaCache, int32_t tableNo,
STableMeta** pMeta);
int32_t getTableVgroupFromCacheForInsert(SArray* pTableVgroupPos, SParseMetaCache* pMetaCache, int32_t tableNo,
SVgroupInfo* pVgroup);
void destoryParseMetaCache(SParseMetaCache* pMetaCache, bool request);
#ifdef __cplusplus

View File

@ -506,7 +506,7 @@ stream_options(A) ::= stream_options(B) TRIGGER AT_ONCE.
stream_options(A) ::= stream_options(B) TRIGGER WINDOW_CLOSE. { ((SStreamOptions*)B)->triggerType = STREAM_TRIGGER_WINDOW_CLOSE; A = B; }
stream_options(A) ::= stream_options(B) TRIGGER MAX_DELAY duration_literal(C). { ((SStreamOptions*)B)->triggerType = STREAM_TRIGGER_MAX_DELAY; ((SStreamOptions*)B)->pDelay = releaseRawExprNode(pCxt, C); A = B; }
stream_options(A) ::= stream_options(B) WATERMARK duration_literal(C). { ((SStreamOptions*)B)->pWatermark = releaseRawExprNode(pCxt, C); A = B; }
stream_options(A) ::= stream_options(B) IGNORE EXPIRED. { ((SStreamOptions*)B)->ignoreExpired = true; A = B; }
stream_options(A) ::= stream_options(B) IGNORE EXPIRED NK_INTEGER(C). { ((SStreamOptions*)B)->ignoreExpired = taosStr2Int8(C.z, NULL, 10); A = B; }
/************************************************ kill connection/query ***********************************************/
cmd ::= KILL CONNECTION NK_INTEGER(A). { pCxt->pRootNode = createKillStmt(pCxt, QUERY_NODE_KILL_CONNECTION_STMT, &A); }

View File

@ -1628,6 +1628,7 @@ SNode* createStreamOptions(SAstCreateContext* pCxt) {
SStreamOptions* pOptions = (SStreamOptions*)nodesMakeNode(QUERY_NODE_STREAM_OPTIONS);
CHECK_OUT_OF_MEM(pOptions);
pOptions->triggerType = STREAM_TRIGGER_AT_ONCE;
pOptions->ignoreExpired = STREAM_DEFAULT_IGNORE_EXPIRED;
return (SNode*)pOptions;
}

View File

@ -73,6 +73,9 @@ typedef struct SInsertParseContext {
SStmtCallback* pStmtCb;
SParseMetaCache* pMetaCache;
char sTableName[TSDB_TABLE_NAME_LEN];
char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW];
int64_t memElapsed;
int64_t parRowElapsed;
} SInsertParseContext;
typedef struct SInsertParseSyntaxCxt {
@ -203,10 +206,11 @@ static int32_t checkAuth(SInsertParseContext* pCxt, char* pDbFname, bool* pPass)
return catalogChkAuth(pBasicCtx->pCatalog, &conn, pBasicCtx->pUser, pDbFname, AUTH_TYPE_WRITE, pPass);
}
static int32_t getTableSchema(SInsertParseContext* pCxt, SName* pTbName, bool isStb, STableMeta** pTableMeta) {
static int32_t getTableSchema(SInsertParseContext* pCxt, int32_t tbNo, SName* pTbName, bool isStb,
STableMeta** pTableMeta) {
SParseContext* pBasicCtx = pCxt->pComCxt;
if (pBasicCtx->async) {
return getTableMetaFromCache(pCxt->pMetaCache, pTbName, pTableMeta);
return getTableMetaFromCacheForInsert(pBasicCtx->pTableMetaPos, pCxt->pMetaCache, tbNo, pTableMeta);
}
SRequestConnInfo conn = {.pTrans = pBasicCtx->pTransporter,
.requestId = pBasicCtx->requestId,
@ -219,10 +223,10 @@ static int32_t getTableSchema(SInsertParseContext* pCxt, SName* pTbName, bool is
return catalogGetTableMeta(pBasicCtx->pCatalog, &conn, pTbName, pTableMeta);
}
static int32_t getTableVgroup(SInsertParseContext* pCxt, SName* pTbName, SVgroupInfo* pVg) {
static int32_t getTableVgroup(SInsertParseContext* pCxt, int32_t tbNo, SName* pTbName, SVgroupInfo* pVg) {
SParseContext* pBasicCtx = pCxt->pComCxt;
if (pBasicCtx->async) {
return getTableVgroupFromCache(pCxt->pMetaCache, pTbName, pVg);
return getTableVgroupFromCacheForInsert(pBasicCtx->pTableVgroupPos, pCxt->pMetaCache, tbNo, pVg);
}
SRequestConnInfo conn = {.pTrans = pBasicCtx->pTransporter,
.requestId = pBasicCtx->requestId,
@ -231,28 +235,22 @@ static int32_t getTableVgroup(SInsertParseContext* pCxt, SName* pTbName, SVgroup
return catalogGetTableHashVgroup(pBasicCtx->pCatalog, &conn, pTbName, pVg);
}
static int32_t getTableMetaImpl(SInsertParseContext* pCxt, SName* name, char* dbFname, bool isStb) {
bool pass = false;
CHECK_CODE(checkAuth(pCxt, dbFname, &pass));
if (!pass) {
return TSDB_CODE_PAR_PERMISSION_DENIED;
}
CHECK_CODE(getTableSchema(pCxt, name, isStb, &pCxt->pTableMeta));
static int32_t getTableMetaImpl(SInsertParseContext* pCxt, int32_t tbNo, SName* name, char* dbFname, bool isStb) {
CHECK_CODE(getTableSchema(pCxt, tbNo, name, isStb, &pCxt->pTableMeta));
if (!isStb) {
SVgroupInfo vg;
CHECK_CODE(getTableVgroup(pCxt, name, &vg));
CHECK_CODE(getTableVgroup(pCxt, tbNo, name, &vg));
CHECK_CODE(taosHashPut(pCxt->pVgroupsHashObj, (const char*)&vg.vgId, sizeof(vg.vgId), (char*)&vg, sizeof(vg)));
}
return TSDB_CODE_SUCCESS;
}
static int32_t getTableMeta(SInsertParseContext* pCxt, SName* name, char* dbFname) {
return getTableMetaImpl(pCxt, name, dbFname, false);
static int32_t getTableMeta(SInsertParseContext* pCxt, int32_t tbNo, SName* name, char* dbFname) {
return getTableMetaImpl(pCxt, tbNo, name, dbFname, false);
}
static int32_t getSTableMeta(SInsertParseContext* pCxt, SName* name, char* dbFname) {
return getTableMetaImpl(pCxt, name, dbFname, true);
static int32_t getSTableMeta(SInsertParseContext* pCxt, int32_t tbNo, SName* name, char* dbFname) {
return getTableMetaImpl(pCxt, tbNo, name, dbFname, true);
}
static int32_t getDBCfg(SInsertParseContext* pCxt, const char* pDbFName, SDbCfgInfo* pInfo) {
@ -1028,13 +1026,13 @@ end:
return code;
}
static int32_t storeTableMeta(SInsertParseContext* pCxt, SHashObj* pHash, SName* pTableName, const char* pName,
int32_t len, STableMeta* pMeta) {
static int32_t storeTableMeta(SInsertParseContext* pCxt, SHashObj* pHash, int32_t tbNo, SName* pTableName,
const char* pName, int32_t len, STableMeta* pMeta) {
SVgroupInfo vg;
CHECK_CODE(getTableVgroup(pCxt, pTableName, &vg));
CHECK_CODE(getTableVgroup(pCxt, tbNo, pTableName, &vg));
CHECK_CODE(taosHashPut(pCxt->pVgroupsHashObj, (const char*)&vg.vgId, sizeof(vg.vgId), (char*)&vg, sizeof(vg)));
pMeta->uid = 0;
pMeta->uid = tbNo;
pMeta->vgId = vg.vgId;
pMeta->tableType = TSDB_CHILD_TABLE;
@ -1084,7 +1082,7 @@ static int32_t ignoreAutoCreateTableClause(SInsertParseContext* pCxt) {
}
// pSql -> stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)
static int32_t parseUsingClause(SInsertParseContext* pCxt, SName* name, char* tbFName) {
static int32_t parseUsingClause(SInsertParseContext* pCxt, int32_t tbNo, SName* name, char* tbFName) {
int32_t len = strlen(tbFName);
STableMeta** pMeta = taosHashGet(pCxt->pSubTableHashObj, tbFName, len);
if (NULL != pMeta) {
@ -1102,11 +1100,11 @@ static int32_t parseUsingClause(SInsertParseContext* pCxt, SName* name, char* tb
tNameGetFullDbName(&sname, dbFName);
strcpy(pCxt->sTableName, sname.tname);
CHECK_CODE(getSTableMeta(pCxt, &sname, dbFName));
CHECK_CODE(getSTableMeta(pCxt, tbNo, &sname, dbFName));
if (TSDB_SUPER_TABLE != pCxt->pTableMeta->tableType) {
return buildInvalidOperationMsg(&pCxt->msg, "create table only from super table is allowed");
}
CHECK_CODE(storeTableMeta(pCxt, pCxt->pSubTableHashObj, name, tbFName, len, pCxt->pTableMeta));
CHECK_CODE(storeTableMeta(pCxt, pCxt->pSubTableHashObj, tbNo, name, tbFName, len, pCxt->pTableMeta));
SSchema* pTagsSchema = getTableTagSchema(pCxt->pTableMeta);
setBoundColumnInfo(&pCxt->tags, pTagsSchema, getNumOfTags(pCxt->pTableMeta));
@ -1195,7 +1193,7 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
tdSRowEnd(pBuilder);
*gotRow = true;
#ifdef TD_DEBUG_PRINT_ROW
STSchema* pSTSchema = tdGetSTSChemaFromSSChema(schema, spd->numOfCols, 1);
tdSRowPrint(row, pSTSchema, __func__);
@ -1214,7 +1212,7 @@ static int32_t parseValues(SInsertParseContext* pCxt, STableDataBlocks* pDataBlo
CHECK_CODE(initRowBuilder(&pDataBlock->rowBuilder, pDataBlock->pTableMeta->sversion, &pDataBlock->boundColumnInfo));
(*numOfRows) = 0;
char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW] = {0}; // used for deleting Escape character: \\, \', \"
// char tmpTokenBuf[TSDB_MAX_BYTES_PER_ROW] = {0}; // used for deleting Escape character: \\, \', \"
SToken sToken;
while (1) {
int32_t index = 0;
@ -1232,7 +1230,7 @@ static int32_t parseValues(SInsertParseContext* pCxt, STableDataBlocks* pDataBlo
}
bool gotRow = false;
CHECK_CODE(parseOneRow(pCxt, pDataBlock, tinfo.precision, &gotRow, tmpTokenBuf));
CHECK_CODE(parseOneRow(pCxt, pDataBlock, tinfo.precision, &gotRow, pCxt->tmpTokenBuf));
if (gotRow) {
pDataBlock->size += extendedRowSize; // len;
}
@ -1347,7 +1345,9 @@ static int32_t parseDataFromFile(SInsertParseContext* pCxt, SToken filePath, STa
}
static void destroyInsertParseContextForTable(SInsertParseContext* pCxt) {
taosMemoryFreeClear(pCxt->pTableMeta);
if (!pCxt->pComCxt->async) {
taosMemoryFreeClear(pCxt->pTableMeta);
}
destroyBoundColumnInfo(&pCxt->tags);
tdDestroySVCreateTbReq(&pCxt->createTblReq);
}
@ -1365,6 +1365,20 @@ static void destroyInsertParseContext(SInsertParseContext* pCxt) {
destroyBlockArrayList(pCxt->pVgDataBlocks);
}
static int32_t parseTableName(SInsertParseContext* pCxt, SToken* pTbnameToken, SName* pName, char* pDbFName,
char* pTbFName) {
int32_t code = createSName(pName, pTbnameToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg);
if (TSDB_CODE_SUCCESS == code) {
tNameExtractFullName(pName, pTbFName);
code = taosHashPut(pCxt->pTableNameHashObj, pTbFName, strlen(pTbFName), pName, sizeof(SName));
}
if (TSDB_CODE_SUCCESS == code) {
tNameGetFullDbName(pName, pDbFName);
code = taosHashPut(pCxt->pDbFNameHashObj, pDbFName, strlen(pDbFName), pDbFName, TSDB_DB_FNAME_LEN);
}
return code;
}
// tb_name
// [USING stb_name [(tag1_name, ...)] TAGS (tag1_value, ...)]
// [(field1_name, ...)]
@ -1372,7 +1386,9 @@ static void destroyInsertParseContext(SInsertParseContext* pCxt) {
// [...];
static int32_t parseInsertBody(SInsertParseContext* pCxt) {
int32_t tbNum = 0;
SName name;
char tbFName[TSDB_TABLE_FNAME_LEN];
char dbFName[TSDB_DB_FNAME_LEN];
bool autoCreateTbl = false;
// for each table
@ -1415,20 +1431,15 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt) {
SToken tbnameToken = sToken;
NEXT_TOKEN(pCxt->pSql, sToken);
SName name;
CHECK_CODE(createSName(&name, &tbnameToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
tNameExtractFullName(&name, tbFName);
CHECK_CODE(taosHashPut(pCxt->pTableNameHashObj, tbFName, strlen(tbFName), &name, sizeof(SName)));
char dbFName[TSDB_DB_FNAME_LEN];
tNameGetFullDbName(&name, dbFName);
CHECK_CODE(taosHashPut(pCxt->pDbFNameHashObj, dbFName, strlen(dbFName), dbFName, sizeof(dbFName)));
if (!pCxt->pComCxt->async || TK_USING == sToken.type) {
CHECK_CODE(parseTableName(pCxt, &tbnameToken, &name, dbFName, tbFName));
}
bool existedUsing = false;
// USING clause
if (TK_USING == sToken.type) {
existedUsing = true;
CHECK_CODE(parseUsingClause(pCxt, &name, tbFName));
CHECK_CODE(parseUsingClause(pCxt, tbNum, &name, tbFName));
NEXT_TOKEN(pCxt->pSql, sToken);
autoCreateTbl = true;
}
@ -1438,22 +1449,31 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt) {
// pSql -> field1_name, ...)
pBoundColsStart = pCxt->pSql;
CHECK_CODE(ignoreBoundColumns(pCxt));
// CHECK_CODE(parseBoundColumns(pCxt, &dataBuf->boundColumnInfo, getTableColumnSchema(pCxt->pTableMeta)));
NEXT_TOKEN(pCxt->pSql, sToken);
}
if (TK_USING == sToken.type) {
CHECK_CODE(parseUsingClause(pCxt, &name, tbFName));
if (pCxt->pComCxt->async) {
CHECK_CODE(parseTableName(pCxt, &tbnameToken, &name, dbFName, tbFName));
}
CHECK_CODE(parseUsingClause(pCxt, tbNum, &name, tbFName));
NEXT_TOKEN(pCxt->pSql, sToken);
autoCreateTbl = true;
} else if (!existedUsing) {
CHECK_CODE(getTableMeta(pCxt, &name, dbFName));
CHECK_CODE(getTableMeta(pCxt, tbNum, &name, dbFName));
}
STableDataBlocks* dataBuf = NULL;
CHECK_CODE(getDataBlockFromList(pCxt->pTableBlockHashObj, tbFName, strlen(tbFName), TSDB_DEFAULT_PAYLOAD_SIZE,
sizeof(SSubmitBlk), getTableInfo(pCxt->pTableMeta).rowSize, pCxt->pTableMeta,
&dataBuf, NULL, &pCxt->createTblReq));
if (pCxt->pComCxt->async) {
CHECK_CODE(getDataBlockFromList(pCxt->pTableBlockHashObj, &pCxt->pTableMeta->uid, sizeof(pCxt->pTableMeta->uid),
TSDB_DEFAULT_PAYLOAD_SIZE, sizeof(SSubmitBlk),
getTableInfo(pCxt->pTableMeta).rowSize, pCxt->pTableMeta, &dataBuf, NULL,
&pCxt->createTblReq));
} else {
CHECK_CODE(getDataBlockFromList(pCxt->pTableBlockHashObj, tbFName, strlen(tbFName), TSDB_DEFAULT_PAYLOAD_SIZE,
sizeof(SSubmitBlk), getTableInfo(pCxt->pTableMeta).rowSize, pCxt->pTableMeta,
&dataBuf, NULL, &pCxt->createTblReq));
}
if (NULL != pBoundColsStart) {
char* pCurrPos = pCxt->pSql;
@ -1532,7 +1552,9 @@ int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery, SParseMetaCache
.totalNum = 0,
.pOutput = (SVnodeModifOpStmt*)nodesMakeNode(QUERY_NODE_VNODE_MODIF_STMT),
.pStmtCb = pContext->pStmtCb,
.pMetaCache = pMetaCache};
.pMetaCache = pMetaCache,
.memElapsed = 0,
.parRowElapsed = 0};
if (pContext->pStmtCb && *pQuery) {
(*pContext->pStmtCb->getExecInfoFn)(pContext->pStmtCb->pStmt, &context.pVgroupsHashObj,
@ -1547,7 +1569,7 @@ int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery, SParseMetaCache
} else {
context.pVgroupsHashObj = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), true, HASH_NO_LOCK);
context.pTableBlockHashObj =
taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
}
if (NULL == context.pVgroupsHashObj || NULL == context.pTableBlockHashObj || NULL == context.pSubTableHashObj ||
@ -1656,24 +1678,24 @@ static int32_t skipUsingClause(SInsertParseSyntaxCxt* pCxt) {
return TSDB_CODE_SUCCESS;
}
static int32_t collectTableMetaKey(SInsertParseSyntaxCxt* pCxt, SToken* pTbToken) {
static int32_t collectTableMetaKey(SInsertParseSyntaxCxt* pCxt, bool isStable, int32_t tableNo, SToken* pTbToken) {
SName name;
CHECK_CODE(createSName(&name, pTbToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
CHECK_CODE(reserveUserAuthInCacheExt(pCxt->pComCxt->pUser, &name, AUTH_TYPE_WRITE, pCxt->pMetaCache));
CHECK_CODE(reserveTableMetaInCacheExt(&name, pCxt->pMetaCache));
CHECK_CODE(reserveTableVgroupInCacheExt(&name, pCxt->pMetaCache));
CHECK_CODE(reserveTableMetaInCacheForInsert(&name, isStable ? CATALOG_REQ_TYPE_META : CATALOG_REQ_TYPE_BOTH, tableNo,
pCxt->pMetaCache));
return TSDB_CODE_SUCCESS;
}
static int32_t collectAutoCreateTableMetaKey(SInsertParseSyntaxCxt* pCxt, SToken* pTbToken) {
static int32_t collectAutoCreateTableMetaKey(SInsertParseSyntaxCxt* pCxt, int32_t tableNo, SToken* pTbToken) {
SName name;
CHECK_CODE(createSName(&name, pTbToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
CHECK_CODE(reserveTableVgroupInCacheExt(&name, pCxt->pMetaCache));
CHECK_CODE(reserveTableMetaInCacheForInsert(&name, CATALOG_REQ_TYPE_VGROUP, tableNo, pCxt->pMetaCache));
return TSDB_CODE_SUCCESS;
}
static int32_t parseInsertBodySyntax(SInsertParseSyntaxCxt* pCxt) {
bool hasData = false;
bool hasData = false;
int32_t tableNo = 0;
// for each table
while (1) {
SToken sToken;
@ -1702,9 +1724,9 @@ static int32_t parseInsertBodySyntax(SInsertParseSyntaxCxt* pCxt) {
// USING clause
if (TK_USING == sToken.type) {
existedUsing = true;
CHECK_CODE(collectAutoCreateTableMetaKey(pCxt, &tbnameToken));
CHECK_CODE(collectAutoCreateTableMetaKey(pCxt, tableNo, &tbnameToken));
NEXT_TOKEN(pCxt->pSql, sToken);
CHECK_CODE(collectTableMetaKey(pCxt, &sToken));
CHECK_CODE(collectTableMetaKey(pCxt, true, tableNo, &sToken));
CHECK_CODE(skipUsingClause(pCxt));
NEXT_TOKEN(pCxt->pSql, sToken);
}
@ -1717,15 +1739,17 @@ static int32_t parseInsertBodySyntax(SInsertParseSyntaxCxt* pCxt) {
if (TK_USING == sToken.type && !existedUsing) {
existedUsing = true;
CHECK_CODE(collectAutoCreateTableMetaKey(pCxt, &tbnameToken));
CHECK_CODE(collectAutoCreateTableMetaKey(pCxt, tableNo, &tbnameToken));
NEXT_TOKEN(pCxt->pSql, sToken);
CHECK_CODE(collectTableMetaKey(pCxt, &sToken));
CHECK_CODE(collectTableMetaKey(pCxt, true, tableNo, &sToken));
CHECK_CODE(skipUsingClause(pCxt));
NEXT_TOKEN(pCxt->pSql, sToken);
} else {
CHECK_CODE(collectTableMetaKey(pCxt, &tbnameToken));
} else if (!existedUsing) {
CHECK_CODE(collectTableMetaKey(pCxt, false, tableNo, &tbnameToken));
}
++tableNo;
if (TK_VALUES == sToken.type) {
// pSql -> (field1_value, ...) [(field1_value2, ...) ...]
CHECK_CODE(skipValuesClause(pCxt));

View File

@ -1399,7 +1399,7 @@ static int32_t translateTimelineFunc(STranslateContext* pCxt, SFunctionNode* pFu
"%s function must be used in select statements", pFunc->functionName);
}
SSelectStmt* pSelect = (SSelectStmt*)pCxt->pCurrStmt;
if (QUERY_NODE_TEMP_TABLE == nodeType(pSelect->pFromTable) &&
if (NULL != pSelect->pFromTable && QUERY_NODE_TEMP_TABLE == nodeType(pSelect->pFromTable) &&
!isTimeLineQuery(((STempTableNode*)pSelect->pFromTable)->pSubquery)) {
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
"%s function requires valid time series input", pFunc->functionName);
@ -2037,7 +2037,14 @@ static int32_t setVnodeSysTableVgroupList(STranslateContext* pCxt, SName* pName,
code = getDBVgInfoImpl(pCxt, pName, &vgroupList);
}
if (TSDB_CODE_SUCCESS == code && 0 == strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_TABLES)) {
if (TSDB_CODE_SUCCESS == code && 0 == strcmp(pRealTable->table.dbName, TSDB_INFORMATION_SCHEMA_DB) &&
0 == strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_TAGS) && isSelectStmt(pCxt->pCurrStmt) &&
0 == taosArrayGetSize(vgroupList)) {
((SSelectStmt*)pCxt->pCurrStmt)->isEmptyResult = true;
}
if (TSDB_CODE_SUCCESS == code && 0 == strcmp(pRealTable->table.dbName, TSDB_INFORMATION_SCHEMA_DB) &&
0 == strcmp(pRealTable->table.tableName, TSDB_INS_TABLE_TABLES)) {
code = addMnodeToVgroupList(&pCxt->pParseCxt->mgmtEpSet, &vgroupList);
}

View File

@ -476,9 +476,11 @@ static int32_t buildDbReq(SHashObj* pDbsHash, SArray** pDbs) {
static int32_t buildTableReqFromDb(SHashObj* pDbsHash, SArray** pDbs) {
if (NULL != pDbsHash) {
*pDbs = taosArrayInit(taosHashGetSize(pDbsHash), sizeof(STablesReq));
if (NULL == *pDbs) {
return TSDB_CODE_OUT_OF_MEMORY;
*pDbs = taosArrayInit(taosHashGetSize(pDbsHash), sizeof(STablesReq));
if (NULL == *pDbs) {
return TSDB_CODE_OUT_OF_MEMORY;
}
}
SParseTablesMetaReq* p = taosHashIterate(pDbsHash, NULL);
while (NULL != p) {
@ -530,7 +532,62 @@ static int32_t buildUdfReq(SHashObj* pUdfHash, SArray** pUdf) {
return TSDB_CODE_SUCCESS;
}
int32_t buildCatalogReq(const SParseMetaCache* pMetaCache, SCatalogReq* pCatalogReq) {
static int32_t buildCatalogReqForInsert(SParseContext* pCxt, const SParseMetaCache* pMetaCache,
SCatalogReq* pCatalogReq) {
int32_t ndbs = taosHashGetSize(pMetaCache->pInsertTables);
pCatalogReq->pTableMeta = taosArrayInit(ndbs, sizeof(STablesReq));
if (NULL == pCatalogReq->pTableMeta) {
return TSDB_CODE_OUT_OF_MEMORY;
}
pCatalogReq->pTableHash = taosArrayInit(ndbs, sizeof(STablesReq));
if (NULL == pCatalogReq->pTableHash) {
return TSDB_CODE_OUT_OF_MEMORY;
}
pCatalogReq->pUser = taosArrayInit(ndbs, sizeof(SUserAuthInfo));
if (NULL == pCatalogReq->pUser) {
return TSDB_CODE_OUT_OF_MEMORY;
}
pCxt->pTableMetaPos = taosArrayInit(pMetaCache->sqlTableNum, sizeof(int32_t));
pCxt->pTableVgroupPos = taosArrayInit(pMetaCache->sqlTableNum, sizeof(int32_t));
int32_t metaReqNo = 0;
int32_t vgroupReqNo = 0;
SInsertTablesMetaReq* p = taosHashIterate(pMetaCache->pInsertTables, NULL);
while (NULL != p) {
STablesReq req = {0};
strcpy(req.dbFName, p->dbFName);
TSWAP(req.pTables, p->pTableMetaReq);
taosArrayPush(pCatalogReq->pTableMeta, &req);
req.pTables = NULL;
TSWAP(req.pTables, p->pTableVgroupReq);
taosArrayPush(pCatalogReq->pTableHash, &req);
int32_t ntables = taosArrayGetSize(p->pTableMetaPos);
for (int32_t i = 0; i < ntables; ++i) {
taosArrayInsert(pCxt->pTableMetaPos, *(int32_t*)taosArrayGet(p->pTableMetaPos, i), &metaReqNo);
++metaReqNo;
}
ntables = taosArrayGetSize(p->pTableVgroupPos);
for (int32_t i = 0; i < ntables; ++i) {
taosArrayInsert(pCxt->pTableVgroupPos, *(int32_t*)taosArrayGet(p->pTableVgroupPos, i), &vgroupReqNo);
++vgroupReqNo;
}
SUserAuthInfo auth = {0};
strcpy(auth.user, pCxt->pUser);
strcpy(auth.dbFName, p->dbFName);
auth.type = AUTH_TYPE_WRITE;
taosArrayPush(pCatalogReq->pUser, &auth);
p = taosHashIterate(pMetaCache->pInsertTables, p);
}
return TSDB_CODE_SUCCESS;
}
int32_t buildCatalogReqForQuery(const SParseMetaCache* pMetaCache, SCatalogReq* pCatalogReq) {
int32_t code = buildTableReqFromDb(pMetaCache->pTableMeta, &pCatalogReq->pTableMeta);
if (TSDB_CODE_SUCCESS == code) {
code = buildDbReq(pMetaCache->pDbVgroup, &pCatalogReq->pDbVgroup);
@ -560,6 +617,13 @@ int32_t buildCatalogReq(const SParseMetaCache* pMetaCache, SCatalogReq* pCatalog
return code;
}
int32_t buildCatalogReq(SParseContext* pCxt, const SParseMetaCache* pMetaCache, SCatalogReq* pCatalogReq) {
if (NULL != pMetaCache->pInsertTables) {
return buildCatalogReqForInsert(pCxt, pMetaCache, pCatalogReq);
}
return buildCatalogReqForQuery(pMetaCache, pCatalogReq);
}
static int32_t putMetaDataToHash(const char* pKey, int32_t len, const SArray* pData, int32_t index, SHashObj** pHash) {
if (NULL == *pHash) {
*pHash = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_NO_LOCK);
@ -647,7 +711,8 @@ static int32_t putUdfToCache(const SArray* pUdfReq, const SArray* pUdfData, SHas
return TSDB_CODE_SUCCESS;
}
int32_t putMetaDataToCache(const SCatalogReq* pCatalogReq, const SMetaData* pMetaData, SParseMetaCache* pMetaCache) {
int32_t putMetaDataToCacheForQuery(const SCatalogReq* pCatalogReq, const SMetaData* pMetaData,
SParseMetaCache* pMetaCache) {
int32_t code = putDbTableDataToCache(pCatalogReq->pTableMeta, pMetaData->pTableMeta, &pMetaCache->pTableMeta);
if (TSDB_CODE_SUCCESS == code) {
code = putDbDataToCache(pCatalogReq->pDbVgroup, pMetaData->pDbVgroup, &pMetaCache->pDbVgroup);
@ -677,6 +742,30 @@ int32_t putMetaDataToCache(const SCatalogReq* pCatalogReq, const SMetaData* pMet
return code;
}
int32_t putMetaDataToCacheForInsert(const SMetaData* pMetaData, SParseMetaCache* pMetaCache) {
int32_t ndbs = taosArrayGetSize(pMetaData->pUser);
for (int32_t i = 0; i < ndbs; ++i) {
SMetaRes* pRes = taosArrayGet(pMetaData->pUser, i);
if (TSDB_CODE_SUCCESS != pRes->code) {
return pRes->code;
}
if (!(*(bool*)pRes->pRes)) {
return TSDB_CODE_PAR_PERMISSION_DENIED;
}
}
pMetaCache->pTableMetaData = pMetaData->pTableMeta;
pMetaCache->pTableVgroupData = pMetaData->pTableHash;
return TSDB_CODE_SUCCESS;
}
int32_t putMetaDataToCache(const SCatalogReq* pCatalogReq, const SMetaData* pMetaData, SParseMetaCache* pMetaCache,
bool insertValuesStmt) {
if (insertValuesStmt) {
return putMetaDataToCacheForInsert(pMetaData, pMetaCache);
}
return putMetaDataToCacheForQuery(pCatalogReq, pMetaData, pMetaCache);
}
static int32_t reserveTableReqInCacheImpl(const char* pTbFName, int32_t len, SHashObj** pTables) {
if (NULL == *pTables) {
*pTables = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
@ -977,6 +1066,82 @@ int32_t getDnodeListFromCache(SParseMetaCache* pMetaCache, SArray** pDnodes) {
return TSDB_CODE_SUCCESS;
}
static int32_t reserveTableReqInCacheForInsert(const SName* pName, ECatalogReqType reqType, int32_t tableNo,
SInsertTablesMetaReq* pReq) {
switch (reqType) {
case CATALOG_REQ_TYPE_META:
taosArrayPush(pReq->pTableMetaReq, pName);
taosArrayPush(pReq->pTableMetaPos, &tableNo);
break;
case CATALOG_REQ_TYPE_VGROUP:
taosArrayPush(pReq->pTableVgroupReq, pName);
taosArrayPush(pReq->pTableVgroupPos, &tableNo);
break;
case CATALOG_REQ_TYPE_BOTH:
taosArrayPush(pReq->pTableMetaReq, pName);
taosArrayPush(pReq->pTableMetaPos, &tableNo);
taosArrayPush(pReq->pTableVgroupReq, pName);
taosArrayPush(pReq->pTableVgroupPos, &tableNo);
break;
default:
break;
}
return TSDB_CODE_SUCCESS;
}
static int32_t reserveTableReqInDbCacheForInsert(const SName* pName, ECatalogReqType reqType, int32_t tableNo,
SHashObj* pDbs) {
SInsertTablesMetaReq req = {.pTableMetaReq = taosArrayInit(4, sizeof(SName)),
.pTableMetaPos = taosArrayInit(4, sizeof(int32_t)),
.pTableVgroupReq = taosArrayInit(4, sizeof(SName)),
.pTableVgroupPos = taosArrayInit(4, sizeof(int32_t))};
tNameGetFullDbName(pName, req.dbFName);
int32_t code = reserveTableReqInCacheForInsert(pName, reqType, tableNo, &req);
if (TSDB_CODE_SUCCESS == code) {
code = taosHashPut(pDbs, pName->dbname, strlen(pName->dbname), &req, sizeof(SInsertTablesMetaReq));
}
return code;
}
int32_t reserveTableMetaInCacheForInsert(const SName* pName, ECatalogReqType reqType, int32_t tableNo,
SParseMetaCache* pMetaCache) {
if (NULL == pMetaCache->pInsertTables) {
pMetaCache->pInsertTables = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
if (NULL == pMetaCache->pInsertTables) {
return TSDB_CODE_OUT_OF_MEMORY;
}
}
pMetaCache->sqlTableNum = tableNo;
SInsertTablesMetaReq* pReq = taosHashGet(pMetaCache->pInsertTables, pName->dbname, strlen(pName->dbname));
if (NULL == pReq) {
return reserveTableReqInDbCacheForInsert(pName, reqType, tableNo, pMetaCache->pInsertTables);
}
return reserveTableReqInCacheForInsert(pName, reqType, tableNo, pReq);
}
int32_t getTableMetaFromCacheForInsert(SArray* pTableMetaPos, SParseMetaCache* pMetaCache, int32_t tableNo,
STableMeta** pMeta) {
int32_t reqIndex = *(int32_t*)taosArrayGet(pTableMetaPos, tableNo);
SMetaRes* pRes = taosArrayGet(pMetaCache->pTableMetaData, reqIndex);
if (TSDB_CODE_SUCCESS == pRes->code) {
*pMeta = pRes->pRes;
if (NULL == *pMeta) {
return TSDB_CODE_OUT_OF_MEMORY;
}
}
return pRes->code;
}
int32_t getTableVgroupFromCacheForInsert(SArray* pTableVgroupPos, SParseMetaCache* pMetaCache, int32_t tableNo,
SVgroupInfo* pVgroup) {
int32_t reqIndex = *(int32_t*)taosArrayGet(pTableVgroupPos, tableNo);
SMetaRes* pRes = taosArrayGet(pMetaCache->pTableVgroupData, reqIndex);
if (TSDB_CODE_SUCCESS == pRes->code) {
memcpy(pVgroup, pRes->pRes, sizeof(SVgroupInfo));
}
return pRes->code;
}
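/*
 * A minimal retrieval sketch, assuming pTableMetaPos/pTableVgroupPos are the
 * position arrays filled while reserving the tables of this statement; the
 * combined helper is illustrative only.
 */
static int32_t getInsertTargetFromCache(SArray* pTableMetaPos, SArray* pTableVgroupPos, SParseMetaCache* pMetaCache,
                                        int32_t tableNo, STableMeta** pMeta, SVgroupInfo* pVgroup) {
  // Both lookups translate the statement-level table number into the index of the
  // SMetaRes entry that the catalog filled in for this request batch.
  int32_t code = getTableMetaFromCacheForInsert(pTableMetaPos, pMetaCache, tableNo, pMeta);
  if (TSDB_CODE_SUCCESS == code) {
    code = getTableVgroupFromCacheForInsert(pTableVgroupPos, pMetaCache, tableNo, pVgroup);
  }
  return code;
}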
void destoryParseTablesMetaReqHash(SHashObj* pHash) {
SParseTablesMetaReq* p = taosHashIterate(pHash, NULL);
while (NULL != p) {

View File

@ -185,7 +185,7 @@ int32_t qParseSqlSyntax(SParseContext* pCxt, SQuery** pQuery, struct SCatalogReq
code = parseSqlSyntax(pCxt, pQuery, &metaCache);
}
if (TSDB_CODE_SUCCESS == code) {
code = buildCatalogReq(&metaCache, pCatalogReq);
code = buildCatalogReq(pCxt, &metaCache, pCatalogReq);
}
destoryParseMetaCache(&metaCache, true);
terrno = code;
@ -195,7 +195,7 @@ int32_t qParseSqlSyntax(SParseContext* pCxt, SQuery** pQuery, struct SCatalogReq
int32_t qAnalyseSqlSemantic(SParseContext* pCxt, const struct SCatalogReq* pCatalogReq,
const struct SMetaData* pMetaData, SQuery* pQuery) {
SParseMetaCache metaCache = {0};
int32_t code = putMetaDataToCache(pCatalogReq, pMetaData, &metaCache);
int32_t code = putMetaDataToCache(pCatalogReq, pMetaData, &metaCache, NULL == pQuery->pRoot);
if (TSDB_CODE_SUCCESS == code) {
if (NULL == pQuery->pRoot) {
code = parseInsertSql(pCxt, &pQuery, &metaCache);

View File

@ -139,17 +139,17 @@ typedef union {
#define ParseCTX_FETCH
#define ParseCTX_STORE
#define YYFALLBACK 1
#define YYNSTATE 666
#define YYNSTATE 667
#define YYNRULE 491
#define YYNTOKEN 305
#define YY_MAX_SHIFT 665
#define YY_MIN_SHIFTREDUCE 972
#define YY_MAX_SHIFTREDUCE 1462
#define YY_ERROR_ACTION 1463
#define YY_ACCEPT_ACTION 1464
#define YY_NO_ACTION 1465
#define YY_MIN_REDUCE 1466
#define YY_MAX_REDUCE 1956
#define YY_MAX_SHIFT 666
#define YY_MIN_SHIFTREDUCE 973
#define YY_MAX_SHIFTREDUCE 1463
#define YY_ERROR_ACTION 1464
#define YY_ACCEPT_ACTION 1465
#define YY_NO_ACTION 1466
#define YY_MIN_REDUCE 1467
#define YY_MAX_REDUCE 1957
/************* End control #defines *******************************************/
#define YY_NLOOKAHEAD ((int)(sizeof(yy_lookahead)/sizeof(yy_lookahead[0])))
@ -218,261 +218,261 @@ typedef union {
*********** Begin parsing tables **********************************************/
#define YY_ACTTAB_COUNT (2548)
static const YYACTIONTYPE yy_action[] = {
/* 0 */ 525, 30, 261, 525, 548, 433, 525, 434, 1501, 11,
/* 10 */ 10, 117, 39, 37, 55, 1652, 1653, 117, 471, 378,
/* 20 */ 339, 1467, 1263, 1005, 476, 1022, 1289, 1021, 1606, 1790,
/* 30 */ 1597, 1606, 127, 1339, 1606, 1261, 441, 551, 434, 1501,
/* 40 */ 469, 1774, 107, 1778, 1289, 106, 105, 104, 103, 102,
/* 50 */ 101, 100, 99, 98, 1774, 1023, 1334, 1808, 150, 64,
/* 60 */ 1934, 14, 1566, 1009, 1010, 552, 1770, 1776, 1269, 450,
/* 70 */ 1760, 125, 576, 165, 39, 37, 1402, 1931, 570, 1770,
/* 80 */ 1776, 328, 339, 1528, 1263, 550, 161, 1876, 1877, 1,
/* 90 */ 1881, 570, 1658, 479, 478, 1339, 1822, 1261, 1375, 327,
/* 100 */ 95, 1791, 579, 1793, 1794, 575, 496, 570, 1656, 344,
/* 110 */ 1868, 662, 1651, 1653, 330, 1864, 160, 513, 1334, 494,
/* 120 */ 1934, 492, 1288, 14, 325, 1341, 1342, 1704, 164, 542,
/* 130 */ 1269, 1160, 1161, 1933, 33, 32, 1894, 1931, 40, 38,
/* 140 */ 36, 35, 34, 148, 63, 1478, 639, 638, 637, 636,
/* 150 */ 349, 2, 635, 634, 128, 629, 628, 627, 626, 625,
/* 160 */ 624, 623, 139, 619, 618, 617, 348, 347, 614, 613,
/* 170 */ 1264, 107, 1262, 662, 106, 105, 104, 103, 102, 101,
/* 180 */ 100, 99, 98, 1808, 36, 35, 34, 1341, 1342, 224,
/* 190 */ 225, 541, 384, 1267, 1268, 612, 1316, 1317, 1319, 1320,
/* 200 */ 1321, 1322, 1323, 1324, 572, 568, 1332, 1333, 1335, 1336,
/* 210 */ 1337, 1338, 1340, 1343, 1466, 1287, 1433, 33, 32, 482,
/* 220 */ 481, 40, 38, 36, 35, 34, 123, 168, 540, 303,
/* 230 */ 1464, 223, 1264, 84, 1262, 1263, 477, 480, 116, 115,
/* 240 */ 114, 113, 112, 111, 110, 109, 108, 305, 1261, 1022,
/* 250 */ 515, 1021, 22, 174, 1599, 1267, 1268, 1489, 1316, 1317,
/* 260 */ 1319, 1320, 1321, 1322, 1323, 1324, 572, 568, 1332, 1333,
/* 270 */ 1335, 1336, 1337, 1338, 1340, 1343, 39, 37, 1488, 1023,
/* 280 */ 537, 1269, 168, 525, 339, 71, 1263, 1487, 70, 354,
/* 290 */ 1243, 1244, 1707, 1790, 170, 211, 512, 1339, 1760, 1261,
/* 300 */ 1118, 601, 600, 599, 1122, 598, 1124, 1125, 597, 1127,
/* 310 */ 594, 1606, 1133, 591, 1135, 1136, 588, 585, 1934, 1760,
/* 320 */ 1334, 1808, 1583, 1269, 662, 14, 1658, 1934, 1760, 552,
/* 330 */ 1934, 166, 1269, 343, 1760, 1931, 576, 1934, 39, 37,
/* 340 */ 1932, 487, 1656, 165, 1931, 551, 339, 1931, 1263, 548,
/* 350 */ 165, 76, 305, 2, 1931, 515, 497, 543, 538, 1339,
/* 360 */ 1822, 1261, 1697, 159, 95, 1791, 579, 1793, 1794, 575,
/* 370 */ 210, 570, 63, 173, 1868, 662, 1645, 127, 330, 1864,
/* 380 */ 160, 551, 1334, 1264, 490, 1262, 419, 604, 484, 1341,
/* 390 */ 1342, 33, 32, 209, 1269, 40, 38, 36, 35, 34,
/* 400 */ 1895, 633, 631, 39, 37, 1344, 1267, 1268, 1486, 91,
/* 410 */ 621, 339, 1790, 1263, 42, 8, 125, 40, 38, 36,
/* 420 */ 35, 34, 124, 610, 1339, 58, 1261, 1595, 57, 49,
/* 430 */ 1598, 162, 1876, 1877, 1264, 1881, 1262, 662, 178, 177,
/* 440 */ 1808, 352, 137, 136, 607, 606, 605, 1334, 574, 1760,
/* 450 */ 43, 1341, 1342, 1760, 316, 576, 1485, 1267, 1268, 1269,
/* 460 */ 1316, 1317, 1319, 1320, 1321, 1322, 1323, 1324, 572, 568,
/* 470 */ 1332, 1333, 1335, 1336, 1337, 1338, 1340, 1343, 63, 1822,
/* 480 */ 9, 74, 1934, 294, 1791, 579, 1793, 1794, 575, 573,
/* 490 */ 570, 567, 1840, 1288, 122, 165, 1264, 1760, 1262, 1931,
/* 500 */ 33, 32, 662, 1601, 40, 38, 36, 35, 34, 317,
/* 510 */ 168, 315, 314, 1484, 473, 351, 1341, 1342, 475, 1267,
/* 520 */ 1268, 1290, 1316, 1317, 1319, 1320, 1321, 1322, 1323, 1324,
/* 530 */ 572, 568, 1332, 1333, 1335, 1336, 1337, 1338, 1340, 1343,
/* 540 */ 474, 1009, 1010, 33, 32, 1459, 1363, 40, 38, 36,
/* 550 */ 35, 34, 168, 168, 1760, 525, 1934, 1591, 377, 146,
/* 560 */ 376, 1264, 63, 1262, 26, 1531, 382, 168, 1609, 165,
/* 570 */ 33, 32, 217, 1931, 40, 38, 36, 35, 34, 218,
/* 580 */ 1483, 1790, 1413, 1606, 1267, 1268, 1593, 1316, 1317, 1319,
/* 590 */ 1320, 1321, 1322, 1323, 1324, 572, 568, 1332, 1333, 1335,
/* 600 */ 1336, 1337, 1338, 1340, 1343, 39, 37, 77, 27, 1808,
/* 610 */ 498, 1883, 63, 339, 78, 1263, 168, 577, 1368, 1482,
/* 620 */ 505, 1760, 1760, 373, 576, 1301, 1339, 28, 1261, 482,
/* 630 */ 481, 1481, 1458, 33, 32, 1880, 123, 40, 38, 36,
/* 640 */ 35, 34, 375, 371, 438, 1589, 477, 480, 1822, 1334,
/* 650 */ 1286, 1934, 96, 1791, 579, 1793, 1794, 575, 253, 570,
/* 660 */ 1760, 1269, 1868, 513, 165, 1480, 1867, 1864, 1931, 1080,
/* 670 */ 33, 32, 1760, 1705, 40, 38, 36, 35, 34, 665,
/* 680 */ 33, 32, 9, 525, 40, 38, 36, 35, 34, 1477,
/* 690 */ 1476, 33, 32, 268, 383, 40, 38, 36, 35, 34,
/* 700 */ 168, 1703, 1082, 300, 662, 432, 1760, 157, 436, 1697,
/* 710 */ 214, 1606, 655, 651, 647, 643, 266, 1581, 1341, 1342,
/* 720 */ 176, 33, 32, 307, 571, 40, 38, 36, 35, 34,
/* 730 */ 1760, 1760, 39, 37, 525, 603, 525, 302, 1475, 1286,
/* 740 */ 339, 548, 1263, 525, 307, 389, 412, 404, 92, 424,
/* 750 */ 168, 231, 1301, 1339, 405, 1261, 440, 1584, 74, 436,
/* 760 */ 1361, 1406, 1606, 1264, 1606, 1262, 397, 1288, 425, 127,
/* 770 */ 399, 1606, 1474, 1702, 1778, 300, 1334, 1888, 1395, 1760,
/* 780 */ 1602, 1361, 44, 4, 522, 1774, 1267, 1268, 1269, 1316,
/* 790 */ 1317, 1319, 1320, 1321, 1322, 1323, 1324, 572, 568, 1332,
/* 800 */ 1333, 1335, 1336, 1337, 1338, 1340, 1343, 390, 125, 2,
/* 810 */ 1770, 1776, 334, 1760, 1362, 7, 220, 450, 610, 386,
/* 820 */ 90, 525, 570, 163, 1876, 1877, 1658, 1881, 1423, 145,
/* 830 */ 87, 662, 448, 312, 1235, 1362, 213, 137, 136, 607,
/* 840 */ 606, 605, 1656, 1479, 1883, 1341, 1342, 423, 1473, 1606,
/* 0 */ 526, 30, 261, 526, 549, 433, 526, 434, 1502, 11,
/* 10 */ 10, 117, 39, 37, 55, 1653, 1654, 117, 471, 378,
/* 20 */ 339, 1468, 1264, 1006, 476, 1023, 1290, 1022, 1607, 1791,
/* 30 */ 1598, 1607, 127, 1340, 1607, 1262, 441, 552, 434, 1502,
/* 40 */ 469, 1775, 107, 1779, 1290, 106, 105, 104, 103, 102,
/* 50 */ 101, 100, 99, 98, 1775, 1024, 1335, 1809, 150, 64,
/* 60 */ 1935, 14, 1567, 1010, 1011, 553, 1771, 1777, 1270, 450,
/* 70 */ 1761, 125, 577, 165, 39, 37, 1403, 1932, 571, 1771,
/* 80 */ 1777, 328, 339, 1529, 1264, 551, 161, 1877, 1878, 1,
/* 90 */ 1882, 571, 1659, 479, 478, 1340, 1823, 1262, 1376, 327,
/* 100 */ 95, 1792, 580, 1794, 1795, 576, 496, 571, 1657, 344,
/* 110 */ 1869, 663, 1652, 1654, 330, 1865, 160, 513, 1335, 494,
/* 120 */ 1935, 492, 1289, 14, 325, 1342, 1343, 1705, 164, 543,
/* 130 */ 1270, 1161, 1162, 1934, 33, 32, 1895, 1932, 40, 38,
/* 140 */ 36, 35, 34, 148, 63, 1479, 640, 639, 638, 637,
/* 150 */ 349, 2, 636, 635, 128, 630, 629, 628, 627, 626,
/* 160 */ 625, 624, 139, 620, 619, 618, 348, 347, 615, 614,
/* 170 */ 1265, 107, 1263, 663, 106, 105, 104, 103, 102, 101,
/* 180 */ 100, 99, 98, 1809, 36, 35, 34, 1342, 1343, 224,
/* 190 */ 225, 542, 384, 1268, 1269, 613, 1317, 1318, 1320, 1321,
/* 200 */ 1322, 1323, 1324, 1325, 573, 569, 1333, 1334, 1336, 1337,
/* 210 */ 1338, 1339, 1341, 1344, 1467, 1288, 1434, 33, 32, 482,
/* 220 */ 481, 40, 38, 36, 35, 34, 123, 168, 541, 303,
/* 230 */ 1465, 223, 1265, 84, 1263, 1264, 477, 480, 116, 115,
/* 240 */ 114, 113, 112, 111, 110, 109, 108, 305, 1262, 1023,
/* 250 */ 516, 1022, 22, 174, 1600, 1268, 1269, 1490, 1317, 1318,
/* 260 */ 1320, 1321, 1322, 1323, 1324, 1325, 573, 569, 1333, 1334,
/* 270 */ 1336, 1337, 1338, 1339, 1341, 1344, 39, 37, 1489, 1024,
/* 280 */ 538, 1270, 168, 526, 339, 71, 1264, 1488, 70, 354,
/* 290 */ 1244, 1245, 1708, 1791, 170, 211, 512, 1340, 1761, 1262,
/* 300 */ 1119, 602, 601, 600, 1123, 599, 1125, 1126, 598, 1128,
/* 310 */ 595, 1607, 1134, 592, 1136, 1137, 589, 586, 1935, 1761,
/* 320 */ 1335, 1809, 1584, 1270, 663, 14, 1659, 1935, 1761, 553,
/* 330 */ 1935, 166, 1270, 343, 1761, 1932, 577, 1935, 39, 37,
/* 340 */ 1933, 487, 1657, 165, 1932, 552, 339, 1932, 1264, 549,
/* 350 */ 165, 76, 305, 2, 1932, 516, 497, 544, 539, 1340,
/* 360 */ 1823, 1262, 1698, 159, 95, 1792, 580, 1794, 1795, 576,
/* 370 */ 210, 571, 63, 173, 1869, 663, 1646, 127, 330, 1865,
/* 380 */ 160, 552, 1335, 1265, 490, 1263, 419, 605, 484, 1342,
/* 390 */ 1343, 33, 32, 209, 1270, 40, 38, 36, 35, 34,
/* 400 */ 1896, 634, 632, 39, 37, 1345, 1268, 1269, 1487, 91,
/* 410 */ 622, 339, 1791, 1264, 42, 8, 125, 40, 38, 36,
/* 420 */ 35, 34, 124, 611, 1340, 58, 1262, 1596, 57, 49,
/* 430 */ 1599, 162, 1877, 1878, 1265, 1882, 1263, 663, 178, 177,
/* 440 */ 1809, 352, 137, 136, 608, 607, 606, 1335, 575, 1761,
/* 450 */ 43, 1342, 1343, 1761, 316, 577, 1486, 1268, 1269, 1270,
/* 460 */ 1317, 1318, 1320, 1321, 1322, 1323, 1324, 1325, 573, 569,
/* 470 */ 1333, 1334, 1336, 1337, 1338, 1339, 1341, 1344, 63, 1823,
/* 480 */ 9, 74, 1935, 294, 1792, 580, 1794, 1795, 576, 574,
/* 490 */ 571, 568, 1841, 1289, 122, 165, 1265, 1761, 1263, 1932,
/* 500 */ 33, 32, 663, 1602, 40, 38, 36, 35, 34, 317,
/* 510 */ 168, 315, 314, 1485, 473, 351, 1342, 1343, 475, 1268,
/* 520 */ 1269, 1291, 1317, 1318, 1320, 1321, 1322, 1323, 1324, 1325,
/* 530 */ 573, 569, 1333, 1334, 1336, 1337, 1338, 1339, 1341, 1344,
/* 540 */ 474, 1010, 1011, 33, 32, 1460, 1364, 40, 38, 36,
/* 550 */ 35, 34, 168, 168, 1761, 526, 1935, 1592, 377, 146,
/* 560 */ 376, 1265, 63, 1263, 26, 1532, 382, 168, 1610, 165,
/* 570 */ 33, 32, 217, 1932, 40, 38, 36, 35, 34, 218,
/* 580 */ 1484, 1791, 1414, 1607, 1268, 1269, 1594, 1317, 1318, 1320,
/* 590 */ 1321, 1322, 1323, 1324, 1325, 573, 569, 1333, 1334, 1336,
/* 600 */ 1337, 1338, 1339, 1341, 1344, 39, 37, 77, 27, 1809,
/* 610 */ 498, 1884, 63, 339, 78, 1264, 168, 578, 1369, 1483,
/* 620 */ 505, 1761, 1761, 373, 577, 1302, 1340, 28, 1262, 482,
/* 630 */ 481, 1482, 1459, 33, 32, 1881, 123, 40, 38, 36,
/* 640 */ 35, 34, 375, 371, 438, 1590, 477, 480, 1823, 1335,
/* 650 */ 1287, 1935, 96, 1792, 580, 1794, 1795, 576, 253, 571,
/* 660 */ 1761, 1270, 1869, 513, 165, 1481, 1868, 1865, 1932, 1081,
/* 670 */ 33, 32, 1761, 1706, 40, 38, 36, 35, 34, 666,
/* 680 */ 33, 32, 9, 526, 40, 38, 36, 35, 34, 1478,
/* 690 */ 1477, 33, 32, 268, 383, 40, 38, 36, 35, 34,
/* 700 */ 168, 1704, 1083, 300, 663, 432, 1761, 157, 436, 1698,
/* 710 */ 214, 1607, 656, 652, 648, 644, 266, 1582, 1342, 1343,
/* 720 */ 176, 33, 32, 307, 572, 40, 38, 36, 35, 34,
/* 730 */ 1761, 1761, 39, 37, 526, 604, 526, 302, 1476, 1287,
/* 740 */ 339, 549, 1264, 526, 307, 389, 412, 404, 92, 424,
/* 750 */ 168, 231, 1302, 1340, 405, 1262, 440, 1585, 74, 436,
/* 760 */ 1362, 1407, 1607, 1265, 1607, 1263, 397, 1289, 425, 127,
/* 770 */ 399, 1607, 1475, 1703, 1779, 300, 1335, 1889, 1396, 1761,
/* 780 */ 1603, 1362, 44, 4, 523, 1775, 1268, 1269, 1270, 1317,
/* 790 */ 1318, 1320, 1321, 1322, 1323, 1324, 1325, 573, 569, 1333,
/* 800 */ 1334, 1336, 1337, 1338, 1339, 1341, 1344, 390, 125, 2,
/* 810 */ 1771, 1777, 334, 1761, 1363, 7, 220, 450, 611, 386,
/* 820 */ 90, 526, 571, 163, 1877, 1878, 1659, 1882, 1424, 145,
/* 830 */ 87, 663, 448, 312, 1236, 1363, 213, 137, 136, 608,
/* 840 */ 607, 606, 1657, 1480, 1884, 1342, 1343, 423, 1474, 1607,
/* 850 */ 418, 417, 416, 415, 414, 411, 410, 409, 408, 407,
/* 860 */ 403, 402, 401, 400, 394, 393, 392, 391, 1879, 388,
/* 870 */ 387, 534, 1421, 1422, 1424, 1425, 29, 337, 1356, 1357,
/* 880 */ 1358, 1359, 1360, 1364, 1365, 1366, 1367, 1349, 61, 1760,
/* 890 */ 1264, 608, 1262, 1288, 1649, 1934, 1399, 29, 337, 1356,
/* 900 */ 1357, 1358, 1359, 1360, 1364, 1365, 1366, 1367, 166, 1582,
/* 910 */ 1790, 1472, 1931, 1267, 1268, 1471, 1316, 1317, 1319, 1320,
/* 920 */ 1321, 1322, 1323, 1324, 572, 568, 1332, 1333, 1335, 1336,
/* 930 */ 1337, 1338, 1340, 1343, 622, 147, 1578, 1790, 1808, 525,
/* 940 */ 279, 610, 609, 256, 1318, 1649, 577, 1883, 1470, 1469,
/* 950 */ 449, 1760, 1760, 576, 277, 60, 1760, 475, 59, 1291,
/* 960 */ 137, 136, 607, 606, 605, 1808, 553, 1606, 1288, 612,
/* 970 */ 1567, 1878, 135, 577, 181, 429, 427, 1822, 1760, 474,
/* 980 */ 576, 94, 1791, 579, 1793, 1794, 575, 535, 570, 1760,
/* 990 */ 1760, 1868, 1779, 553, 468, 306, 1864, 273, 53, 509,
/* 1000 */ 1636, 1658, 1395, 1774, 1822, 525, 63, 1934, 94, 1791,
/* 1010 */ 579, 1793, 1794, 575, 525, 570, 1603, 1657, 1868, 54,
/* 1020 */ 167, 1747, 306, 1864, 1931, 1735, 1518, 202, 1770, 1776,
/* 1030 */ 200, 336, 335, 1606, 1934, 1461, 1462, 557, 525, 525,
/* 1040 */ 570, 1277, 1606, 1272, 93, 525, 525, 165, 483, 506,
/* 1050 */ 510, 1931, 1339, 560, 1270, 326, 228, 521, 525, 204,
/* 1060 */ 525, 1790, 203, 146, 499, 525, 1606, 1606, 361, 523,
/* 1070 */ 1318, 524, 1608, 1606, 1606, 1334, 262, 41, 222, 68,
/* 1080 */ 67, 381, 342, 525, 172, 1271, 1606, 1269, 1606, 1808,
/* 1090 */ 146, 131, 245, 1606, 346, 206, 233, 577, 205, 1608,
/* 1100 */ 301, 566, 1760, 369, 576, 367, 363, 359, 356, 353,
/* 1110 */ 345, 1606, 1781, 208, 134, 135, 207, 1809, 146, 1513,
/* 1120 */ 1398, 1511, 51, 1790, 1212, 226, 237, 1608, 1822, 555,
/* 1130 */ 565, 51, 95, 1791, 579, 1793, 1794, 575, 518, 570,
/* 1140 */ 41, 485, 1868, 488, 168, 1318, 330, 1864, 1947, 11,
/* 1150 */ 10, 1808, 615, 41, 616, 1783, 350, 1902, 583, 577,
/* 1160 */ 134, 230, 1111, 1502, 1760, 1646, 576, 135, 119, 1420,
/* 1170 */ 134, 1898, 549, 240, 1068, 1790, 1066, 255, 1369, 250,
/* 1180 */ 1275, 258, 260, 3, 5, 355, 313, 1325, 1049, 1278,
/* 1190 */ 1822, 1273, 360, 1228, 95, 1791, 579, 1793, 1794, 575,
/* 1200 */ 272, 570, 269, 1808, 1868, 1139, 1507, 1143, 330, 1864,
/* 1210 */ 1947, 577, 1281, 1283, 1150, 1148, 1760, 138, 576, 1925,
/* 1220 */ 175, 1050, 1274, 1286, 568, 1332, 1333, 1335, 1336, 1337,
/* 1230 */ 1338, 1790, 385, 1353, 406, 1699, 413, 421, 420, 1292,
/* 1240 */ 558, 1790, 1822, 422, 426, 431, 95, 1791, 579, 1793,
/* 1250 */ 1794, 575, 428, 570, 657, 439, 1868, 430, 561, 1808,
/* 1260 */ 330, 1864, 1947, 1294, 442, 443, 184, 577, 1293, 1808,
/* 1270 */ 186, 1887, 1760, 1295, 576, 444, 445, 577, 189, 447,
/* 1280 */ 191, 72, 1760, 73, 576, 451, 470, 553, 195, 472,
/* 1290 */ 1790, 304, 1596, 199, 118, 1592, 1740, 553, 1822, 501,
/* 1300 */ 201, 140, 286, 1791, 579, 1793, 1794, 575, 1822, 570,
/* 1310 */ 141, 1594, 286, 1791, 579, 1793, 1794, 575, 1808, 570,
/* 1320 */ 1590, 142, 143, 212, 270, 500, 577, 215, 1934, 507,
/* 1330 */ 504, 1760, 511, 576, 322, 219, 533, 514, 1934, 132,
/* 1340 */ 1739, 167, 1709, 519, 516, 1931, 133, 324, 81, 520,
/* 1350 */ 1790, 165, 1291, 529, 271, 1931, 83, 1822, 1607, 235,
/* 1360 */ 1790, 96, 1791, 579, 1793, 1794, 575, 1899, 570, 536,
/* 1370 */ 239, 1868, 531, 1909, 6, 564, 1864, 532, 1808, 545,
/* 1380 */ 329, 1908, 539, 530, 528, 244, 577, 1890, 1808, 527,
/* 1390 */ 1395, 1760, 1290, 576, 154, 126, 577, 249, 562, 559,
/* 1400 */ 246, 1760, 48, 576, 1884, 247, 331, 248, 85, 1790,
/* 1410 */ 581, 1650, 1579, 265, 274, 658, 659, 1822, 1930, 661,
/* 1420 */ 52, 149, 1791, 579, 1793, 1794, 575, 1822, 570, 1950,
/* 1430 */ 153, 96, 1791, 579, 1793, 1794, 575, 1808, 570, 556,
/* 1440 */ 1754, 1868, 323, 287, 297, 577, 1865, 1849, 296, 254,
/* 1450 */ 1760, 276, 576, 563, 1753, 278, 257, 259, 65, 1752,
/* 1460 */ 1790, 1751, 66, 1748, 357, 554, 1948, 358, 1255, 1256,
/* 1470 */ 171, 362, 1746, 364, 365, 366, 1822, 1745, 1744, 368,
/* 1480 */ 295, 1791, 579, 1793, 1794, 575, 370, 570, 1808, 1743,
/* 1490 */ 372, 1742, 374, 526, 1231, 1230, 577, 1720, 1719, 379,
/* 1500 */ 380, 1760, 1200, 576, 1718, 1717, 1692, 129, 1691, 1690,
/* 1510 */ 1689, 69, 1790, 1688, 1687, 1686, 1685, 1684, 395, 396,
/* 1520 */ 1683, 398, 1790, 130, 1668, 1667, 1666, 1822, 1682, 1681,
/* 1530 */ 1680, 295, 1791, 579, 1793, 1794, 575, 1679, 570, 1790,
/* 1540 */ 1808, 1678, 1677, 1676, 1675, 1674, 1673, 1672, 577, 1671,
/* 1550 */ 1808, 1670, 1669, 1760, 1665, 576, 1664, 1663, 577, 1662,
/* 1560 */ 1202, 1661, 1660, 1760, 1659, 576, 1533, 1808, 179, 1532,
/* 1570 */ 1530, 1498, 120, 182, 180, 574, 1497, 158, 435, 1822,
/* 1580 */ 1760, 1012, 576, 290, 1791, 579, 1793, 1794, 575, 1822,
/* 1590 */ 570, 190, 1011, 149, 1791, 579, 1793, 1794, 575, 1790,
/* 1600 */ 570, 437, 1733, 183, 121, 1727, 1822, 1716, 1715, 1701,
/* 1610 */ 294, 1791, 579, 1793, 1794, 575, 1790, 570, 188, 1841,
/* 1620 */ 1585, 544, 1042, 1529, 1527, 452, 454, 1808, 1525, 453,
/* 1630 */ 456, 457, 338, 458, 1523, 577, 460, 462, 1949, 461,
/* 1640 */ 1760, 1521, 576, 465, 1808, 464, 1510, 1509, 1494, 340,
/* 1650 */ 466, 1587, 577, 1154, 1153, 1586, 50, 1760, 630, 576,
/* 1660 */ 1079, 1076, 632, 1519, 198, 1075, 1822, 1074, 1514, 1512,
/* 1670 */ 295, 1791, 579, 1793, 1794, 575, 318, 570, 319, 320,
/* 1680 */ 486, 1493, 1492, 1822, 1790, 489, 197, 295, 1791, 579,
/* 1690 */ 1793, 1794, 575, 491, 570, 1491, 493, 495, 97, 1732,
/* 1700 */ 152, 1237, 1790, 1726, 216, 467, 463, 459, 455, 196,
/* 1710 */ 56, 502, 1808, 144, 1714, 1712, 1713, 1711, 1710, 221,
/* 1720 */ 577, 1247, 15, 1708, 227, 1760, 79, 576, 1700, 503,
/* 1730 */ 1808, 321, 508, 80, 232, 517, 229, 87, 577, 41,
/* 1740 */ 47, 75, 16, 1760, 194, 576, 243, 242, 82, 25,
/* 1750 */ 17, 1822, 1435, 23, 234, 280, 1791, 579, 1793, 1794,
/* 1760 */ 575, 1790, 570, 236, 1417, 238, 1781, 1419, 151, 1822,
/* 1770 */ 1412, 252, 241, 281, 1791, 579, 1793, 1794, 575, 24,
/* 1780 */ 570, 86, 46, 1392, 1780, 18, 155, 1447, 1391, 1808,
/* 1790 */ 1446, 1452, 1441, 332, 1451, 1450, 333, 577, 10, 1279,
/* 1800 */ 45, 1825, 1760, 1329, 576, 1354, 193, 187, 13, 192,
/* 1810 */ 1790, 19, 1327, 446, 1326, 156, 569, 169, 31, 12,
/* 1820 */ 20, 1309, 578, 21, 582, 1140, 341, 1137, 1822, 185,
/* 1830 */ 586, 1790, 282, 1791, 579, 1793, 1794, 575, 1808, 570,
/* 1840 */ 584, 580, 587, 589, 1134, 590, 577, 1128, 592, 595,
/* 1850 */ 1117, 1760, 593, 576, 1126, 596, 1132, 1131, 1130, 1808,
/* 1860 */ 1129, 88, 89, 602, 263, 1149, 1145, 577, 62, 1040,
/* 1870 */ 611, 1071, 1760, 1070, 576, 1069, 1067, 1822, 1065, 1086,
/* 1880 */ 1064, 289, 1791, 579, 1793, 1794, 575, 1063, 570, 1790,
/* 1890 */ 620, 264, 1061, 1060, 1059, 1058, 1057, 1056, 1822, 1790,
/* 1900 */ 1055, 1083, 291, 1791, 579, 1793, 1794, 575, 1081, 570,
/* 1910 */ 1052, 1051, 1048, 1047, 1046, 1045, 1526, 1808, 640, 1524,
/* 1920 */ 642, 644, 1522, 646, 648, 577, 641, 1808, 1520, 645,
/* 1930 */ 1760, 652, 576, 650, 649, 577, 654, 1508, 653, 656,
/* 1940 */ 1760, 1002, 576, 1490, 664, 267, 660, 1465, 1265, 275,
/* 1950 */ 663, 1790, 1465, 1465, 1465, 1465, 1822, 1465, 1465, 1465,
/* 1960 */ 283, 1791, 579, 1793, 1794, 575, 1822, 570, 1790, 1465,
/* 1970 */ 292, 1791, 579, 1793, 1794, 575, 1465, 570, 1465, 1808,
/* 1980 */ 1465, 1465, 1465, 1465, 1465, 1465, 1465, 577, 1465, 1465,
/* 1990 */ 1465, 1465, 1760, 1465, 576, 1465, 1808, 1465, 1465, 1465,
/* 2000 */ 1465, 1465, 1465, 1465, 577, 1465, 1465, 1465, 1465, 1760,
/* 2010 */ 1465, 576, 1465, 1465, 1465, 1465, 1465, 1790, 1822, 1465,
/* 2020 */ 1465, 1465, 284, 1791, 579, 1793, 1794, 575, 1465, 570,
/* 2030 */ 1465, 1465, 1465, 1465, 1790, 1822, 1465, 1465, 1465, 293,
/* 2040 */ 1791, 579, 1793, 1794, 575, 1808, 570, 1465, 1465, 1465,
/* 2050 */ 1465, 1465, 1465, 577, 1465, 1465, 1465, 1465, 1760, 1465,
/* 2060 */ 576, 1465, 1808, 1465, 1465, 1465, 1465, 1465, 1465, 1465,
/* 2070 */ 577, 1465, 1465, 1465, 1465, 1760, 1465, 576, 1465, 1465,
/* 2080 */ 1465, 1465, 1465, 1790, 1822, 1465, 1465, 1465, 285, 1791,
/* 2090 */ 579, 1793, 1794, 575, 1465, 570, 1465, 1465, 1465, 1465,
/* 2100 */ 1465, 1822, 1465, 1465, 1465, 298, 1791, 579, 1793, 1794,
/* 2110 */ 575, 1808, 570, 1465, 1465, 1465, 1465, 1465, 1465, 577,
/* 2120 */ 1465, 1465, 1465, 1465, 1760, 1465, 576, 1465, 1465, 1465,
/* 2130 */ 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1790, 1465, 1465,
/* 2140 */ 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1790, 1465, 1465,
/* 2150 */ 1822, 1465, 1465, 1465, 299, 1791, 579, 1793, 1794, 575,
/* 2160 */ 1465, 570, 1465, 1465, 1465, 1808, 1465, 1465, 1465, 1465,
/* 2170 */ 1465, 1465, 1465, 577, 1465, 1808, 1465, 1465, 1760, 1465,
/* 2180 */ 576, 1465, 1465, 577, 1465, 1465, 1465, 1465, 1760, 1465,
/* 2190 */ 576, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1790, 1465,
/* 2200 */ 1465, 1465, 1465, 1465, 1822, 1465, 1465, 1465, 1802, 1791,
/* 2210 */ 579, 1793, 1794, 575, 1822, 570, 1790, 1465, 1801, 1791,
/* 2220 */ 579, 1793, 1794, 575, 1465, 570, 1808, 1465, 1465, 1465,
/* 2230 */ 1465, 1465, 1465, 1465, 577, 1465, 1465, 1465, 1465, 1760,
/* 2240 */ 1465, 576, 1465, 1465, 1808, 1465, 1465, 1465, 1465, 1465,
/* 2250 */ 1465, 1465, 577, 1465, 1465, 1465, 1465, 1760, 1465, 576,
/* 2260 */ 1465, 1465, 1465, 1465, 1465, 1822, 1465, 1465, 1465, 1800,
/* 2270 */ 1791, 579, 1793, 1794, 575, 1790, 570, 1465, 1465, 1465,
/* 2280 */ 1465, 1465, 1465, 1822, 1465, 1465, 1465, 310, 1791, 579,
/* 2290 */ 1793, 1794, 575, 1465, 570, 1465, 1790, 1465, 1465, 1465,
/* 2300 */ 1465, 1465, 1465, 1808, 1465, 1465, 1465, 1465, 1465, 1465,
/* 2310 */ 1465, 577, 1465, 1465, 1465, 1465, 1760, 1465, 576, 1465,
/* 2320 */ 1465, 1465, 1465, 1465, 1808, 1465, 1465, 1465, 1465, 1465,
/* 2330 */ 1465, 1465, 577, 1465, 1465, 1465, 1465, 1760, 1465, 576,
/* 2340 */ 1465, 1465, 1822, 1465, 1465, 1465, 309, 1791, 579, 1793,
/* 2350 */ 1794, 575, 1790, 570, 1465, 1465, 1465, 1465, 1465, 1465,
/* 2360 */ 1465, 1465, 1790, 1822, 1465, 1465, 1465, 311, 1791, 579,
/* 2370 */ 1793, 1794, 575, 1465, 570, 1465, 1465, 1465, 1465, 1465,
/* 2380 */ 1808, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 577, 1465,
/* 2390 */ 1808, 1465, 1465, 1760, 548, 576, 1465, 1465, 577, 1465,
/* 2400 */ 1465, 1465, 1465, 1760, 1465, 576, 1465, 1465, 1465, 1465,
/* 2410 */ 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1822,
/* 2420 */ 1465, 1465, 127, 308, 1791, 579, 1793, 1794, 575, 1822,
/* 2430 */ 570, 1465, 1465, 288, 1791, 579, 1793, 1794, 575, 1465,
/* 2440 */ 570, 548, 553, 1465, 1465, 1465, 1465, 1465, 1465, 1465,
/* 2450 */ 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465,
/* 2460 */ 1465, 125, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 127,
/* 2470 */ 1465, 1465, 1465, 1465, 1465, 1465, 251, 1876, 547, 1465,
/* 2480 */ 546, 1465, 1465, 1934, 1465, 1465, 1465, 1465, 1465, 553,
/* 2490 */ 1465, 1465, 1465, 1465, 1465, 1465, 167, 1465, 1465, 1465,
/* 2500 */ 1931, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 125, 1465,
/* 2510 */ 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465,
/* 2520 */ 1465, 1465, 1465, 251, 1876, 547, 1465, 546, 1465, 1465,
/* 2530 */ 1934, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465, 1465,
/* 2540 */ 1465, 1465, 1465, 165, 1465, 1465, 1465, 1931,
/* 860 */ 403, 402, 401, 400, 394, 393, 392, 391, 1880, 388,
/* 870 */ 387, 535, 1422, 1423, 1425, 1426, 29, 337, 1357, 1358,
/* 880 */ 1359, 1360, 1361, 1365, 1366, 1367, 1368, 1350, 61, 1761,
/* 890 */ 1265, 609, 1263, 1289, 1650, 1935, 1400, 29, 337, 1357,
/* 900 */ 1358, 1359, 1360, 1361, 1365, 1366, 1367, 1368, 166, 1583,
/* 910 */ 1791, 1473, 1932, 1268, 1269, 1472, 1317, 1318, 1320, 1321,
/* 920 */ 1322, 1323, 1324, 1325, 573, 569, 1333, 1334, 1336, 1337,
/* 930 */ 1338, 1339, 1341, 1344, 623, 147, 1579, 1791, 1809, 526,
/* 940 */ 279, 611, 610, 256, 1319, 1650, 578, 1884, 1471, 1470,
/* 950 */ 449, 1761, 1761, 577, 277, 60, 1761, 475, 59, 1292,
/* 960 */ 137, 136, 608, 607, 606, 1809, 554, 1607, 1289, 613,
/* 970 */ 1568, 1879, 135, 578, 181, 429, 427, 1823, 1761, 474,
/* 980 */ 577, 94, 1792, 580, 1794, 1795, 576, 536, 571, 1761,
/* 990 */ 1761, 1869, 1780, 554, 468, 306, 1865, 273, 53, 509,
/* 1000 */ 1637, 1659, 1396, 1775, 1823, 526, 63, 1935, 94, 1792,
/* 1010 */ 580, 1794, 1795, 576, 526, 571, 1604, 1658, 1869, 54,
/* 1020 */ 167, 1748, 306, 1865, 1932, 1736, 1519, 202, 1771, 1777,
/* 1030 */ 200, 336, 335, 1607, 1935, 1462, 1463, 558, 526, 526,
/* 1040 */ 571, 1278, 1607, 1273, 93, 526, 526, 165, 483, 506,
/* 1050 */ 510, 1932, 1340, 561, 1271, 326, 228, 522, 526, 204,
/* 1060 */ 526, 1791, 203, 146, 499, 526, 1607, 1607, 361, 524,
/* 1070 */ 1319, 525, 1609, 1607, 1607, 1335, 262, 41, 222, 68,
/* 1080 */ 67, 381, 342, 526, 172, 1272, 1607, 1270, 1607, 1809,
/* 1090 */ 146, 131, 245, 1607, 346, 206, 233, 578, 205, 1609,
/* 1100 */ 301, 567, 1761, 369, 577, 367, 363, 359, 356, 353,
/* 1110 */ 345, 1607, 1782, 208, 134, 135, 207, 1810, 146, 1514,
/* 1120 */ 1399, 1512, 51, 1791, 1213, 226, 237, 1609, 1823, 556,
/* 1130 */ 566, 51, 95, 1792, 580, 1794, 1795, 576, 519, 571,
/* 1140 */ 41, 485, 1869, 488, 168, 1319, 330, 1865, 1948, 11,
/* 1150 */ 10, 1809, 616, 41, 617, 1784, 350, 1903, 584, 578,
/* 1160 */ 134, 230, 1112, 1503, 1761, 1647, 577, 135, 119, 1421,
/* 1170 */ 134, 1899, 550, 240, 1069, 1791, 1067, 255, 1370, 250,
/* 1180 */ 1276, 258, 260, 3, 5, 355, 313, 1326, 1050, 1279,
/* 1190 */ 1823, 1274, 360, 1229, 95, 1792, 580, 1794, 1795, 576,
/* 1200 */ 272, 571, 269, 1809, 1869, 1140, 1508, 1144, 330, 1865,
/* 1210 */ 1948, 578, 1282, 1284, 1151, 1149, 1761, 138, 577, 1926,
/* 1220 */ 175, 1051, 1275, 1287, 569, 1333, 1334, 1336, 1337, 1338,
/* 1230 */ 1339, 1791, 385, 1354, 406, 1700, 413, 421, 420, 1293,
/* 1240 */ 559, 1791, 1823, 422, 426, 431, 95, 1792, 580, 1794,
/* 1250 */ 1795, 576, 428, 571, 658, 439, 1869, 430, 562, 1809,
/* 1260 */ 330, 1865, 1948, 1295, 442, 443, 184, 578, 1294, 1809,
/* 1270 */ 186, 1888, 1761, 1296, 577, 444, 445, 578, 189, 447,
/* 1280 */ 191, 72, 1761, 73, 577, 451, 470, 554, 195, 472,
/* 1290 */ 1791, 304, 1597, 199, 118, 1593, 1741, 554, 1823, 501,
/* 1300 */ 201, 140, 286, 1792, 580, 1794, 1795, 576, 1823, 571,
/* 1310 */ 141, 1595, 286, 1792, 580, 1794, 1795, 576, 1809, 571,
/* 1320 */ 1591, 142, 143, 212, 270, 500, 578, 215, 1935, 507,
/* 1330 */ 504, 1761, 511, 577, 322, 219, 534, 514, 1935, 132,
/* 1340 */ 1740, 167, 1710, 520, 517, 1932, 133, 324, 81, 521,
/* 1350 */ 1791, 165, 1292, 530, 271, 1932, 83, 1823, 1608, 235,
/* 1360 */ 1791, 96, 1792, 580, 1794, 1795, 576, 1900, 571, 537,
/* 1370 */ 239, 1869, 532, 1910, 6, 565, 1865, 533, 1809, 546,
/* 1380 */ 329, 1909, 540, 531, 529, 244, 578, 1891, 1809, 528,
/* 1390 */ 1396, 1761, 1291, 577, 154, 126, 578, 249, 563, 560,
/* 1400 */ 246, 1761, 48, 577, 1885, 247, 331, 248, 85, 1791,
/* 1410 */ 582, 1651, 1580, 265, 274, 659, 660, 1823, 1931, 662,
/* 1420 */ 52, 149, 1792, 580, 1794, 1795, 576, 1823, 571, 1951,
/* 1430 */ 153, 96, 1792, 580, 1794, 1795, 576, 1809, 571, 557,
/* 1440 */ 1755, 1869, 323, 287, 297, 578, 1866, 1850, 296, 254,
/* 1450 */ 1761, 276, 577, 564, 1754, 278, 257, 259, 65, 1753,
/* 1460 */ 1791, 1752, 66, 1749, 357, 555, 1949, 358, 1256, 1257,
/* 1470 */ 171, 362, 1747, 364, 365, 366, 1823, 1746, 1745, 368,
/* 1480 */ 295, 1792, 580, 1794, 1795, 576, 370, 571, 1809, 1744,
/* 1490 */ 372, 1743, 374, 527, 1232, 1231, 578, 1721, 1720, 379,
/* 1500 */ 380, 1761, 1201, 577, 1719, 1718, 1693, 129, 1692, 1691,
/* 1510 */ 1690, 69, 1791, 1689, 1688, 1687, 1686, 1685, 395, 396,
/* 1520 */ 1684, 398, 1791, 130, 1669, 1668, 1667, 1823, 1683, 1682,
/* 1530 */ 1681, 295, 1792, 580, 1794, 1795, 576, 1680, 571, 1791,
/* 1540 */ 1809, 1679, 1678, 1677, 1676, 1675, 1674, 1673, 578, 1672,
/* 1550 */ 1809, 1671, 1670, 1761, 1666, 577, 1665, 1664, 578, 1663,
/* 1560 */ 1203, 1662, 1661, 1761, 1660, 577, 1534, 1809, 179, 1533,
/* 1570 */ 1531, 1499, 120, 182, 180, 575, 1498, 158, 435, 1823,
/* 1580 */ 1761, 1013, 577, 290, 1792, 580, 1794, 1795, 576, 1823,
/* 1590 */ 571, 190, 1012, 149, 1792, 580, 1794, 1795, 576, 1791,
/* 1600 */ 571, 437, 1734, 183, 121, 1728, 1823, 1717, 1716, 1702,
/* 1610 */ 294, 1792, 580, 1794, 1795, 576, 1791, 571, 188, 1842,
/* 1620 */ 1586, 545, 1043, 1530, 1528, 452, 454, 1809, 1526, 453,
/* 1630 */ 456, 457, 338, 458, 1524, 578, 460, 462, 1950, 461,
/* 1640 */ 1761, 1522, 577, 465, 1809, 464, 1511, 1510, 1495, 340,
/* 1650 */ 466, 1588, 578, 1155, 1154, 1587, 50, 1761, 631, 577,
/* 1660 */ 1080, 1077, 633, 1520, 198, 1076, 1823, 1075, 1515, 1513,
/* 1670 */ 295, 1792, 580, 1794, 1795, 576, 318, 571, 319, 320,
/* 1680 */ 486, 1494, 1493, 1823, 1791, 489, 197, 295, 1792, 580,
/* 1690 */ 1794, 1795, 576, 491, 571, 1492, 493, 495, 97, 1733,
/* 1700 */ 152, 1238, 1791, 1727, 216, 467, 463, 459, 455, 196,
/* 1710 */ 56, 502, 1809, 144, 1715, 1713, 1714, 1712, 1711, 221,
/* 1720 */ 578, 1248, 15, 1709, 227, 1761, 79, 577, 1701, 503,
/* 1730 */ 1809, 321, 508, 80, 232, 518, 41, 87, 578, 229,
/* 1740 */ 47, 75, 16, 1761, 194, 577, 243, 242, 82, 25,
/* 1750 */ 17, 1823, 1436, 23, 234, 280, 1792, 580, 1794, 1795,
/* 1760 */ 576, 1791, 571, 236, 1418, 515, 238, 1782, 151, 1823,
/* 1770 */ 1420, 252, 241, 281, 1792, 580, 1794, 1795, 576, 24,
/* 1780 */ 571, 1413, 1393, 46, 1781, 86, 18, 155, 1392, 1809,
/* 1790 */ 1448, 1453, 1442, 1447, 332, 1452, 1451, 578, 333, 10,
/* 1800 */ 45, 1280, 1761, 1330, 577, 1355, 193, 187, 13, 192,
/* 1810 */ 1791, 19, 1328, 446, 1327, 156, 1826, 169, 570, 31,
/* 1820 */ 12, 20, 1310, 21, 583, 1141, 341, 1138, 1823, 185,
/* 1830 */ 587, 1791, 282, 1792, 580, 1794, 1795, 576, 1809, 571,
/* 1840 */ 585, 588, 581, 1135, 579, 590, 578, 1129, 593, 596,
/* 1850 */ 1118, 1761, 1127, 577, 591, 594, 597, 1133, 1132, 1809,
/* 1860 */ 1131, 1130, 88, 89, 263, 603, 1150, 578, 1146, 62,
/* 1870 */ 1041, 1072, 1761, 612, 577, 1071, 1070, 1823, 1068, 1066,
/* 1880 */ 1065, 289, 1792, 580, 1794, 1795, 576, 1064, 571, 1791,
/* 1890 */ 1087, 621, 264, 1062, 1061, 1060, 1059, 1058, 1823, 1791,
/* 1900 */ 1057, 1056, 291, 1792, 580, 1794, 1795, 576, 1047, 571,
/* 1910 */ 1084, 1082, 1053, 1052, 1049, 1048, 1046, 1809, 1527, 641,
/* 1920 */ 1525, 642, 643, 645, 647, 578, 1523, 1809, 649, 646,
/* 1930 */ 1761, 651, 577, 1521, 650, 578, 653, 655, 654, 1509,
/* 1940 */ 1761, 657, 577, 1491, 1003, 267, 661, 1466, 1466, 1266,
/* 1950 */ 275, 1791, 664, 1466, 665, 1466, 1823, 1466, 1466, 1466,
/* 1960 */ 283, 1792, 580, 1794, 1795, 576, 1823, 571, 1791, 1466,
/* 1970 */ 292, 1792, 580, 1794, 1795, 576, 1466, 571, 1466, 1809,
/* 1980 */ 1466, 1466, 1466, 1466, 1466, 1466, 1466, 578, 1466, 1466,
/* 1990 */ 1466, 1466, 1761, 1466, 577, 1466, 1809, 1466, 1466, 1466,
/* 2000 */ 1466, 1466, 1466, 1466, 578, 1466, 1466, 1466, 1466, 1761,
/* 2010 */ 1466, 577, 1466, 1466, 1466, 1466, 1466, 1791, 1823, 1466,
/* 2020 */ 1466, 1466, 284, 1792, 580, 1794, 1795, 576, 1466, 571,
/* 2030 */ 1466, 1466, 1466, 1466, 1791, 1823, 1466, 1466, 1466, 293,
/* 2040 */ 1792, 580, 1794, 1795, 576, 1809, 571, 1466, 1466, 1466,
/* 2050 */ 1466, 1466, 1466, 578, 1466, 1466, 1466, 1466, 1761, 1466,
/* 2060 */ 577, 1466, 1809, 1466, 1466, 1466, 1466, 1466, 1466, 1466,
/* 2070 */ 578, 1466, 1466, 1466, 1466, 1761, 1466, 577, 1466, 1466,
/* 2080 */ 1466, 1466, 1466, 1791, 1823, 1466, 1466, 1466, 285, 1792,
/* 2090 */ 580, 1794, 1795, 576, 1466, 571, 1466, 1466, 1466, 1466,
/* 2100 */ 1466, 1823, 1466, 1466, 1466, 298, 1792, 580, 1794, 1795,
/* 2110 */ 576, 1809, 571, 1466, 1466, 1466, 1466, 1466, 1466, 578,
/* 2120 */ 1466, 1466, 1466, 1466, 1761, 1466, 577, 1466, 1466, 1466,
/* 2130 */ 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1791, 1466, 1466,
/* 2140 */ 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1791, 1466, 1466,
/* 2150 */ 1823, 1466, 1466, 1466, 299, 1792, 580, 1794, 1795, 576,
/* 2160 */ 1466, 571, 1466, 1466, 1466, 1809, 1466, 1466, 1466, 1466,
/* 2170 */ 1466, 1466, 1466, 578, 1466, 1809, 1466, 1466, 1761, 1466,
/* 2180 */ 577, 1466, 1466, 578, 1466, 1466, 1466, 1466, 1761, 1466,
/* 2190 */ 577, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1791, 1466,
/* 2200 */ 1466, 1466, 1466, 1466, 1823, 1466, 1466, 1466, 1803, 1792,
/* 2210 */ 580, 1794, 1795, 576, 1823, 571, 1791, 1466, 1802, 1792,
/* 2220 */ 580, 1794, 1795, 576, 1466, 571, 1809, 1466, 1466, 1466,
/* 2230 */ 1466, 1466, 1466, 1466, 578, 1466, 1466, 1466, 1466, 1761,
/* 2240 */ 1466, 577, 1466, 1466, 1809, 1466, 1466, 1466, 1466, 1466,
/* 2250 */ 1466, 1466, 578, 1466, 1466, 1466, 1466, 1761, 1466, 577,
/* 2260 */ 1466, 1466, 1466, 1466, 1466, 1823, 1466, 1466, 1466, 1801,
/* 2270 */ 1792, 580, 1794, 1795, 576, 1791, 571, 1466, 1466, 1466,
/* 2280 */ 1466, 1466, 1466, 1823, 1466, 1466, 1466, 310, 1792, 580,
/* 2290 */ 1794, 1795, 576, 1466, 571, 1466, 1791, 1466, 1466, 1466,
/* 2300 */ 1466, 1466, 1466, 1809, 1466, 1466, 1466, 1466, 1466, 1466,
/* 2310 */ 1466, 578, 1466, 1466, 1466, 1466, 1761, 1466, 577, 1466,
/* 2320 */ 1466, 1466, 1466, 1466, 1809, 1466, 1466, 1466, 1466, 1466,
/* 2330 */ 1466, 1466, 578, 1466, 1466, 1466, 1466, 1761, 1466, 577,
/* 2340 */ 1466, 1466, 1823, 1466, 1466, 1466, 309, 1792, 580, 1794,
/* 2350 */ 1795, 576, 1791, 571, 1466, 1466, 1466, 1466, 1466, 1466,
/* 2360 */ 1466, 1466, 1791, 1823, 1466, 1466, 1466, 311, 1792, 580,
/* 2370 */ 1794, 1795, 576, 1466, 571, 1466, 1466, 1466, 1466, 1466,
/* 2380 */ 1809, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 578, 1466,
/* 2390 */ 1809, 1466, 1466, 1761, 549, 577, 1466, 1466, 578, 1466,
/* 2400 */ 1466, 1466, 1466, 1761, 1466, 577, 1466, 1466, 1466, 1466,
/* 2410 */ 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1823,
/* 2420 */ 1466, 1466, 127, 308, 1792, 580, 1794, 1795, 576, 1823,
/* 2430 */ 571, 1466, 1466, 288, 1792, 580, 1794, 1795, 576, 1466,
/* 2440 */ 571, 549, 554, 1466, 1466, 1466, 1466, 1466, 1466, 1466,
/* 2450 */ 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466,
/* 2460 */ 1466, 125, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 127,
/* 2470 */ 1466, 1466, 1466, 1466, 1466, 1466, 251, 1877, 548, 1466,
/* 2480 */ 547, 1466, 1466, 1935, 1466, 1466, 1466, 1466, 1466, 554,
/* 2490 */ 1466, 1466, 1466, 1466, 1466, 1466, 167, 1466, 1466, 1466,
/* 2500 */ 1932, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 125, 1466,
/* 2510 */ 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466,
/* 2520 */ 1466, 1466, 1466, 251, 1877, 548, 1466, 547, 1466, 1466,
/* 2530 */ 1935, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466, 1466,
/* 2540 */ 1466, 1466, 1466, 165, 1466, 1466, 1466, 1932,
};
static const YYCODETYPE yy_lookahead[] = {
/* 0 */ 316, 390, 391, 316, 316, 312, 316, 314, 315, 1,
@ -647,30 +647,30 @@ static const YYCODETYPE yy_lookahead[] = {
/* 1690 */ 382, 383, 384, 35, 386, 0, 35, 22, 20, 0,
/* 1700 */ 47, 35, 308, 0, 154, 52, 53, 54, 55, 56,
/* 1710 */ 157, 22, 336, 173, 0, 0, 0, 0, 0, 90,
/* 1720 */ 344, 182, 89, 0, 89, 349, 89, 351, 0, 157,
/* 1730 */ 336, 157, 159, 39, 46, 155, 153, 99, 344, 43,
/* 1720 */ 344, 35, 89, 0, 89, 349, 89, 351, 0, 157,
/* 1730 */ 336, 157, 159, 39, 46, 155, 43, 99, 344, 153,
/* 1740 */ 43, 88, 231, 349, 91, 351, 46, 43, 89, 43,
/* 1750 */ 231, 375, 90, 89, 89, 379, 380, 381, 382, 383,
/* 1760 */ 384, 308, 386, 90, 90, 89, 46, 90, 89, 375,
/* 1760 */ 384, 308, 386, 90, 90, 182, 89, 46, 89, 375,
/* 1770 */ 90, 46, 89, 379, 380, 381, 382, 383, 384, 89,
/* 1780 */ 386, 89, 43, 90, 46, 43, 46, 35, 90, 336,
/* 1790 */ 35, 90, 90, 35, 35, 35, 35, 344, 2, 22,
/* 1800 */ 225, 89, 349, 90, 351, 193, 153, 154, 231, 156,
/* 1780 */ 386, 90, 90, 43, 46, 89, 43, 46, 90, 336,
/* 1790 */ 35, 90, 90, 35, 35, 35, 35, 344, 35, 2,
/* 1800 */ 225, 22, 349, 90, 351, 193, 153, 154, 231, 156,
/* 1810 */ 308, 43, 90, 160, 90, 46, 89, 46, 89, 89,
/* 1820 */ 89, 22, 195, 89, 35, 90, 35, 90, 375, 176,
/* 1820 */ 89, 89, 22, 89, 35, 90, 35, 90, 375, 176,
/* 1830 */ 35, 308, 379, 380, 381, 382, 383, 384, 336, 386,
/* 1840 */ 89, 100, 89, 35, 90, 89, 344, 90, 35, 35,
/* 1850 */ 22, 349, 89, 351, 90, 89, 113, 113, 113, 336,
/* 1860 */ 113, 89, 89, 101, 43, 35, 22, 344, 89, 62,
/* 1870 */ 61, 35, 349, 35, 351, 35, 35, 375, 35, 68,
/* 1840 */ 89, 89, 100, 90, 195, 35, 344, 90, 35, 35,
/* 1850 */ 22, 349, 90, 351, 89, 89, 89, 113, 113, 336,
/* 1860 */ 113, 113, 89, 89, 43, 101, 35, 344, 22, 89,
/* 1870 */ 62, 35, 349, 61, 351, 35, 35, 375, 35, 35,
/* 1880 */ 35, 379, 380, 381, 382, 383, 384, 35, 386, 308,
/* 1890 */ 87, 43, 35, 35, 22, 35, 22, 35, 375, 308,
/* 1900 */ 35, 68, 379, 380, 381, 382, 383, 384, 35, 386,
/* 1910 */ 35, 35, 35, 35, 22, 35, 0, 336, 35, 0,
/* 1920 */ 39, 35, 0, 39, 35, 344, 47, 336, 0, 47,
/* 1930 */ 349, 35, 351, 39, 47, 344, 39, 0, 47, 35,
/* 1940 */ 349, 35, 351, 0, 20, 22, 21, 427, 22, 22,
/* 1950 */ 21, 308, 427, 427, 427, 427, 375, 427, 427, 427,
/* 1890 */ 68, 87, 43, 35, 35, 22, 35, 22, 375, 308,
/* 1900 */ 35, 35, 379, 380, 381, 382, 383, 384, 22, 386,
/* 1910 */ 68, 35, 35, 35, 35, 35, 35, 336, 0, 35,
/* 1920 */ 0, 47, 39, 35, 39, 344, 0, 336, 35, 47,
/* 1930 */ 349, 39, 351, 0, 47, 344, 35, 39, 47, 0,
/* 1940 */ 349, 35, 351, 0, 35, 22, 21, 427, 427, 22,
/* 1950 */ 22, 308, 21, 427, 20, 427, 375, 427, 427, 427,
/* 1960 */ 379, 380, 381, 382, 383, 384, 375, 386, 308, 427,
/* 1970 */ 379, 380, 381, 382, 383, 384, 427, 386, 427, 336,
/* 1980 */ 427, 427, 427, 427, 427, 427, 427, 344, 427, 427,
@ -731,7 +731,7 @@ static const YYCODETYPE yy_lookahead[] = {
/* 2530 */ 405, 427, 427, 427, 427, 427, 427, 427, 427, 427,
/* 2540 */ 427, 427, 427, 418, 427, 427, 427, 422,
};
#define YY_SHIFT_COUNT (665)
#define YY_SHIFT_COUNT (666)
#define YY_SHIFT_MIN (0)
#define YY_SHIFT_MAX (1943)
static const unsigned short int yy_shift_ofst[] = {
@ -786,22 +786,22 @@ static const unsigned short int yy_shift_ofst[] = {
/* 480 */ 1626, 1630, 1645, 1663, 1654, 1668, 1656, 1631, 1669, 1657,
/* 490 */ 1650, 1681, 1658, 1682, 1661, 1695, 1675, 1678, 1699, 1553,
/* 500 */ 1666, 1703, 1540, 1689, 1572, 1550, 1714, 1715, 1574, 1573,
/* 510 */ 1716, 1717, 1718, 1633, 1629, 1539, 1723, 1635, 1580, 1637,
/* 520 */ 1728, 1694, 1583, 1659, 1638, 1688, 1696, 1511, 1664, 1662,
/* 530 */ 1665, 1673, 1674, 1676, 1697, 1677, 1679, 1683, 1690, 1680,
/* 540 */ 1704, 1700, 1720, 1692, 1706, 1519, 1693, 1698, 1725, 1575,
/* 550 */ 1739, 1738, 1740, 1701, 1742, 1577, 1702, 1752, 1755, 1758,
/* 560 */ 1759, 1760, 1761, 1702, 1796, 1777, 1612, 1768, 1712, 1713,
/* 570 */ 1727, 1722, 1729, 1724, 1769, 1730, 1731, 1771, 1799, 1627,
/* 580 */ 1734, 1741, 1735, 1789, 1791, 1751, 1737, 1795, 1753, 1754,
/* 590 */ 1808, 1756, 1757, 1813, 1763, 1764, 1814, 1766, 1743, 1744,
/* 600 */ 1745, 1747, 1828, 1762, 1772, 1773, 1830, 1779, 1821, 1821,
/* 610 */ 1844, 1807, 1809, 1836, 1838, 1840, 1841, 1843, 1845, 1852,
/* 620 */ 1811, 1803, 1848, 1857, 1858, 1872, 1860, 1874, 1862, 1865,
/* 630 */ 1833, 1615, 1873, 1619, 1875, 1876, 1877, 1878, 1892, 1880,
/* 640 */ 1916, 1883, 1879, 1881, 1919, 1886, 1882, 1884, 1922, 1889,
/* 650 */ 1887, 1894, 1928, 1896, 1891, 1897, 1937, 1904, 1906, 1943,
/* 660 */ 1923, 1925, 1926, 1927, 1929, 1924,
/* 510 */ 1716, 1717, 1718, 1633, 1629, 1686, 1583, 1723, 1635, 1580,
/* 520 */ 1637, 1728, 1694, 1586, 1659, 1638, 1688, 1693, 1511, 1664,
/* 530 */ 1662, 1665, 1673, 1674, 1677, 1697, 1680, 1679, 1683, 1690,
/* 540 */ 1691, 1704, 1700, 1721, 1696, 1706, 1519, 1692, 1698, 1725,
/* 550 */ 1575, 1740, 1738, 1741, 1701, 1743, 1577, 1702, 1755, 1758,
/* 560 */ 1759, 1760, 1761, 1763, 1702, 1797, 1779, 1612, 1768, 1727,
/* 570 */ 1713, 1729, 1722, 1730, 1724, 1769, 1731, 1732, 1771, 1800,
/* 580 */ 1649, 1734, 1742, 1735, 1789, 1791, 1751, 1737, 1795, 1752,
/* 590 */ 1753, 1810, 1765, 1757, 1813, 1766, 1762, 1814, 1767, 1744,
/* 600 */ 1745, 1747, 1748, 1828, 1764, 1773, 1774, 1831, 1780, 1821,
/* 610 */ 1821, 1846, 1808, 1812, 1836, 1840, 1841, 1843, 1844, 1845,
/* 620 */ 1852, 1822, 1804, 1849, 1858, 1859, 1873, 1861, 1875, 1865,
/* 630 */ 1866, 1842, 1615, 1876, 1619, 1877, 1878, 1879, 1880, 1886,
/* 640 */ 1881, 1918, 1884, 1874, 1883, 1920, 1888, 1882, 1885, 1926,
/* 650 */ 1893, 1887, 1892, 1933, 1901, 1891, 1898, 1939, 1906, 1909,
/* 660 */ 1943, 1923, 1925, 1927, 1928, 1931, 1934,
};
#define YY_REDUCE_COUNT (275)
#define YY_REDUCE_MIN (-389)
@ -837,73 +837,73 @@ static const short yy_reduce_ofst[] = {
/* 270 */ 1068, 1113, 1114, 1118, 1132, 1149,
};
static const YYACTIONTYPE yy_default[] = {
/* 0 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 10 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 20 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 30 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 40 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 50 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 60 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 70 */ 1463, 1463, 1463, 1463, 1463, 1537, 1463, 1463, 1463, 1463,
/* 80 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 90 */ 1463, 1463, 1535, 1693, 1463, 1870, 1463, 1463, 1463, 1463,
/* 100 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 110 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 120 */ 1463, 1463, 1537, 1463, 1535, 1882, 1882, 1882, 1463, 1463,
/* 130 */ 1463, 1463, 1736, 1736, 1463, 1463, 1463, 1463, 1635, 1463,
/* 140 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1728, 1463, 1951,
/* 150 */ 1463, 1463, 1463, 1734, 1905, 1463, 1463, 1463, 1463, 1588,
/* 160 */ 1897, 1874, 1888, 1875, 1872, 1936, 1936, 1936, 1891, 1463,
/* 170 */ 1901, 1463, 1721, 1698, 1463, 1463, 1698, 1695, 1695, 1463,
/* 180 */ 1463, 1463, 1463, 1463, 1463, 1537, 1463, 1537, 1463, 1463,
/* 190 */ 1537, 1463, 1537, 1537, 1537, 1463, 1537, 1463, 1463, 1463,
/* 200 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 210 */ 1463, 1463, 1463, 1535, 1730, 1463, 1535, 1463, 1463, 1463,
/* 220 */ 1535, 1910, 1463, 1463, 1463, 1463, 1910, 1463, 1463, 1535,
/* 230 */ 1463, 1535, 1463, 1463, 1463, 1912, 1910, 1463, 1463, 1912,
/* 240 */ 1910, 1463, 1463, 1463, 1924, 1920, 1912, 1928, 1926, 1903,
/* 250 */ 1901, 1888, 1463, 1463, 1942, 1938, 1954, 1942, 1938, 1942,
/* 260 */ 1938, 1463, 1604, 1463, 1463, 1463, 1535, 1495, 1463, 1723,
/* 270 */ 1736, 1638, 1638, 1638, 1538, 1468, 1463, 1463, 1463, 1463,
/* 280 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1807, 1923,
/* 290 */ 1922, 1846, 1845, 1844, 1842, 1806, 1463, 1600, 1805, 1804,
/* 300 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1798, 1799,
/* 310 */ 1797, 1796, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 320 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 330 */ 1871, 1463, 1939, 1943, 1463, 1463, 1463, 1463, 1463, 1782,
/* 340 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 350 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 360 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 370 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 380 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 390 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 400 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 410 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 420 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 430 */ 1463, 1463, 1463, 1463, 1500, 1463, 1463, 1463, 1463, 1463,
/* 440 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 450 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 460 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 470 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1572, 1571,
/* 480 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 490 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 500 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 510 */ 1463, 1463, 1463, 1463, 1463, 1463, 1740, 1463, 1463, 1463,
/* 520 */ 1463, 1463, 1463, 1463, 1463, 1463, 1904, 1463, 1463, 1463,
/* 530 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 540 */ 1463, 1463, 1782, 1463, 1921, 1463, 1881, 1877, 1463, 1463,
/* 550 */ 1873, 1781, 1463, 1463, 1937, 1463, 1463, 1463, 1463, 1463,
/* 560 */ 1463, 1463, 1463, 1463, 1866, 1463, 1463, 1839, 1824, 1463,
/* 570 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1792,
/* 580 */ 1463, 1463, 1463, 1463, 1463, 1632, 1463, 1463, 1463, 1463,
/* 590 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1617, 1615,
/* 600 */ 1614, 1613, 1463, 1610, 1463, 1463, 1463, 1463, 1641, 1640,
/* 610 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 620 */ 1463, 1463, 1556, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 630 */ 1463, 1548, 1463, 1547, 1463, 1463, 1463, 1463, 1463, 1463,
/* 640 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 650 */ 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463, 1463,
/* 660 */ 1463, 1463, 1463, 1463, 1463, 1463,
/* 0 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 10 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 20 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 30 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 40 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 50 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 60 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 70 */ 1464, 1464, 1464, 1464, 1464, 1538, 1464, 1464, 1464, 1464,
/* 80 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 90 */ 1464, 1464, 1536, 1694, 1464, 1871, 1464, 1464, 1464, 1464,
/* 100 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 110 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 120 */ 1464, 1464, 1538, 1464, 1536, 1883, 1883, 1883, 1464, 1464,
/* 130 */ 1464, 1464, 1737, 1737, 1464, 1464, 1464, 1464, 1636, 1464,
/* 140 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1729, 1464, 1952,
/* 150 */ 1464, 1464, 1464, 1735, 1906, 1464, 1464, 1464, 1464, 1589,
/* 160 */ 1898, 1875, 1889, 1876, 1873, 1937, 1937, 1937, 1892, 1464,
/* 170 */ 1902, 1464, 1722, 1699, 1464, 1464, 1699, 1696, 1696, 1464,
/* 180 */ 1464, 1464, 1464, 1464, 1464, 1538, 1464, 1538, 1464, 1464,
/* 190 */ 1538, 1464, 1538, 1538, 1538, 1464, 1538, 1464, 1464, 1464,
/* 200 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 210 */ 1464, 1464, 1464, 1536, 1731, 1464, 1536, 1464, 1464, 1464,
/* 220 */ 1536, 1911, 1464, 1464, 1464, 1464, 1911, 1464, 1464, 1536,
/* 230 */ 1464, 1536, 1464, 1464, 1464, 1913, 1911, 1464, 1464, 1913,
/* 240 */ 1911, 1464, 1464, 1464, 1925, 1921, 1913, 1929, 1927, 1904,
/* 250 */ 1902, 1889, 1464, 1464, 1943, 1939, 1955, 1943, 1939, 1943,
/* 260 */ 1939, 1464, 1605, 1464, 1464, 1464, 1536, 1496, 1464, 1724,
/* 270 */ 1737, 1639, 1639, 1639, 1539, 1469, 1464, 1464, 1464, 1464,
/* 280 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1808, 1924,
/* 290 */ 1923, 1847, 1846, 1845, 1843, 1807, 1464, 1601, 1806, 1805,
/* 300 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1799, 1800,
/* 310 */ 1798, 1797, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 320 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 330 */ 1872, 1464, 1940, 1944, 1464, 1464, 1464, 1464, 1464, 1783,
/* 340 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 350 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 360 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 370 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 380 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 390 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 400 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 410 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 420 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 430 */ 1464, 1464, 1464, 1464, 1501, 1464, 1464, 1464, 1464, 1464,
/* 440 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 450 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 460 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 470 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1573, 1572,
/* 480 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 490 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 500 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 510 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1741, 1464, 1464,
/* 520 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1905, 1464, 1464,
/* 530 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 540 */ 1464, 1464, 1464, 1783, 1464, 1922, 1464, 1882, 1878, 1464,
/* 550 */ 1464, 1874, 1782, 1464, 1464, 1938, 1464, 1464, 1464, 1464,
/* 560 */ 1464, 1464, 1464, 1464, 1464, 1867, 1464, 1464, 1840, 1825,
/* 570 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 580 */ 1793, 1464, 1464, 1464, 1464, 1464, 1633, 1464, 1464, 1464,
/* 590 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1618,
/* 600 */ 1616, 1615, 1614, 1464, 1611, 1464, 1464, 1464, 1464, 1642,
/* 610 */ 1641, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 620 */ 1464, 1464, 1464, 1557, 1464, 1464, 1464, 1464, 1464, 1464,
/* 630 */ 1464, 1464, 1549, 1464, 1548, 1464, 1464, 1464, 1464, 1464,
/* 640 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 650 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464, 1464,
/* 660 */ 1464, 1464, 1464, 1464, 1464, 1464, 1464,
};
/********** End of lemon-generated parsing tables *****************************/
@ -2024,7 +2024,7 @@ static const char *const yyRuleName[] = {
/* 272 */ "stream_options ::= stream_options TRIGGER WINDOW_CLOSE",
/* 273 */ "stream_options ::= stream_options TRIGGER MAX_DELAY duration_literal",
/* 274 */ "stream_options ::= stream_options WATERMARK duration_literal",
/* 275 */ "stream_options ::= stream_options IGNORE EXPIRED",
/* 275 */ "stream_options ::= stream_options IGNORE EXPIRED NK_INTEGER",
/* 276 */ "cmd ::= KILL CONNECTION NK_INTEGER",
/* 277 */ "cmd ::= KILL QUERY NK_STRING",
/* 278 */ "cmd ::= KILL TRANSACTION NK_INTEGER",
@ -3113,7 +3113,7 @@ static const struct {
{ 362, -3 }, /* (272) stream_options ::= stream_options TRIGGER WINDOW_CLOSE */
{ 362, -4 }, /* (273) stream_options ::= stream_options TRIGGER MAX_DELAY duration_literal */
{ 362, -3 }, /* (274) stream_options ::= stream_options WATERMARK duration_literal */
{ 362, -3 }, /* (275) stream_options ::= stream_options IGNORE EXPIRED */
{ 362, -4 }, /* (275) stream_options ::= stream_options IGNORE EXPIRED NK_INTEGER */
{ 305, -3 }, /* (276) cmd ::= KILL CONNECTION NK_INTEGER */
{ 305, -3 }, /* (277) cmd ::= KILL QUERY NK_STRING */
{ 305, -3 }, /* (278) cmd ::= KILL TRANSACTION NK_INTEGER */
@ -4297,9 +4297,9 @@ static YYACTIONTYPE yy_reduce(
{ ((SStreamOptions*)yymsp[-3].minor.yy840)->triggerType = STREAM_TRIGGER_MAX_DELAY; ((SStreamOptions*)yymsp[-3].minor.yy840)->pDelay = releaseRawExprNode(pCxt, yymsp[0].minor.yy840); yylhsminor.yy840 = yymsp[-3].minor.yy840; }
yymsp[-3].minor.yy840 = yylhsminor.yy840;
break;
case 275: /* stream_options ::= stream_options IGNORE EXPIRED */
{ ((SStreamOptions*)yymsp[-2].minor.yy840)->ignoreExpired = true; yylhsminor.yy840 = yymsp[-2].minor.yy840; }
yymsp[-2].minor.yy840 = yylhsminor.yy840;
case 275: /* stream_options ::= stream_options IGNORE EXPIRED NK_INTEGER */
{ ((SStreamOptions*)yymsp[-3].minor.yy840)->ignoreExpired = taosStr2Int8(yymsp[0].minor.yy0.z, NULL, 10); yylhsminor.yy840 = yymsp[-3].minor.yy840; }
yymsp[-3].minor.yy840 = yylhsminor.yy840;
break;
case 276: /* cmd ::= KILL CONNECTION NK_INTEGER */
{ pCxt->pRootNode = createKillStmt(pCxt, QUERY_NODE_KILL_CONNECTION_STMT, &yymsp[0].minor.yy0); }

View File

@ -571,7 +571,7 @@ TEST_F(ParserInitialCTest, createStream) {
auto setCreateStreamReqFunc = [&](const char* pStream, const char* pSrcDb, const char* pSql,
const char* pDstStb = nullptr, int8_t igExists = 0,
int8_t triggerType = STREAM_TRIGGER_AT_ONCE, int64_t maxDelay = 0,
int64_t watermark = 0, int8_t igExpired = 0) {
int64_t watermark = 0, int8_t igExpired = STREAM_DEFAULT_IGNORE_EXPIRED) {
snprintf(expect.name, sizeof(expect.name), "0.%s", pStream);
snprintf(expect.sourceDB, sizeof(expect.sourceDB), "0.%s", pSrcDb);
if (NULL != pDstStb) {
@ -617,11 +617,11 @@ TEST_F(ParserInitialCTest, createStream) {
clearCreateStreamReq();
setCreateStreamReqFunc("s1", "test",
"create stream if not exists s1 trigger max_delay 20s watermark 10s ignore expired into st1 "
"create stream if not exists s1 trigger max_delay 20s watermark 10s ignore expired 0 into st1 "
"as select count(*) from t1 interval(10s)",
"st1", 1, STREAM_TRIGGER_MAX_DELAY, 20 * MILLISECOND_PER_SECOND, 10 * MILLISECOND_PER_SECOND,
1);
run("CREATE STREAM IF NOT EXISTS s1 TRIGGER MAX_DELAY 20s WATERMARK 10s IGNORE EXPIRED INTO st1 AS SELECT COUNT(*) "
0);
run("CREATE STREAM IF NOT EXISTS s1 TRIGGER MAX_DELAY 20s WATERMARK 10s IGNORE EXPIRED 0 INTO st1 AS SELECT COUNT(*) "
"FROM t1 INTERVAL(10S)");
clearCreateStreamReq();
}

View File

@ -13,21 +13,13 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <functional>
#include <gtest/gtest.h>
#include "mockCatalogService.h"
#include "os.h"
#include "parInt.h"
#include "parTestUtil.h"
using namespace std;
using namespace std::placeholders;
using namespace testing;
namespace {
string toString(int32_t code) { return tstrerror(code); }
} // namespace
namespace ParserTest {
// syntax:
// INSERT INTO
@ -36,259 +28,60 @@ string toString(int32_t code) { return tstrerror(code); }
// [(field1_name, ...)]
// VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
// [...];
class InsertTest : public Test {
protected:
InsertTest() : res_(nullptr) {}
~InsertTest() { reset(); }
void setDatabase(const string& acctId, const string& db) {
acctId_ = acctId;
db_ = db;
}
void bind(const char* sql) {
reset();
cxt_.acctId = atoi(acctId_.c_str());
cxt_.db = (char*)db_.c_str();
strcpy(sqlBuf_, sql);
cxt_.sqlLen = strlen(sql);
sqlBuf_[cxt_.sqlLen] = '\0';
cxt_.pSql = sqlBuf_;
}
int32_t run() {
code_ = parseInsertSql(&cxt_, &res_, nullptr);
if (code_ != TSDB_CODE_SUCCESS) {
cout << "code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
}
return code_;
}
int32_t runAsync() {
cxt_.async = true;
bool request = true;
unique_ptr<SParseMetaCache, function<void(SParseMetaCache*)> > metaCache(
new SParseMetaCache(), std::bind(_destoryParseMetaCache, _1, cref(request)));
code_ = parseInsertSyntax(&cxt_, &res_, metaCache.get());
if (code_ != TSDB_CODE_SUCCESS) {
cout << "parseInsertSyntax code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
return code_;
}
unique_ptr<SCatalogReq, void (*)(SCatalogReq*)> catalogReq(new SCatalogReq(),
MockCatalogService::destoryCatalogReq);
code_ = buildCatalogReq(metaCache.get(), catalogReq.get());
if (code_ != TSDB_CODE_SUCCESS) {
cout << "buildCatalogReq code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
return code_;
}
unique_ptr<SMetaData, void (*)(SMetaData*)> metaData(new SMetaData(), MockCatalogService::destoryMetaData);
g_mockCatalogService->catalogGetAllMeta(catalogReq.get(), metaData.get());
metaCache.reset(new SParseMetaCache());
request = false;
code_ = putMetaDataToCache(catalogReq.get(), metaData.get(), metaCache.get());
if (code_ != TSDB_CODE_SUCCESS) {
cout << "putMetaDataToCache code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
return code_;
}
code_ = parseInsertSql(&cxt_, &res_, metaCache.get());
if (code_ != TSDB_CODE_SUCCESS) {
cout << "parseInsertSql code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
return code_;
}
return code_;
}
void dumpReslut() {
SVnodeModifOpStmt* pStmt = getVnodeModifStmt(res_);
size_t num = taosArrayGetSize(pStmt->pDataBlocks);
cout << "payloadType:" << (int32_t)pStmt->payloadType << ", insertType:" << pStmt->insertType
<< ", numOfVgs:" << num << endl;
for (size_t i = 0; i < num; ++i) {
SVgDataBlocks* vg = (SVgDataBlocks*)taosArrayGetP(pStmt->pDataBlocks, i);
cout << "vgId:" << vg->vg.vgId << ", numOfTables:" << vg->numOfTables << ", dataSize:" << vg->size << endl;
SSubmitReq* submit = (SSubmitReq*)vg->pData;
cout << "length:" << ntohl(submit->length) << ", numOfBlocks:" << ntohl(submit->numOfBlocks) << endl;
int32_t numOfBlocks = ntohl(submit->numOfBlocks);
SSubmitBlk* blk = (SSubmitBlk*)(submit + 1);
for (int32_t i = 0; i < numOfBlocks; ++i) {
cout << "Block:" << i << endl;
cout << "\tuid:" << be64toh(blk->uid) << ", tid:" << be64toh(blk->suid) << ", sversion:" << ntohl(blk->sversion)
<< ", dataLen:" << ntohl(blk->dataLen) << ", schemaLen:" << ntohl(blk->schemaLen)
<< ", numOfRows:" << ntohl(blk->numOfRows) << endl;
blk = (SSubmitBlk*)(blk->data + ntohl(blk->dataLen));
}
}
}
void checkReslut(int32_t numOfTables, int32_t numOfRows1, int32_t numOfRows2 = -1) {
SVnodeModifOpStmt* pStmt = getVnodeModifStmt(res_);
ASSERT_EQ(pStmt->payloadType, PAYLOAD_TYPE_KV);
ASSERT_EQ(pStmt->insertType, TSDB_QUERY_TYPE_INSERT);
size_t num = taosArrayGetSize(pStmt->pDataBlocks);
ASSERT_GE(num, 0);
for (size_t i = 0; i < num; ++i) {
SVgDataBlocks* vg = (SVgDataBlocks*)taosArrayGetP(pStmt->pDataBlocks, i);
ASSERT_EQ(vg->numOfTables, numOfTables);
ASSERT_GE(vg->size, 0);
SSubmitReq* submit = (SSubmitReq*)vg->pData;
ASSERT_GE(ntohl(submit->length), 0);
ASSERT_GE(ntohl(submit->numOfBlocks), 0);
int32_t numOfBlocks = ntohl(submit->numOfBlocks);
SSubmitBlk* blk = (SSubmitBlk*)(submit + 1);
for (int32_t i = 0; i < numOfBlocks; ++i) {
ASSERT_EQ(ntohl(blk->numOfRows), (0 == i ? numOfRows1 : (numOfRows2 > 0 ? numOfRows2 : numOfRows1)));
blk = (SSubmitBlk*)(blk->data + ntohl(blk->dataLen));
}
}
}
private:
static const int max_err_len = 1024;
static const int max_sql_len = 1024 * 1024;
static void _destoryParseMetaCache(SParseMetaCache* pMetaCache, bool request) {
destoryParseMetaCache(pMetaCache, request);
delete pMetaCache;
}
void reset() {
memset(&cxt_, 0, sizeof(cxt_));
memset(errMagBuf_, 0, max_err_len);
cxt_.pMsg = errMagBuf_;
cxt_.msgLen = max_err_len;
code_ = TSDB_CODE_SUCCESS;
qDestroyQuery(res_);
res_ = nullptr;
}
SVnodeModifOpStmt* getVnodeModifStmt(SQuery* pQuery) { return (SVnodeModifOpStmt*)pQuery->pRoot; }
string acctId_;
string db_;
char errMagBuf_[max_err_len];
char sqlBuf_[max_sql_len];
SParseContext cxt_;
int32_t code_;
SQuery* res_;
};
class ParserInsertTest : public ParserTestBase {};
// INSERT INTO tb_name [(field1_name, ...)] VALUES (field1_value, ...)
TEST_F(InsertTest, singleTableSingleRowTest) {
setDatabase("root", "test");
TEST_F(ParserInsertTest, singleTableSingleRowTest) {
useDb("root", "test");
bind("insert into t1 values (now, 1, 'beijing', 3, 4, 5)");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
dumpReslut();
checkReslut(1, 1);
run("INSERT INTO t1 VALUES (now, 1, 'beijing', 3, 4, 5)");
bind("insert into t1 (ts, c1, c2, c3, c4, c5) values (now, 1, 'beijing', 3, 4, 5)");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
bind("insert into t1 values (now, 1, 'beijing', 3, 4, 5)");
ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
dumpReslut();
checkReslut(1, 1);
bind("insert into t1 (ts, c1, c2, c3, c4, c5) values (now, 1, 'beijing', 3, 4, 5)");
ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
run("INSERT INTO t1 (ts, c1, c2, c3, c4, c5) VALUES (now, 1, 'beijing', 3, 4, 5)");
}
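The field list in the grammar is optional and may also name only a subset of columns; that partial form is not exercised above. A sketch of such a case with the same fixture follows; it assumes the non-listed columns of t1 are nullable in the mock schema, which this diff does not itself confirm.

```cpp
// Sketch only, not part of this diff: partial column list. Relies on the
// remaining columns of t1 being nullable in the mock schema (an assumption).
TEST_F(ParserInsertTest, singleTablePartialColumnsTest) {
  useDb("root", "test");

  run("INSERT INTO t1 (ts, c1) VALUES (now, 1)");
}
```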
// INSERT INTO tb_name VALUES (field1_value, ...)(field1_value, ...)
TEST_F(InsertTest, singleTableMultiRowTest) {
setDatabase("root", "test");
TEST_F(ParserInsertTest, singleTableMultiRowTest) {
useDb("root", "test");
bind(
"insert into t1 values (now, 1, 'beijing', 3, 4, 5)(now+1s, 2, 'shanghai', 6, 7, 8)"
run("INSERT INTO t1 VALUES (now, 1, 'beijing', 3, 4, 5)"
"(now+1s, 2, 'shanghai', 6, 7, 8)"
"(now+2s, 3, 'guangzhou', 9, 10, 11)");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
dumpReslut();
checkReslut(1, 3);
bind(
"insert into t1 values (now, 1, 'beijing', 3, 4, 5)(now+1s, 2, 'shanghai', 6, 7, 8)"
"(now+2s, 3, 'guangzhou', 9, 10, 11)");
ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
}
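The multi-row form can also be combined with an explicit field list, which none of the cases here do; a possible sketch using the same fixture and mock table:

```cpp
// Sketch only, not part of this diff: explicit field list plus multiple rows.
TEST_F(ParserInsertTest, singleTableMultiRowWithFieldsTest) {
  useDb("root", "test");

  run("INSERT INTO t1 (ts, c1, c2, c3, c4, c5) "
      "VALUES (now, 1, 'beijing', 3, 4, 5)(now+1s, 2, 'shanghai', 6, 7, 8)");
}
```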
// INSERT INTO tb1_name VALUES (field1_value, ...) tb2_name VALUES (field1_value, ...)
TEST_F(InsertTest, multiTableSingleRowTest) {
setDatabase("root", "test");
TEST_F(ParserInsertTest, multiTableSingleRowTest) {
useDb("root", "test");
bind("insert into st1s1 values (now, 1, \"beijing\") st1s2 values (now, 10, \"131028\")");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
dumpReslut();
checkReslut(2, 1);
bind("insert into st1s1 values (now, 1, \"beijing\") st1s2 values (now, 10, \"131028\")");
ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
run("INSERT INTO st1s1 VALUES (now, 1, 'beijing') st1s2 VALUES (now, 10, '131028')");
}
// INSERT INTO tb1_name VALUES (field1_value, ...) tb2_name VALUES (field1_value, ...)
TEST_F(InsertTest, multiTableMultiRowTest) {
setDatabase("root", "test");
TEST_F(ParserInsertTest, multiTableMultiRowTest) {
useDb("root", "test");
bind(
"insert into st1s1 values (now, 1, \"beijing\")(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")"
" st1s2 values (now, 10, \"131028\")(now+1s, 20, \"132028\")");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
dumpReslut();
checkReslut(2, 3, 2);
bind(
"insert into st1s1 values (now, 1, \"beijing\")(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")"
" st1s2 values (now, 10, \"131028\")(now+1s, 20, \"132028\")");
ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
run("INSERT INTO "
"st1s1 VALUES (now, 1, 'beijing')(now+1s, 2, 'shanghai')(now+2s, 3, 'guangzhou') "
"st1s2 VALUES (now, 10, '131028')(now+1s, 20, '132028')");
}
// INSERT INTO
// tb1_name USING st1_name [(tag1_name, ...)] TAGS (tag1_value, ...) VALUES (field1_value, ...)
// tb2_name USING st2_name [(tag1_name, ...)] TAGS (tag1_value, ...) VALUES (field1_value, ...)
TEST_F(InsertTest, autoCreateTableTest) {
setDatabase("root", "test");
TEST_F(ParserInsertTest, autoCreateTableTest) {
useDb("root", "test");
bind(
"insert into st1s1 using st1 tags(1, 'wxy', now) "
"values (now, 1, \"beijing\")(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
dumpReslut();
checkReslut(1, 3);
run("INSERT INTO st1s1 USING st1 TAGS(1, 'wxy', now) "
"VALUES (now, 1, 'beijing')(now+1s, 2, 'shanghai')(now+2s, 3, 'guangzhou')");
bind(
"insert into st1s1 using st1 (tag1, tag2) tags(1, 'wxy') values (now, 1, \"beijing\")"
"(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
run("INSERT INTO st1s1 USING st1 (tag1, tag2) TAGS(1, 'wxy') (ts, c1, c2) "
"VALUES (now, 1, 'beijing')(now+1s, 2, 'shanghai')(now+2s, 3, 'guangzhou')");
bind(
"insert into st1s1 using st1 tags(1, 'wxy', now) "
"values (now, 1, \"beijing\")(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")");
ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
run("INSERT INTO st1s1 (ts, c1, c2) USING st1 (tag1, tag2) TAGS(1, 'wxy') "
"VALUES (now, 1, 'beijing')(now+1s, 2, 'shanghai')(now+2s, 3, 'guangzhou')");
bind(
"insert into st1s1 using st1 (tag1, tag2) tags(1, 'wxy') values (now, 1, \"beijing\")"
"(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")");
ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
bind(
"insert into st1s1 using st1 tags(1, 'wxy', now) values (now, 1, \"beijing\")"
"st1s1 using st1 tags(1, 'wxy', now) values (now+1s, 2, \"shanghai\")");
ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
run("INSERT INTO "
"st1s1 USING st1 (tag1, tag2) TAGS(1, 'wxy') (ts, c1, c2) VALUES (now, 1, 'beijing') "
"st1s2 (ts, c1, c2) USING st1 TAGS(2, 'abc', now) VALUES (now+1s, 2, 'shanghai')");
}
TEST_F(InsertTest, toleranceTest) {
setDatabase("root", "test");
bind("insert into");
ASSERT_NE(run(), TSDB_CODE_SUCCESS);
bind("insert into t");
ASSERT_NE(run(), TSDB_CODE_SUCCESS);
bind("insert into");
ASSERT_NE(runAsync(), TSDB_CODE_SUCCESS);
bind("insert into t");
ASSERT_NE(runAsync(), TSDB_CODE_SUCCESS);
}
} // namespace ParserTest
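One branch of the grammar comment, `FILE csv_file_path`, has no case in this file. If it were worth covering, a test might look like the sketch below; the path is a placeholder, and whether run() succeeds when the file does not exist on disk is not established by this diff, so treat it purely as a sketch.

```cpp
// Hypothetical, not part of this change: the FILE branch of the INSERT
// grammar. "/tmp/t1_rows.csv" is a placeholder path, not a file in the repo.
TEST_F(ParserInsertTest, insertFromFileTest) {
  useDb("root", "test");

  run("INSERT INTO t1 FILE '/tmp/t1_rows.csv'");
}
```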

Some files were not shown because too many files have changed in this diff.