Merge pull request #12963 from taosdata/3.0

3.0
This commit is contained in:
WANG MINGMING 2022-05-25 11:47:19 +08:00 committed by GitHub
commit 131e923718
97 changed files with 5553 additions and 617 deletions

3
.gitignore vendored
View File

@ -46,6 +46,7 @@ psim/
pysim/
*.out
*DS_Store
tests/script/api/batchprepare
# Doxygen Generated files
html/
@ -108,4 +109,4 @@ TAGS
contrib/*
!contrib/CMakeLists.txt
!contrib/test
sql
sql

View File

@ -5,39 +5,39 @@ toc_max_heading_level: 2
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
This section introduces the major features, competitive advantages, suited scenarios and benchmarks to help you get a high level picture for TDengine.
This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
## Major Features
The major features are listed below:
1. Besides [using SQL to insert](/develop/insert-data/sql-writing), it supports [Schemaless writing](/reference/schemaless/), and it supports [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) and other protocols.
2. Support for seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a line of code, those agents can write data points into TDengine just by configuration.
3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation, etc.
4. Support for [user defined functions](/develop/udf)
1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) among others.
2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
4. Support for [user defined functions](/develop/udf).
5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
6. Support for [continuous query](/develop/continuous-query).
7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
9. Provides interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc query.
9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
11. Provides [monitoring](/operation/monitor) on TDengine running instances.
11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
13. Provides a [REST API](/reference/rest-api/).
14. Supports the seamless integration with [Grafana](/third-party/grafana) for visualization.
14. Supports seamless integration with [Grafana](/third-party/grafana) for visualization.
15. Supports seamless integration with Google Data Studio.
For more detail on features, please read through the whole documentation.
For more details on features, please read through the entire documentation.
## Competitive Advantages
TDengine makes full use of [the characteristics of time series data](https://tdengine.com/2019/07/09/86.html), such as structured, no transaction, rarely delete or update, etc., and builds its own innovative storage engine and computing engine to differentiate itself from other time series databases with the following advantages.
Time-series data is structured, not transactional, and is rarely deleted or updated. TDengine makes full use of [these characteristics of time series data](https://tdengine.com/2019/07/09/86.html) to build its own innovative storage engine and computing engine to differentiate itself from other time series databases, with the following advantages.
- **[High Performance](https://tdengine.com/fast)**: TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage cost and compute costs, with an innovatively designed and purpose-built storage engine.
- **[High Performance](https://tdengine.com/fast)**: With an innovatively designed and purpose-built storage engine, TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage costs and compute costs.
- **[Scalable](https://tdengine.com/scalable)**: TDengine provides out-of-the-box scalability and high availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to better handle time-series data. Keeping NoSQL developers in mind, TDengine also supports convenient and flexible schemaless data ingestion.
- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
@ -45,24 +45,24 @@ TDengine makes full use of [the characteristics of time series data](https://tde
- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine's running status can be monitored via Grafana or other DevOps tools.
- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, there are zero learning costs.
- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, and a REST API, there are zero learning costs.
- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
- **Interactive Console**: TDengine provides convenient console access to the database, through a CLI, to run ad hoc queries, maintain the database, or manage the cluster without any programming.
With TDengine, the total cost of ownership of time-series data platform can be greatly reduced. Because 1: with its superior performance, the computing and storage resources are reduced significantly; 2: with SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly; 3: with its simple architecture and zero management, the operation and maintenance costs are reduced.
With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced. 1: With its superior performance, the computing and storage resources are reduced significantly. 2: With SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly. 3: With its simple architecture and zero management, the operation and maintenance costs are reduced.
## Technical Ecosystem
In the time-series data processing platform, TDengine stands in a role like this diagram below:
This is how TDengine would be situated in a typical time-series data processing platform:
![TDengine Technical Ecosystem ](eco_system.png)
<center>Figure 1. TDengine Technical Ecosystem</center>
On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides interactive command-line interface and web interface for management and maintenance.
On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
## Suited Scenarios
## Typical Use Cases
As a high-performance, scalable and SQL supported time-series database, TDengine's typical application scenarios include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM, etc. This section makes a more detailed analysis of the applicable scenarios.
As a high-performance, scalable and SQL supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios.
### Characteristics and Requirements of Data Sources

View File

@ -2,7 +2,7 @@
title: Concepts
---
In order to explain the basic concepts and provide some sample code, the TDengine documentation takes smart meters as a typical time series data scenario. Assuming that each smart meter collects three metrics of current, voltage, and phase, there are multiple smart meters, and each meter has static attributes like location and group ID, the collected data will be similar to the following table:
In order to explain the basic concepts and provide some sample code, the TDengine documentation uses smart meters as a typical time series use case. We assume the following: 1. Each smart meter collects three metrics, i.e. current, voltage, and phase; 2. There are multiple smart meters; and 3. Each meter has static attributes like location and group ID. Based on this, collected data will look similar to the following table:
<div className="center-table">
<table>
@ -29,7 +29,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>10.3</td>
<td>219</td>
<td>0.31</td>
<td>Beijing.Chaoyang</td>
<td>San Jose</td>
<td>2</td>
</tr>
<tr>
@ -38,7 +38,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>10.2</td>
<td>220</td>
<td>0.23</td>
<td>Beijing.Chaoyang</td>
<td>San Jose</td>
<td>3</td>
</tr>
<tr>
@ -47,7 +47,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>11.5</td>
<td>221</td>
<td>0.35</td>
<td>Beijing.Haidian</td>
<td>Mountain View</td>
<td>3</td>
</tr>
<tr>
@ -56,7 +56,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>13.4</td>
<td>223</td>
<td>0.29</td>
<td>Beijing.Haidian</td>
<td>Mountain View</td>
<td>2</td>
</tr>
<tr>
@ -65,7 +65,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>12.6</td>
<td>218</td>
<td>0.33</td>
<td>Beijing.Chaoyang</td>
<td>San Jose</td>
<td>2</td>
</tr>
<tr>
@ -74,7 +74,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>11.8</td>
<td>221</td>
<td>0.28</td>
<td>Beijing.Haidian</td>
<td>Mountain View</td>
<td>2</td>
</tr>
<tr>
@ -83,7 +83,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>10.3</td>
<td>218</td>
<td>0.25</td>
<td>Beijing.Chaoyang</td>
<td>San Jose</td>
<td>3</td>
</tr>
<tr>
@ -92,7 +92,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
<td>12.3</td>
<td>221</td>
<td>0.31</td>
<td>Beijing.Chaoyang</td>
<td>San Jose</td>
<td>2</td>
</tr>
</tbody>
@ -112,7 +112,7 @@ Label/Tag refers to the static properties of sensors, equipment or other types o
## Data Collection Point
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipments, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car, so in this example the car would have three data collection points.
Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
## Table
@ -122,10 +122,10 @@ To make full use of time-series data characteristics, TDengine adopts a strategy
1. Since the metric data from different DCPs are fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
2. For a DCP, the metric data generated by the DCP is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed.
3. The metric data from a DCP is continuously stored in block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, this allows for a higher compression rate.
3. The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, which allows for a higher compression rate.
If the metric data of multiple DCPs are traditionally written into a single table, due to the uncontrollable network delay, the timing of the data from different DCPs arriving at the server cannot be guaranteed, the writing operation must be protected by locks, and the metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest extent.**
If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the timestamp as the index, and won't build the index on any metrics stored. Column-wise storage is used.
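A minimal sketch of this model, using the smart meter metrics from the table above (the table name is illustrative):

```sql
-- One table per data collection point: the timestamp comes first (used as the index),
-- followed by one column per collected metric.
create table d1001 (ts timestamp, current float, voltage int, phase float);
```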
@ -139,7 +139,7 @@ In the design of TDengine, **a table is used to represent a specific data collec
## Subtable
When creating a table for a specific data collection point, the user can use a STable as a template and specifies the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is:
When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The differences between a regular table and a subtable are:
1. A subtable is a table; all SQL commands that can be applied to a regular table can be applied to a subtable.
2. A subtable is a table with extensions: it has static tags (labels), and these tags can be added, deleted, and updated after it is created. A regular table does not have tags.
3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable. (See the sketch after this list.)
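A minimal sketch of this relationship, using the meters schema that appears later in these docs (tag values are illustrative):

```sql
-- Define the STable template: metric columns plus static tags.
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
-- Create a subtable for one specific DCP by supplying its tag values.
create table d1001 using meters tags ("San Jose", 2);
```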
@ -151,7 +151,7 @@ The relationship between a STable and the subtables created based on this STable
2. The schema of metrics or labels cannot be adjusted through subtables; it can only be changed via the STable. Changes to the schema of a STable take effect immediately for all associated subtables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform the aggregation operation. This reduces the number of data sets to be scanned, which in turn greatly improves the performance of data aggregation across multiple DCPs.
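For example, a sketch of an aggregate query on a STable (assuming the `meters` STable above; the tag value is illustrative):

```sql
-- Subtables are first pruned by the tag filter, then only their data is scanned.
select avg(voltage), max(current) from meters where location = "San Jose";
```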
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
@ -167,4 +167,4 @@ FQDN (Fully Qualified Domain Name) is the full domain name of a specific compute
Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet.
TDengine does not recommend using an IP address to access the cluster, FQDN is recommended for cluster management.
TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management.

View File

@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx";
## Quick Install
The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to the connectors of multiple languages, [RESTful interface](/reference/rest-api) is also provided by [taosAdapter](/reference/taosadapter) in TDengine. Prior to version 2.4.0.0, however, there is no taosAdapter, the RESTful interface is provided by the built-in HTTP service of taosd.
The full package of TDengine includes the server (taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver (taosc), command-line program (CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.

View File

@ -1,7 +1,7 @@
---
sidebar_label: Connection
title: Connect to TDengine
description: "This document explains how to establish connection to TDengine, and briefly introduce how to install and use TDengine connectors."
description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
---
import Tabs from "@theme/Tabs";
@ -19,7 +19,7 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details, please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple programming languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
Any application running on any platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, applications can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, and Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/).
## Establish Connection
@ -31,12 +31,12 @@ There are two ways for a connector to establish connections to TDengine:
Key differences:
1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newere versions.
2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
3. The REST connection is more accessible with cross-platform support; however, it results in a 30% performance downgrade.
## Install Client Driver taosc
If you are choosing to use native connection and the application is not on the same host as TDengine server, the TDengine client driver taosc needs to be installed on the application host. If choosing to use the REST connection or the application is on the same host as TDengine server, this step can be skipped. It's better to use same version of taosc as the server.
If you choose to use the native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the application host. If you choose to use the REST connection or the application is on the same host as the TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.
### Install

View File

@ -22,11 +22,11 @@ import CStmt from "./_c_stmt.mdx";
## Introduction
Application program can execute `INSERT` statement through connectors to insert rows. TAOS CLI can be launched manually to insert data too.
Application programs can execute `INSERT` statements through connectors to insert rows. The TAOS CLI can also be used to manually insert data.
### Insert Single Row
Below SQL statement is used to insert one row into table "d1001".
The below SQL statement is used to insert one row into table "d1001".
```sql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
@ -34,7 +34,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
### Insert Multiple Rows
Multiple rows can be inserted in single SQL statement. Below example inserts 2 rows into table "d1001".
Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
```sql
INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
@ -42,7 +42,7 @@ INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3,
### Insert into Multiple Tables
Data can be inserted into multiple tables in same SQL statement. Below example inserts 2 rows into table "d1001" and 1 row into table "d1002".
Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
```sql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
@ -52,14 +52,14 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::info
- Inserting in batch can gain better performance. Normally, the higher the batch size, the better the performance. Please be noted each single row can't exceed 16K bytes and each single SQL statement can't exceed 1M bytes.
- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the application side and the server side, with the number of inserting threads grows to a specific point, the performance may drop instead of growing. The proper number of threads need to be tested in a specific environment to find the best number.
- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 16K bytes and each SQL statement can't exceed 1MB.
- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number.
:::
:::warning
- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (also the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
- The timestamp of the row to be inserted must be newer than the current time minus the value of parameter `KEEP`. If `KEEP` is set to 3650 days, then data older than 3650 days can't be inserted. The timestamp also can't be newer than the current time plus the value of parameter `DAYS`. If `DAYS` is set to 2, data timestamped more than 2 days in the future can't be inserted.
:::
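For reference, both parameters are set when a database is created; a minimal sketch (the database name is hypothetical):

```sql
-- Retain data for 3650 days; each data file covers 10 days.
create database power keep 3650 days 10;
```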
@ -95,13 +95,13 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::note
1. With either native connection or REST connection, the above samples work well.
2. Please be noted that `use db` can't be used with REST connection because REST connection is stateless, so in the samples `dbName.tbName` is used to specify the table name.
2. Please note that `use db` can't be used with a REST connection because REST connections are stateless, so in the samples `dbName.tbName` is used to specify the table name.
:::
### Insert with Parameter Binding
TDengine also provides Prepare API that support parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From version 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly to improve the insert performance by avoiding the cost of parsing SQL statements.
TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly, improving insert performance by avoiding the cost of parsing SQL statements.
Parameter binding is available only with the native connection.
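As a sketch, the SQL template passed to the prepare API uses `?` placeholders for the values bound at execution time (the table is from the earlier examples):

```sql
-- One placeholder per column: timestamp, current, voltage, phase.
insert into d1001 values (?, ?, ?, ?);
```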

View File

@ -15,16 +15,16 @@ import CTelnet from "./_c_opts_telnet.mdx";
## Introduction
A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs single column data model, so one line can only contains single data column. There can be multiple tags. Each line contains 4 parts as below:
A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single-column data model, so one line can only contain a single data column. There can be multiple tags. Each line contains 4 parts, as below:
```
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
```
- `metric` will be used as STable name.
- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. second and millisecond time precision are supported.\
- `metric` will be used as the STable name.
- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
- `value` is a metric which must be a numeric value; the corresponding column name is "value".
- The last part is tag sets separated by space, all tags will be converted to nchar type automatically.
- The last part is the tag set separated by spaces, all tags will be converted to nchar type automatically.
For example:
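A hypothetical line for the meters use case (metric, timestamp, value, and tags are all illustrative; tag values may not contain spaces):

```
meters.current 1648432611250 11.3 location=San.Jose groupid=3
```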

View File

@ -2,11 +2,11 @@
title: Insert
---
TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, OpenTSDB JSON protocol. Data can be inserted row by row, or in batch. Data from one or more collecting points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, out of order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create stable and table in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted.
TDengine supports multiple protocols for inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row, or in batches. Data from one or more collection points can be inserted simultaneously. Data can be inserted with multiple threads, and out-of-order data and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create STables and tables in advance when using schemaless protocols, and the schemas can be adjusted automatically based on the data being inserted. A sample schemaless row is shown below.
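A hypothetical row in InfluxDB Line protocol for the meters use case (measurement, tag set, field set, then an optional nanosecond timestamp; all names are illustrative):

```
meters,location=San.Jose,groupid=2 current=10.3,voltage=219i,phase=0.31 1648432611249000000
```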
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
```

View File

@ -20,7 +20,7 @@ import CAsync from "./_c_async.mdx";
## Introduction
SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc query. Here is the list of major query functionalities supported by TDengine
SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through the REST API or connectors. The TDengine CLI `taos` can also be used to execute ad-hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
@ -31,7 +31,7 @@ SQL is used by TDengine as the query language. Application programs can send SQL
- Join query with timestamp alignment
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
For example, below SQL statement can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
For example, the SQL statement below can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
```sql
select * from d1001 where voltage > 215 order by ts desc limit 2;
@ -46,15 +46,15 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
Query OK, 2 row(s) in set (0.001100s)
```
To meet the requirements in many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), `last_row` (the last row), more and more functions will be added to better perform in many use cases. Furthermore, continuous query is also supported in TDengine.
To meet the requirements of many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
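A sketch of these functions in use (the table and time range are illustrative; `twa` requires an explicit time range in the `where` clause):

```sql
-- Time-weighted average of current and max-min spread of voltage over the last hour.
select twa(current), spread(voltage) from d1001 where ts > now - 1h and ts <= now;
```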
For detailed query syntax please refer to [Select](/taos-sql/select).
## Aggregation among Tables
In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection points, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same kind of data collection points, can be. Aggregate functions applicable for tables can be used directly on STables, syntax is exactly same.
In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (short for super table), is used in TDengine to represent one kind of data collection point, and a subtable is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same kind of data collection points. Aggregate functions applicable to tables can be used directly on STables; the syntax is exactly the same.
In summary, for a STable, its subtables can be aggregated by a simple query on STable, it's kind of join operation. But tables belong to different STables could not be aggregated.
In summary, for a STable, its subtables can be aggregated by a simple query on the STable; it's a kind of join operation. But tables belonging to different STables cannot be aggregated.
### Example 1
@ -81,11 +81,11 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now -
Query OK, 1 row(s) in set (0.002136s)
```
Join query is allowed between only the tables of same STable. In [Select](/taos-sql/select), all query operations are marked as whether it supports STable or not.
Join queries are only allowed between the subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
## Down Sampling and Interpolation
In IoT use cases, down sampling is widely used to aggregate the data by time range. `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, below SQL statement can be used to get the sum of current every 10 seconds from meters table d1001.
In IoT use cases, down sampling is widely used to aggregate the data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
```
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
@ -96,7 +96,7 @@ taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
Query OK, 2 row(s) in set (0.000883s)
```
Down sampling can also be used for STable. For example, below SQL statement can be used to get the sum of current from all meters in BeiJing.
Down sampling can also be used for a STable. For example, the below SQL statement can be used to get the sum of current from all meters in Beijing.
```
taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s);
@ -110,7 +110,7 @@ taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s
Query OK, 5 row(s) in set (0.001538s)
```
Down sampling also supports time offset. For example, below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
Down sampling also supports time offset. For example, the below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
```
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
@ -124,7 +124,7 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
Query OK, 5 row(s) in set (0.001521s)
```
In many use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling.
In many use cases, it's hard to align the timestamps of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with the same time interval, and application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
Interpolation can be performed in TDengine if there is no data in a time range.
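A sketch combining down sampling and interpolation, assuming the table d1001 from earlier examples (window size and fill strategy are illustrative):

```sql
-- fill(linear) interpolates empty windows; fill(prev) and fill(null) are alternatives.
select avg(current) from d1001 where ts > now - 1h interval(10s) fill(linear);
```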
@ -162,16 +162,16 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
:::note
1. With either REST connection or native connection, the above sample code work well.
2. Please be noted that `use db` can't be used in case of REST connection because it's stateless.
1. With either REST connection or native connection, the above sample code works well.
2. Please note that `use db` can't be used in case of REST connection because it's stateless.
:::
### Asynchronous Query
Besides synchronous query, asynchronous query API is also provided by TDengine to insert or query data more efficiently. With similar hardware and software environment, async API is 2~4 times faster than sync APIs. Async API works in non-blocking mode, which means an operation can be returned without finishing so that the calling thread can switch to other works to improve the performance of the whole application system. Async APIs perform especially better in case of poor network.
Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than the sync APIs. The async API works in non-blocking mode, which means an operation can return without finishing so that the calling thread can switch to other work to improve the performance of the whole application system. Async APIs perform especially well in the case of poor network conditions.
Please be noted that async query can only be used with native connection.
Please note that async query can only be used with a native connection.
<Tabs defaultValue="python" groupId="lang">
<TabItem label="Python" value="python">

View File

@ -4,15 +4,15 @@ description: "Continuous query is a query that's executed automatically accordin
title: "Continuous Query"
---
Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to client or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
Continuous query is a query that's executed automatically according to a predefined frequency to provide aggregate query capability by time window; it is effectively simplified, time-driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of the time window and the forward sliding time need to be specified with parameters `INTERVAL` and `SLIDING` respectively.
Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to time window to achieve down sampling of original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to client or written to TDengine.
Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
There are some differences between continuous query in TDengine and time window computation in stream computing:
- The computation is performed and the result is returned in real time in stream computing, but the computation in continuous query is only started when a time window closes. For example, if the time window is 1 day, then the result will only be generated at 23:59:59.
- If a historical data row is written in to a time widow for which the computation has been finished, the computation will not be performed again and the result will not be pushed to client again either. If the result has been written into TDengine, there will be no update for the result.
- In continuous query, if the result is pushed to client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server either. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.
- If a historical data row is written in to a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
- In continuous query, if the result is pushed to a client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.
## Syntax
@ -30,7 +30,7 @@ SLIDING: The time step for which the time window moves forward each time
## How to Use
In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and sub tables have been created using below SQL statement.
In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.
```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
@ -38,7 +38,7 @@ create table D1001 using meters tags ("Beijing.Chaoyang", 2);
create table D1002 using meters tags ("Beijing.Haidian", 2);
```
The average voltage for each time window of one minute with 30 seconds as the length of moving forward can be retrieved using below SQL statement.
The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.
```sql
select avg(voltage) from meters interval(1m) sliding(30s);
@ -50,13 +50,13 @@ Whenever the above SQL statement is executed, all the existing data will be comp
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
```
Another easier way for same purpose is prepend `create table {tableName} as` before the `select`.
An easier way to achieve this is to prepend `create table {tableName} as` before the `select`.
```sql
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
```
A table named as `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minutes, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:
A table named `avg_vol` will be created automatically, and then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minute, i.e. the latest time window, and the result written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:
```sql
taos> select * from avg_vol;
@ -68,16 +68,16 @@ taos> select * from avg_vol;
2020-07-29 13:39:00.000 | 223.0800000 |
```
Please be noted that the minimum allowed time window is 10 milliseconds, and no upper limit.
Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.
Besides, it's allowed to specify the start and end time of continuous query. If the start time is not specified, the timestamp of the first original row will be considered as the start time; if the end time is not specified, the continuous will be performed infinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in below SQL statement will be started from now and terminated one hour later.
It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.
```sql
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
```
`now` in above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. Besides, to avoid the trouble caused by the delay of original data as much as possible, the actual computation in continuous query is also started with a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the result can only be available a little time later, normally within one minute, after the time window closes.
`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To avoid the trouble caused by delays in receiving data as much as possible, the actual computation in a continuous query is started after a small delay. That means, once a time window closes, the computation is not started immediately. Normally, the results are available shortly, typically within one minute, after the time window closes.
## How to Manage
`show streams` command can be used in TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query.
The `show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and the `kill stream` command can be used to terminate a continuous query.
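A sketch of managing continuous queries from the CLI (the stream id below is hypothetical; use the id reported by `show streams`):

```sql
show streams;
-- Terminate one continuous query by its stream id.
kill stream 3:1;
```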

View File

@ -16,9 +16,9 @@ import CDemo from "./_sub_c.mdx";
## Introduction
According to the time series nature of the data, data inserting in TDengine is similar to data publishing in message queues, they both can be considered as a new data record with timestamp is inserted into the system. Data is stored in ascending order of timestamp inside TDengine, so essentially each table in TDengine can be considered as a message queue.
Due to the nature of time series data, data inserting in TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, so each table in TDengine can essentially be considered as a message queue.
Lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can used `select` statement to subscribe the data from one or more tables. The subscription and and state maintenance is performed on the client side, the client programs polls the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side.
A lightweight service for data subscription and pushing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side; the client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
There are 3 major APIs related to subscription provided in the TDengine client driver.
@ -28,9 +28,9 @@ taos_consume
taos_unsubscribe
```
For more details about these API please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and sub tables please refer to the previous section "continuous query". Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and subtables from the previous section [Continuous Query](/develop/continuous-query) are used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
If we want to get notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
If we want to get a notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
The first way is to query each subtable and record the last timestamp matching the criteria, then after some time query the data later than the recorded timestamp, and repeat this process. The SQL statements for this approach are as below.
@ -40,7 +40,7 @@ select * from D1002 where ts > {last_timestamp2} and current > 10;
...
```
The above way works, but the problem is that the number of `select` statements increases with the number of meters grows. Finally the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number.
The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both the client side and the server side will become unacceptable once the number of meters grows large enough.
A better way is to query on the STable, only one `select` is enough regardless of the number of meters, like below:
@ -48,7 +48,7 @@ A better way is to query on the STable, only one `select` is enough regardless o
select * from meters where ts > {last_timestamp} and current > 10;
```
However, how to choose `last_timestamp` becomes a new problem if using this way. Firstly, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Secondly, the time when the data from different meters may arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fasted" meters is used as `last_timestamp`, some data from other meters may be missed.
However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
All the problems mentioned above can be resolved thoroughly using subscription provided by TDengine.
@ -75,19 +75,19 @@ The parameter `sql` is a `select` statement in which `where` clause can be used
select * from meters where current > 10;
```
Please be noted that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added:
Please note that all the data will be processed because no start time is specified. If only the data generated in the past day needs to be processed, a time related condition can be added:
```sql
select * from meters where ts > now - 1d and current > 10;
```
The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on client side.
The parameter `topic` is the name of the subscription. It must be unique within the client program, but it doesn't need to be globally unique because subscription is implemented in the APIs on the client side.
If the subscription named as `topic` doesn't exist, parameter `restart` would be ignored. If the subscription named as `topic` has been created before by the client program which then exited, when the client program is restarted to use this `topic`, parameter `restart` is used to determine retrieving data from beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again.
If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, the parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from the beginning; if it is **false** (i.e. zero), the data already consumed will not be processed again.
The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` would be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
The last parameter of `taos_subscribe` is the polling interval in milliseconds. In sync mode, if the time difference between two consecutive invocations of `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.
The last second parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode.
The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is simply ignored in sync mode.
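Putting these parameters together, a minimal sketch of creating a subscription in sync mode could look like the following. The connection handle `taos` and the error handling are assumptions for illustration; the signature follows the C/C++ connector referenced above.

```c
#include <stdio.h>
#include "taos.h"  // TDengine client header

// A hedged sketch: create a sync-mode subscription on the STable `meters`.
// `taos` is an already established connection (assumed for illustration).
TAOS_SUB *create_subscription(TAOS *taos) {
  int restart  = 1;     // non-zero: consume from the beginning
  int interval = 1000;  // polling interval in milliseconds
  // sync mode: the callback and its argument are NULL
  TAOS_SUB *tsub = taos_subscribe(taos, restart, "topic-meter-current",
                                  "select * from meters where current > 10;",
                                  NULL, NULL, interval);
  if (tsub == NULL) {
    fprintf(stderr, "failed to create subscription\n");
  }
  return tsub;
}
```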
After a subscription is created, its data can be consumed and processed. Below is the sample code showing how to consume data in sync mode, in the else branch of `if (async)`.
@ -149,22 +149,22 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
taos_unsubscribe(tsub, keep);
```
The second parameter `keep` is used to specify whether to keep the subscription progress on the client sde. If it is **false**, i.e. **0**, then subscription will be restarted from beginning regardless of the `restart` parameter's value in when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with same name as `topic` for each subscription, the subscription will be restarted from beginning if the corresponding progress file is removed.
The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with the same name as `topic` for each subscription; the subscription will be restarted from the beginning if the corresponding progress file is removed.
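To make the `keep` semantics concrete, below is a minimal, hedged sketch of a sync-mode consume loop that ends with `taos_unsubscribe`; it assumes the `tsub` handle created above and uses only the APIs already listed.

```c
// A sketch, not the full sample: consume a few batches, then unsubscribe
// while keeping the progress file so the next run resumes where it stopped.
for (int i = 0; i < 10; ++i) {
  TAOS_RES *res = taos_consume(tsub);  // blocks until the polling interval elapses
  if (res == NULL) continue;
  TAOS_ROW row;
  while ((row = taos_fetch_row(res)) != NULL) {
    // process one newly arrived row here
  }
}
taos_unsubscribe(tsub, 1);  // keep == 1: progress is kept on the client side
```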
Now let's see the effect of the above sample code, assuming the prerequisites below have been met.
- The sample code has been downloaded to the local system
- TDengine has been installed and launched properly on the same system
- The database, STable, sub tables required in the sample code have been ready
- The database, STable, and subtables required in the sample code are ready
It's ready to launch below command in the directory where the sample code resides to compile and start the program.
Launch the command below in the directory where the sample code resides to compile and start the program.
```bash
make
./subscribe -sql='select * from meters where current > 10;'
```
After the program is started, open another terminal and launch TDengine CLI `taos`, then use below SQL commands to insert a row whose current is 12A into table **D1001**.
After the program is started, open another terminal and launch TDengine CLI `taos`, then use the below SQL commands to insert a row whose current is 12A into table **D1001**.
```sql
use test;
@ -232,7 +232,7 @@ Query OK, 5 row(s) in set (0.004896s)
### Run the Examples
The example programs firstly consume all historical data matching the criteria.
The example programs first consume all historical data matching the criteria.
```bash
ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2
View File
@ -10,9 +10,9 @@ Caching the latest data provides the capability of retrieving data in millisecon
The memory space used by the TDengine cache is fixed in size and is configured according to application requirements and system resources. An independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine; memory pools are not shared between vnodes. All the tables belonging to a vnode share the cache memory of that vnode.
Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. It's better to set the size of each block to hold at least tends of rows.
The memory pool is divided into blocks; data is stored in row format in memory and each block follows a FIFO policy. The size of each block is determined by the configuration parameter `cache`, and the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. To be efficient, a cache block should be able to hold at least dozens of records for each table.
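For illustration, a database with an explicit cache configuration might be created as below; the database name and values are examples only (`cache` is in MB).

```sql
-- example values: 16 MB per cache block and 6 blocks per vnode,
-- giving a total cache of 96 MB per vnode
CREATE DATABASE power CACHE 16 BLOCKS 6;
```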
`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing.
The `last_row` function can be used to retrieve the last row of a table or an STable to quickly show the current state of devices on a monitoring screen. For example, the SQL statement below retrieves the latest voltage of all meters in Chaoyang district of Beijing.
```sql
select last_row(voltage) from meters where location='Beijing.Chaoyang';
View File
@ -2,15 +2,15 @@
title: Developer Guide
---
To develop an application using TDengine to process time-series data, we recommend taking the following steps:
To develop an application to process time-series data using TDengine, we recommend taking the following steps:
1. Choose the way for connection to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
2. Design the data model based on your own application scenarios. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" concept; learn about static labels, collected metrics, and subtables. According to the data characteristics, you may decide to create one or more databases, and you should design the STable schema to fit your data.
3. Decide how to insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
4. Based on business requirements, find out what SQL query statements need to be written.
1. Choose the method to connect to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
2. Design the data model based on your own use cases. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" (STable) concept; learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases, and you should design the STable schema to fit your data.
3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose any existing SQL.
5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink.
6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
7. In many scenarios (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](/taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](/reference/connector/). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](/third-party/).
View File
@ -6,15 +6,15 @@ title: Deployment
### Step 1
The FQDN of all hosts need to be setup properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be guaranteed that each FQDN can be accessed (by ping, for example) from any other hosts.
The FQDN of all hosts needs to be set up properly; all the FQDNs need to be configured in the /etc/hosts file of each host. It must be confirmed that each FQDN can be accessed (by ping, for example) from any other host.
On each host command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, need to be checked and revised to make any two hosts accessible to each other.
On each host the command `hostname -f` can be executed to get the hostname, and the `ping` command can be executed to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised to make any two hosts accessible to each other.
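For example, a quick check might look like the following; the host name is illustrative only.

```bash
# show this host's FQDN
hostname -f
# check that another host in the cluster is reachable by FQDN (example name)
ping -c 3 h2.taosdata.com
```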
:::note
- The host where the client program runs also needs to configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
- The host where the client program runs also needs to be configured properly for FQDN, to make sure all client and server hosts can be accessed from any other host. In other words, the hosts where the client is running are also considered a part of the cluster.
- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for port 6030~6042 need to be open if firewall is enabled.
- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP ports 6030~6042 need to be open if a firewall is enabled.
:::
@ -28,7 +28,7 @@ Now it's time to install TDengine on all hosts without starting `taosd`, the ver
### Step 4
Now each physical node (referred to as `dnode` hereinafter, it's abbreviation for "data node") of TDengine need to be configured properly. Please be noted that one dnode doesn't stand for one host, multiple TDengine nodes can be started on single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
Now each physical node (referred to as `dnode` hereinafter, an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host; multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicting. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.
```c
// firstEp is the end point to connect to when any dnode starts
@ -44,9 +44,9 @@ serverPort 6030
#arbitrator ha.taosdata.com:6042
```
`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please also make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting.
`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must be configured to point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resource-related parameters are not conflicting.
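For instance, a second dnode started on the same host would need at least a distinct port and its own directories; the values below are hypothetical.

```c
// hypothetical taos.cfg for a second dnode sharing the host h1.taosdata.com
firstEp    h1.taosdata.com:6030
fqdn       h1.taosdata.com
serverPort 7030
dataDir    /var/lib/taos2
logDir     /var/log/taos2
```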
For all the dnodes in a TDengine cluster, below parameters must be configured as exactly same, any node whose configuration is different from dnodes already in the cluster can't join the cluster.
For all the dnodes in a TDengine cluster, the below parameters must be configured exactly the same; any node whose configuration is different from dnodes already in the cluster can't join the cluster.
| **#** | **Parameter** | **Definition** |
| ----- | ------------------ | --------------------------------------------------------------------------------- |
@ -61,7 +61,7 @@ For all the dnodes in a TDengine cluster, below parameters must be configured as
| 9     | maxVgroupsPerDb    | Maximum number of vgroups that can be used by each DB |
:::note
Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must be configured as same too for each dnode.
Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode.
:::
@ -92,7 +92,7 @@ From the above output, it is shown that the end point of the started dnode is "h
There are a few steps necessary to add other dnodes in the cluster.
Firstly, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please making sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
First, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please make sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`; `firstEp` must be the same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
Then, on the first dnode, use TDengine CLI `taos` to execute the command below to add the end point of the new dnode to the cluster. In the command, "fqdn:port" should be quoted using double quotes.
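With the example FQDN used in this section, the statement takes the form below.

```sql
CREATE DNODE "h2.taosdata.com:6030";
```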
@ -109,6 +109,6 @@ SHOW DNODES;
If the status of the newly added dnode is offline, please check:
- Whether the `taosd` process is running properly or not
- In the log file `taosdlog.0` to see whether the fqdn and port are correct or not
- Check the log file `taosdlog.0` to see whether the fqdn and port are correct
The above process can be repeated to add more dnodes in the cluster.
View File
@ -3,7 +3,7 @@ sidebar_label: Operation
title: Manage DNODEs
---
It has been introduced that how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.\
The previous section [Deployment](/cluster/deploy) introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.
:::note
All the commands to be introduced in this chapter need to be run through TDengine CLI; sometimes it's necessary to use root privileges.
@ -12,7 +12,7 @@ All the commands to be introduced in this chapter need to be run through TDengin
## Show DNODEs
below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode.
The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command after adding or removing a dnode to check the result.
```sql
SHOW DNODES;
@ -39,7 +39,7 @@ USE SOME_DATABASE;
SHOW VGROUPS;
```
The example output is as below:
The example output is below:
```
taos> show dnodes;
@ -87,7 +87,7 @@ taos> show dnodes;
Query OK, 2 row(s) in set (0.001017s)
```
It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get below example output, from which it can be seen that two dnodes are both in "ready" status.
It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, execute the command again to get the example output below, from which it can be seen that both dnodes are in "ready" status.
```
taos> show dnodes;
@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s)
## Drop DNODE
Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`.
Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`.
```sql
DROP DNODE "fqdn:port";
@ -112,7 +112,7 @@ or
DROP DNODE dnodeId;
```
The example output is as below
The example output is below:
```
taos> show dnodes;
@ -139,7 +139,7 @@ In the above example, when `show dnodes` is executed the first time, two dnodes
- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Normally, before dropping a dnode, the data belonging to the dnode needs to be migrated elsewhere.
- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept requests from the dropped dnode.
- dnodeID is allocated automatically and can't be interfered manually. dnodeID is generated in ascending order without duplication.
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
:::
@ -155,7 +155,7 @@ ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
In the above command, `source-dnodeId` is the original dnodeId where the vnode resides, and `dest-dnodeId` specifies the target dnode. vgId (vgroup ID) can be shown by `SHOW VGROUPS`.
Firstly `show vgroups` is executed to show the vgroup distribution.
First `show vgroups` is executed to show the vgroup distribution.
```
taos> show vgroups;
@ -172,7 +172,7 @@ taos> show vgroups;
Query OK, 8 row(s) in set (0.001314s)
```
It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in node 1, now we want to move vgId 18 from dnode 3 to dnode 1. Execute below command in `taos`
It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in dnode 1. Now we want to move vgId 18 from dnode 3 to dnode 1. Execute the below command in `taos`
```
taos> alter dnode 3 balance "vnode:18-dnode:1";
@ -207,7 +207,7 @@ It can be seen from above output that vgId 18 has been moved from dnode 3 to dno
:::note
- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
- Only vnode in normal state, i.e. master or slave, can be moved. vnode can't moved when its in status offline, unsynced or syncing.
- Only a vnode in normal state, i.e. master or slave, can be moved. A vnode can't be moved when it's in offline, unsynced or syncing status.
- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
:::
View File
@ -7,19 +7,19 @@ title: High Availability and Load Balancing
High availability of vnode and mnode can be achieved through replicas in TDengine.
The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. For the purpose of operation, different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, data service would be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". Below SQL statement is used to create a database named as "demo" with 3 replicas.
The number of vnodes is associated with each DB; there can be multiple DBs in a TDengine cluster. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, and the default value is 1. With a single replica, the high availability of the system can't be guaranteed: whenever one node is down, the data service will be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with the error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
```sql
CREATE DATABASE demo replica 3;
```
The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each group is determined by the number of replicas set for the DB. The vnodes in each vgroups store exactly same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in online state, the vgroup is able to serve data access. Otherwise the vgroup can't handle any data access for reading or inserting data.
The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
There may be data for multiple DBs in a dnode, so once a dnode is down, multiple DBs may be affected. However, because vnodes are introduced and there may be a complex mapping between vnodes and dnodes, it's hard to guarantee that the cluster works properly simply because over half of the dnodes are online.
## High Availability of Mnode
Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in synchronous way.
Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using the system parameter `numOfMNodes`; the valid range is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed in a synchronous way.
There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine CLI `taos` to show the mnodes in the cluster.
@ -32,19 +32,19 @@ The end point and role/status (master, slave, unsynced, or offline) of all mnode
For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
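As a sketch, the corresponding entry in `taos.cfg` might look like this; 3 is just an example value within the valid range.

```c
// hypothetical taos.cfg entry enabling mnode high availability
numOfMnodes 3
```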
:::note
If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas. How to configure for them are different and have been described.
If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
:::
## Load Balance
Load balance will be triggered in 3 cades without manual intervention.
Load balance will be triggered in 3 cases without manual intervention.
- When a new dnode joins the cluster, automatic load balancing may be triggered; some data from some dnodes may be transferred to the new dnode automatically.
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
- :::tip
Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled.
:::tip
Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled.
:::
@ -54,7 +54,7 @@ When a dnode is offline, it can be detected by the TDengine cluster. There are t
- If the dnode comes online again before the threshold configured in `offlineThreshold` is reached, it stays in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.
- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. System alert will be generated and automatic load balancing will be triggered too if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not be joined in the cluster automatically, it can only be joined manually by the system operator.
- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically; it can only be re-added manually by the system operator (a configuration sketch follows this list).
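A sketch of the related `taos.cfg` entries; the threshold is only an example value and its unit is seconds.

```c
// hypothetical taos.cfg entries for the behavior described above
balance          1        // 1: automatic load balancing enabled
offlineThreshold 864000   // example: drop a dnode after 10 days offline
```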
:::note
If all the vnodes in a vgroup (or mnodes in an mnode group) are in offline or unsynced status, the master node can only be voted in after all the vnodes or mnodes in the group become online and can exchange status; only then is the vgroup (or mnode group) able to provide service.
@ -63,15 +63,15 @@ If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsyn
## Arbitrator
If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work master node can't be voted. Similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work, a master node can't be voted. A similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
To resolve this problem, a new arbitrator component named `tarbitrator`, abbreviated for TDengine Arbitrator, was introduced. Arbitrator simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. With Arbitrator, any vgroup or mnode group can be considered as having number of member nodes and master node can be selected.
To resolve this problem, a new arbitrator component named `tarbitrator`, short for TDengine Arbitrator, was introduced. The Arbitrator simulates a vnode or mnode but is only responsible for network communication and doesn't handle any actual data access. As long as more than half of the members of a vnode group or mnode group, including the Arbitrator, are available, the vnode group or mnode group can provide data insertion or query services normally.
Normally, it's suggested to configure replica number of each DB or system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
Normally, it's suggested to configure the replica number of each DB or the system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus the arbitrator component can be used to achieve both lower storage cost and high availability.
The Arbitrator component is installed with the server package. For details about how to install, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides its service.
In the configuration file `taos.cfg` of each dnode, parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number.
In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. The Arbitrator component will be used automatically if the replica number is configured to an even number and will be ignored if it is configured to an odd number.
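As a sketch matching the commented-out sample in the `taos.cfg` shown earlier:

```bash
# on the arbitrator host: start tarbitrator on an example port
tarbitrator -p 6042
# then, in taos.cfg of every dnode, point `arbitrator` at that end point:
#   arbitrator  ha.taosdata.com:6042
```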
The Arbitrator can be shown by executing a command in TDengine CLI `taos`, with its role shown as "arb".
View File
@ -3,7 +3,7 @@ title: Cluster
keywords: ["cluster", "high availability", "load balance", "scale out"]
---
TDengine has a native distributed design and provides the ability to scale out. A few of nodes can form a TDengine cluster. If you need to get higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing.
View File
@ -3,9 +3,9 @@ title: TDengine SQL
description: "The syntax supported by TDengine SQL "
---
This section explains the syntax about operating database, table, STable, inserting data, selecting data, functions and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
This section explains the syntax for operating databases, tables and STables, inserting data, selecting data, and using functions, plus some tips that can be used in TDengine SQL. It is easier to understand with some fundamental knowledge of SQL.
TDengine SQL is the major interface for users to write data into or query from TDengine. For users to easily use, syntax similar to standard SQL is provided. However, please be noted that TDengine SQL is not standard SQL. Besides, because TDengine doesn't provide the functionality of deleting time series data, corresponding statements are not provided in TDengine SQL.
TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, syntax similar to standard SQL is provided. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide the functionality of deleting time series data, thus corresponding statements are not provided in TDengine SQL.
TDengine SQL doesn't support abbreviation for keywords; for example, `DESCRIBE` can't be abbreviated as `DESC`.
@ -16,7 +16,7 @@ Syntax Specifications used in this chapter:
- | means one of a few options, excluding | itself.
- … means the item prior to it can be repeated multiple times.
To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data: current, voltage, phase. The data model is as below:
To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters, where each meter collects 3 data measurements: current, voltage and phase. The data model is shown below:
```sql
taos> DESCRIBE meters;
View File
@ -29,6 +29,7 @@ extern "C" {
typedef struct SMnode SMnode;
typedef struct {
bool standby;
bool deploy;
int8_t replica;
int8_t selfIndex;
View File
@ -44,12 +44,9 @@ extern "C" {
}
#define SDB_GET_INT64(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt64, int64_t)
#define SDB_GET_INT32(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt32, int32_t)
#define SDB_GET_INT16(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt16, int16_t)
#define SDB_GET_INT8(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt8, int8_t)
#define SDB_GET_INT8(pData, dataPos, val, pos) SDB_GET_VAL(pData, dataPos, val, pos, sdbGetRawInt8, int8_t)
#define SDB_GET_RESERVE(pRaw, dataPos, valLen, pos) \
{ \
@ -66,11 +63,8 @@ extern "C" {
}
#define SDB_SET_INT64(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt64, int64_t)
#define SDB_SET_INT32(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt32, int32_t)
#define SDB_SET_INT16(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt16, int16_t)
#define SDB_SET_INT8(pRaw, dataPos, val, pos) SDB_SET_VAL(pRaw, dataPos, val, pos, sdbSetRawInt8, int8_t)
#define SDB_SET_BINARY(pRaw, dataPos, val, valLen, pos) \
@ -356,6 +350,14 @@ typedef struct SSdb {
SdbDecodeFp decodeFps[SDB_MAX];
} SSdb;
typedef struct SSdbIter {
TdFilePtr file;
int64_t readlen;
} SSdbIter;
SSdbIter *sdbIterInit(SSdb *pSdb);
SSdbIter *sdbIterRead(SSdb *pSdb, SSdbIter *iter, char **ppBuf, int32_t *len);
#ifdef __cplusplus
}
#endif
View File
@ -39,6 +39,7 @@ typedef bool (*FExecInit)(struct SqlFunctionCtx *pCtx, struct SResultRowEntryInf
typedef int32_t (*FExecProcess)(struct SqlFunctionCtx *pCtx);
typedef int32_t (*FExecFinalize)(struct SqlFunctionCtx *pCtx, SSDataBlock* pBlock);
typedef int32_t (*FScalarExecProcess)(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
typedef int32_t (*FExecCombine)(struct SqlFunctionCtx *pDestCtx, struct SqlFunctionCtx *pSourceCtx);
typedef struct SScalarFuncExecFuncs {
FExecGetEnv getEnv;
@ -50,6 +51,7 @@ typedef struct SFuncExecFuncs {
FExecInit init;
FExecProcess process;
FExecFinalize finalize;
FExecCombine combine;
} SFuncExecFuncs;
typedef struct SFileBlockInfo {
View File
@ -212,6 +212,7 @@ typedef enum ENodeType {
QUERY_NODE_PHYSICAL_PLAN_STREAM_FINAL_INTERVAL,
QUERY_NODE_PHYSICAL_PLAN_FILL,
QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW,
QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW,
QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW,
QUERY_NODE_PHYSICAL_PLAN_PARTITION,
QUERY_NODE_PHYSICAL_PLAN_DISPATCH,
View File
@ -296,6 +296,8 @@ typedef struct SSessionWinodwPhysiNode {
int64_t gap;
} SSessionWinodwPhysiNode;
typedef SSessionWinodwPhysiNode SStreamSessionWinodwPhysiNode;
typedef struct SStateWinodwPhysiNode {
SWinodwPhysiNode window;
SNode* pStateKey;
View File
@ -242,6 +242,7 @@ typedef struct SSelectStmt {
bool hasAggFuncs;
bool hasRepeatScanFuncs;
bool hasIndefiniteRowsFunc;
bool hasSelectValFunc;
} SSelectStmt;
typedef enum ESetOperatorType { SET_OP_TYPE_UNION_ALL = 1, SET_OP_TYPE_UNION } ESetOperatorType;
View File
@ -82,14 +82,29 @@ typedef struct SFsmCbMeta {
SyncTerm currentTerm;
} SFsmCbMeta;
typedef struct SReConfigCbMeta {
int32_t code;
SyncIndex index;
SyncTerm term;
SyncTerm currentTerm;
} SReConfigCbMeta;
typedef struct SSyncFSM {
void* data;
void (*FpCommitCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
void (*FpPreCommitCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
void (*FpRollBackCb)(struct SSyncFSM* pFsm, const SRpcMsg* pMsg, SFsmCbMeta cbMeta);
void (*FpRestoreFinish)(struct SSyncFSM* pFsm);
void (*FpRestoreFinishCb)(struct SSyncFSM* pFsm);
int32_t (*FpGetSnapshot)(struct SSyncFSM* pFsm, SSnapshot* pSnapshot);
int32_t (*FpRestoreSnapshot)(struct SSyncFSM* pFsm, const SSnapshot* snapshot);
void* (*FpSnapshotRead)(struct SSyncFSM* pFsm, const SSnapshot* snapshot, void* iter, char** ppBuf, int32_t* len);
int32_t (*FpSnapshotApply)(struct SSyncFSM* pFsm, const SSnapshot* snapshot, char* pBuf, int32_t len);
void (*FpReConfigCb)(struct SSyncFSM* pFsm, SSyncCfg newCfg, SReConfigCbMeta cbMeta);
// int32_t (*FpRestoreSnapshot)(struct SSyncFSM* pFsm, const SSnapshot* snapshot);
} SSyncFSM;
// abstract definition of log store in raft
View File
@ -428,11 +428,11 @@ enum {
};
#define DEFAULT_HANDLE 0
#define MNODE_HANDLE -1
#define QNODE_HANDLE -2
#define SNODE_HANDLE -3
#define VNODE_HANDLE -4
#define BNODE_HANDLE -5
#define MNODE_HANDLE 1
#define QNODE_HANDLE -1
#define SNODE_HANDLE -2
#define VNODE_HANDLE -3
#define BNODE_HANDLE -4
#define TSDB_CONFIG_OPTION_LEN 16
#define TSDB_CONIIG_VALUE_LEN 48
View File
@ -141,7 +141,9 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) {
if (NULL == pTscObj) {
tscDebug("tscObj rid %" PRIx64 " not exist", pRsp->connKey.tscRid);
} else {
updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet);
if (pRsp->query->totalDnodes > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &pRsp->query->epSet)) {
updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet);
}
pTscObj->connId = pRsp->query->connId;
if (pRsp->query->killRid) {
View File
@ -24,7 +24,6 @@
#define EQUAL '='
#define QUOTE '"'
#define SLASH '\\'
#define tsMaxSQLStringLen (1024*1024)
#define JUMP_SPACE(sql) while (*sql != '\0'){if(*sql == SPACE) sql++;else break;}
// comma ,
@ -63,12 +62,11 @@ for (int i = 1; i < keyLen; ++i) { \
#define TS "_ts"
#define TS_LEN 3
#define VALUE "value"
#define VALUE_LEN 5
#define VALUE "_value"
#define VALUE_LEN 6
#define BINARY_ADD_LEN 2 // "binary" 2 means " "
#define NCHAR_ADD_LEN 3 // L"nchar" 3 means L" "
#define CHAR_SAVE_LENGTH 8
//=================================================================================================
typedef TSDB_SML_PROTOCOL_TYPE SMLProtocolType;
@ -253,12 +251,20 @@ static int32_t smlGenerateSchemaAction(SSchema* colField, SHashObj* colHash, SSm
return 0;
}
static int32_t smlFindNearestPowerOf2(int32_t length){
int32_t result = 1;
while(result <= length){
result *= 2;
}
return result;
}
static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSize, int32_t* outBytes) {
uint8_t type = field->type;
char tname[TSDB_TABLE_NAME_LEN] = {0};
memcpy(tname, field->key, field->keyLen);
if (type == TSDB_DATA_TYPE_BINARY || type == TSDB_DATA_TYPE_NCHAR) {
int32_t bytes = field->length > CHAR_SAVE_LENGTH ? (2*field->length) : CHAR_SAVE_LENGTH;
int32_t bytes = smlFindNearestPowerOf2(field->length);
int out = snprintf(buf, bufSize, "`%s` %s(%d)",
tname, tDataTypes[field->type].name, bytes);
*outBytes = out;
@ -273,8 +279,8 @@ static int32_t smlBuildColumnDescription(SSmlKv* field, char* buf, int32_t bufSi
static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
int32_t code = 0;
int32_t outBytes = 0;
char *result = (char *)taosMemoryCalloc(1, tsMaxSQLStringLen+1);
int32_t capacity = tsMaxSQLStringLen + 1;
char *result = (char *)taosMemoryCalloc(1, TSDB_MAX_ALLOWED_SQL_LEN);
int32_t capacity = TSDB_MAX_ALLOWED_SQL_LEN;
uDebug("SML:0x%"PRIx64" apply schema action. action: %d", info->id, action->action);
switch (action->action) {
@ -398,7 +404,7 @@ static int32_t smlApplySchemaAction(SSmlHandle* info, SSchemaAction* action) {
}
if(taosArrayGetSize(cols) == 0){
outBytes = snprintf(pos, freeBytes,"`%s` %s(%d)",
tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, CHAR_SAVE_LENGTH);
tsSmlTagName, tDataTypes[TSDB_DATA_TYPE_NCHAR].name, 1);
pos += outBytes; freeBytes -= outBytes;
*pos = ','; ++pos; --freeBytes;
}
@ -508,6 +514,11 @@ static int32_t smlModifyDBSchemas(SSmlHandle* info) {
if (code != TSDB_CODE_SUCCESS) {
return code;
}
code = catalogRefreshTableMeta(info->pCatalog, info->taos->pAppInfo->pTransporter, &ep, &pName, -1);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
} else {
uError("SML:0x%"PRIx64" load table meta error: %s", info->id, tstrerror(code));
return code;
View File
@ -53,43 +53,45 @@ int32_t mmReadFile(SMnodeMgmt *pMgmt, bool *pDeployed) {
*pDeployed = deployed->valueint;
cJSON *mnodes = cJSON_GetObjectItem(root, "mnodes");
if (!mnodes || mnodes->type != cJSON_Array) {
dError("failed to read %s since nodes not found", file);
goto _OVER;
}
pMgmt->replica = cJSON_GetArraySize(mnodes);
if (pMgmt->replica <= 0 || pMgmt->replica > TSDB_MAX_REPLICA) {
dError("failed to read %s since mnodes size %d invalid", file, pMgmt->replica);
goto _OVER;
}
for (int32_t i = 0; i < pMgmt->replica; ++i) {
cJSON *node = cJSON_GetArrayItem(mnodes, i);
if (node == NULL) break;
SReplica *pReplica = &pMgmt->replicas[i];
cJSON *id = cJSON_GetObjectItem(node, "id");
if (!id || id->type != cJSON_Number) {
dError("failed to read %s since id not found", file);
if (mnodes != NULL) {
if (!mnodes || mnodes->type != cJSON_Array) {
dError("failed to read %s since nodes not found", file);
goto _OVER;
}
pReplica->id = id->valueint;
cJSON *fqdn = cJSON_GetObjectItem(node, "fqdn");
if (!fqdn || fqdn->type != cJSON_String || fqdn->valuestring == NULL) {
dError("failed to read %s since fqdn not found", file);
pMgmt->replica = cJSON_GetArraySize(mnodes);
if (pMgmt->replica <= 0 || pMgmt->replica > TSDB_MAX_REPLICA) {
dError("failed to read %s since mnodes size %d invalid", file, pMgmt->replica);
goto _OVER;
}
tstrncpy(pReplica->fqdn, fqdn->valuestring, TSDB_FQDN_LEN);
cJSON *port = cJSON_GetObjectItem(node, "port");
if (!port || port->type != cJSON_Number) {
dError("failed to read %s since port not found", file);
goto _OVER;
for (int32_t i = 0; i < pMgmt->replica; ++i) {
cJSON *node = cJSON_GetArrayItem(mnodes, i);
if (node == NULL) break;
SReplica *pReplica = &pMgmt->replicas[i];
cJSON *id = cJSON_GetObjectItem(node, "id");
if (!id || id->type != cJSON_Number) {
dError("failed to read %s since id not found", file);
goto _OVER;
}
pReplica->id = id->valueint;
cJSON *fqdn = cJSON_GetObjectItem(node, "fqdn");
if (!fqdn || fqdn->type != cJSON_String || fqdn->valuestring == NULL) {
dError("failed to read %s since fqdn not found", file);
goto _OVER;
}
tstrncpy(pReplica->fqdn, fqdn->valuestring, TSDB_FQDN_LEN);
cJSON *port = cJSON_GetObjectItem(node, "port");
if (!port || port->type != cJSON_Number) {
dError("failed to read %s since port not found", file);
goto _OVER;
}
pReplica->port = port->valueint;
}
pReplica->port = port->valueint;
}
code = 0;
@ -122,21 +124,23 @@ int32_t mmWriteFile(SMnodeMgmt *pMgmt, SDCreateMnodeReq *pMsg, bool deployed) {
char *content = taosMemoryCalloc(1, maxLen + 1);
len += snprintf(content + len, maxLen - len, "{\n");
len += snprintf(content + len, maxLen - len, " \"mnodes\": [{\n");
int8_t replica = (pMsg != NULL ? pMsg->replica : pMgmt->replica);
for (int32_t i = 0; i < replica; ++i) {
SReplica *pReplica = &pMgmt->replicas[i];
if (pMsg != NULL) {
pReplica = &pMsg->replicas[i];
}
len += snprintf(content + len, maxLen - len, " \"id\": %d,\n", pReplica->id);
len += snprintf(content + len, maxLen - len, " \"fqdn\": \"%s\",\n", pReplica->fqdn);
len += snprintf(content + len, maxLen - len, " \"port\": %u\n", pReplica->port);
if (i < replica - 1) {
len += snprintf(content + len, maxLen - len, " },{\n");
} else {
len += snprintf(content + len, maxLen - len, " }],\n");
if (replica > 0) {
len += snprintf(content + len, maxLen - len, " \"mnodes\": [{\n");
for (int32_t i = 0; i < replica; ++i) {
SReplica *pReplica = &pMgmt->replicas[i];
if (pMsg != NULL) {
pReplica = &pMsg->replicas[i];
}
len += snprintf(content + len, maxLen - len, " \"id\": %d,\n", pReplica->id);
len += snprintf(content + len, maxLen - len, " \"fqdn\": \"%s\",\n", pReplica->fqdn);
len += snprintf(content + len, maxLen - len, " \"port\": %u\n", pReplica->port);
if (i < replica - 1) {
len += snprintf(content + len, maxLen - len, " },{\n");
} else {
len += snprintf(content + len, maxLen - len, " }],\n");
}
}
}
View File
@ -39,32 +39,44 @@ static int32_t mmRequire(const SMgmtInputOpt *pInput, bool *required) {
}
static void mmBuildOptionForDeploy(SMnodeMgmt *pMgmt, const SMgmtInputOpt *pInput, SMnodeOpt *pOption) {
pOption->standby = false;
pOption->deploy = true;
pOption->msgCb = pMgmt->msgCb;
pOption->replica = 1;
pOption->selfIndex = 0;
SReplica *pReplica = &pOption->replicas[0];
pReplica->id = 1;
pReplica->port = tsServerPort;
tstrncpy(pReplica->fqdn, tsLocalFqdn, TSDB_FQDN_LEN);
pOption->deploy = true;
pMgmt->selfIndex = pOption->selfIndex;
pMgmt->replica = pOption->replica;
memcpy(&pMgmt->replicas, pOption->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
}
static void mmBuildOptionForOpen(SMnodeMgmt *pMgmt, SMnodeOpt *pOption) {
pOption->msgCb = pMgmt->msgCb;
pOption->selfIndex = pMgmt->selfIndex;
pOption->replica = pMgmt->replica;
memcpy(&pOption->replicas, pMgmt->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
pOption->deploy = false;
pOption->standby = false;
if (pMgmt->replica > 0) {
pOption->standby = true;
pOption->replica = 1;
pOption->selfIndex = 0;
SReplica *pReplica = &pOption->replicas[0];
for (int32_t i = 0; i < pMgmt->replica; ++i) {
if (pMgmt->replicas[i].id != pMgmt->pData->dnodeId) continue;
pReplica->id = pMgmt->replicas[i].id;
pReplica->port = pMgmt->replicas[i].port;
memcpy(pReplica->fqdn, pMgmt->replicas[i].fqdn, TSDB_FQDN_LEN);
}
}
}
static int32_t mmBuildOptionFromReq(SMnodeMgmt *pMgmt, SMnodeOpt *pOption, SDCreateMnodeReq *pCreate) {
static int32_t mmBuildOptionForAlter(SMnodeMgmt *pMgmt, SMnodeOpt *pOption, SDCreateMnodeReq *pCreate) {
pOption->msgCb = pMgmt->msgCb;
pOption->standby = false;
pOption->deploy = false;
pOption->replica = pCreate->replica;
pOption->selfIndex = -1;
for (int32_t i = 0; i < pCreate->replica; ++i) {
SReplica *pReplica = &pOption->replicas[i];
pReplica->id = pCreate->replicas[i].id;
@ -79,17 +91,13 @@ static int32_t mmBuildOptionFromReq(SMnodeMgmt *pMgmt, SMnodeOpt *pOption, SDCre
dError("failed to build mnode options since %s", terrstr());
return -1;
}
pOption->deploy = true;
pMgmt->selfIndex = pOption->selfIndex;
pMgmt->replica = pOption->replica;
memcpy(&pMgmt->replicas, pOption->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
return 0;
}
int32_t mmAlter(SMnodeMgmt *pMgmt, SDAlterMnodeReq *pMsg) {
SMnodeOpt option = {0};
if (mmBuildOptionFromReq(pMgmt, &option, pMsg) != 0) {
if (mmBuildOptionForAlter(pMgmt, &option, pMsg) != 0) {
return -1;
}
@ -97,12 +105,6 @@ int32_t mmAlter(SMnodeMgmt *pMgmt, SDAlterMnodeReq *pMsg) {
return -1;
}
bool deployed = true;
if (mmWriteFile(pMgmt, pMsg, deployed) != 0) {
dError("failed to write mnode file since %s", terrstr());
return -1;
}
return 0;
}
@ -177,7 +179,8 @@ static int32_t mmOpen(SMgmtInputOpt *pInput, SMgmtOutputOpt *pOutput) {
}
tmsgReportStartup("mnode-worker", "initialized");
if (!deployed) {
if (!deployed || pMgmt->replica > 0) {
pMgmt->replica = 0;
deployed = true;
if (mmWriteFile(pMgmt, NULL, deployed) != 0) {
dError("failed to write mnode file since %s", terrstr());
View File
@ -61,6 +61,11 @@ static void mmProcessSyncQueue(SQueueInfo *pInfo, SRpcMsg *pMsg) {
dTrace("msg:%p, get from mnode-sync queue", pMsg);
pMsg->info.node = pMgmt->pMnode;
SMsgHead *pHead = pMsg->pCont;
pHead->contLen = ntohl(pHead->contLen);
pHead->vgId = ntohl(pHead->vgId);
int32_t code = mndProcessSyncMsg(pMsg);
dTrace("msg:%p, is freed, code:0x%x", pMsg, code);
View File
@ -76,11 +76,12 @@ typedef struct {
typedef struct {
SWal *pWal;
int32_t errCode;
bool restored;
sem_t syncSem;
int64_t sync;
ESyncState state;
bool standby;
bool restored;
int32_t errCode;
} SSyncMgmt;
typedef struct {
View File
@ -39,14 +39,16 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *p
static void mndCancelGetNextMnode(SMnode *pMnode, void *pIter);
int32_t mndInitMnode(SMnode *pMnode) {
SSdbTable table = {.sdbType = SDB_MNODE,
.keyType = SDB_KEY_INT32,
.deployFp = (SdbDeployFp)mndCreateDefaultMnode,
.encodeFp = (SdbEncodeFp)mndMnodeActionEncode,
.decodeFp = (SdbDecodeFp)mndMnodeActionDecode,
.insertFp = (SdbInsertFp)mndMnodeActionInsert,
.updateFp = (SdbUpdateFp)mndMnodeActionUpdate,
.deleteFp = (SdbDeleteFp)mndMnodeActionDelete};
SSdbTable table = {
.sdbType = SDB_MNODE,
.keyType = SDB_KEY_INT32,
.deployFp = (SdbDeployFp)mndCreateDefaultMnode,
.encodeFp = (SdbEncodeFp)mndMnodeActionEncode,
.decodeFp = (SdbDecodeFp)mndMnodeActionDecode,
.insertFp = (SdbInsertFp)mndMnodeActionInsert,
.updateFp = (SdbUpdateFp)mndMnodeActionUpdate,
.deleteFp = (SdbDeleteFp)mndMnodeActionDelete,
};
mndSetMsgHandle(pMnode, TDMT_MND_CREATE_MNODE, mndProcessCreateMnodeReq);
mndSetMsgHandle(pMnode, TDMT_MND_DROP_MNODE, mndProcessDropMnodeReq);
View File
@ -379,7 +379,7 @@ static int32_t mndProcessQueryHeartBeat(SMnode *pMnode, SRpcMsg *pMsg, SClientHb
}
rspBasic->connId = pConn->id;
rspBasic->totalDnodes = 1; // TODO
rspBasic->totalDnodes = mndGetDnodeSize(pMnode);
rspBasic->onlineDnodes = 1; // TODO
mndGetMnodeEpSet(pMnode, &rspBasic->epSet);
mndReleaseConn(pMnode, pConn);
View File
@ -17,22 +17,26 @@
#include "mndSync.h"
#include "mndTrans.h"
int32_t mndSyncEqMsg(const SMsgCb *msgcb, SRpcMsg *pMsg) { return tmsgPutToQueue(msgcb, SYNC_QUEUE, pMsg); }
int32_t mndSyncEqMsg(const SMsgCb *msgcb, SRpcMsg *pMsg) {
SMsgHead *pHead = pMsg->pCont;
pHead->contLen = htonl(pHead->contLen);
pHead->vgId = htonl(pHead->vgId);
return tmsgPutToQueue(msgcb, SYNC_QUEUE, pMsg);
}
int32_t mndSyncSendMsg(const SEpSet *pEpSet, SRpcMsg *pMsg) { return tmsgSendReq(pEpSet, pMsg); }
void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
SMnode *pMnode = pFsm->data;
SSdb *pSdb = pMnode->pSdb;
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
SSdbRaw *pRaw = pMsg->pCont;
SMnode *pMnode = pFsm->data;
SSdbRaw *pRaw = pMsg->pCont;
mTrace("raw:%p, apply to sdb, ver:%" PRId64 " role:%s", pRaw, cbMeta.index, syncStr(cbMeta.state));
sdbWriteWithoutFree(pSdb, pRaw);
sdbSetApplyIndex(pSdb, cbMeta.index);
sdbSetApplyTerm(pSdb, cbMeta.term);
sdbWriteWithoutFree(pMnode->pSdb, pRaw);
sdbSetApplyIndex(pMnode->pSdb, cbMeta.index);
sdbSetApplyTerm(pMnode->pSdb, cbMeta.term);
if (cbMeta.state == TAOS_SYNC_STATE_LEADER) {
tsem_post(&pMgmt->syncSem);
tsem_post(&pMnode->syncMgmt.syncSem);
}
}
@ -49,15 +53,41 @@ void mndRestoreFinish(struct SSyncFSM *pFsm) {
pMnode->syncMgmt.restored = true;
}
void *mndSnapshotRead(struct SSyncFSM *pFsm, const SSnapshot *snapshot, void *iter, char **ppBuf, int32_t *len) {
SMnode *pMnode = pFsm->data;
SSdbIter *pIter = iter;
if (iter == NULL) {
pIter = sdbIterInit(pMnode->pSdb);
}
return sdbIterRead(pMnode->pSdb, pIter, ppBuf, len);
}
int32_t mndSnapshotApply(struct SSyncFSM* pFsm, const SSnapshot* snapshot, char* pBuf, int32_t len) {
SMnode *pMnode = pFsm->data;
sdbWrite(pMnode->pSdb, (SSdbRaw*)pBuf);
return 0;
}
void mndReConfig(struct SSyncFSM* pFsm, SSyncCfg newCfg, SReConfigCbMeta cbMeta) {
}
SSyncFSM *mndSyncMakeFsm(SMnode *pMnode) {
SSyncFSM *pFsm = taosMemoryCalloc(1, sizeof(SSyncFSM));
pFsm->data = pMnode;
pFsm->FpCommitCb = mndSyncCommitMsg;
pFsm->FpPreCommitCb = NULL;
pFsm->FpRollBackCb = NULL;
pFsm->FpGetSnapshot = mndSyncGetSnapshot;
pFsm->FpRestoreFinish = mndRestoreFinish;
pFsm->FpRestoreSnapshot = NULL;
pFsm->FpRestoreFinishCb = mndRestoreFinish;
pFsm->FpSnapshotRead = mndSnapshotRead;
pFsm->FpSnapshotApply = mndSnapshotApply;
pFsm->FpReConfigCb = mndReConfig;
return pFsm;
}
@ -90,10 +120,13 @@ int32_t mndInitSync(SMnode *pMnode) {
SSyncCfg *pCfg = &syncInfo.syncCfg;
pCfg->replicaNum = pMnode->replica;
pCfg->myIndex = pMnode->selfIndex;
mInfo("start to open mnode sync, replica:%d myindex:%d standby:%d", pCfg->replicaNum, pCfg->myIndex,
pMgmt->standby);
for (int32_t i = 0; i < pMnode->replica; ++i) {
SNodeInfo *pNode = &pCfg->nodeInfo[i];
tstrncpy(pNode->nodeFqdn, pMnode->replicas[i].fqdn, sizeof(pNode->nodeFqdn));
pNode->nodePort = pMnode->replicas[i].port;
mInfo("index:%d, fqdn:%s port:%d", i, pNode->nodeFqdn, pNode->nodePort);
}
tsem_init(&pMgmt->syncSem, 0, 0);
@ -149,7 +182,11 @@ int32_t mndSyncPropose(SMnode *pMnode, SSdbRaw *pRaw) {
void mndSyncStart(SMnode *pMnode) {
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
syncSetMsgCb(pMgmt->sync, &pMnode->msgCb);
syncStart(pMgmt->sync);
if (pMgmt->standby) {
syncStartStandBy(pMgmt->sync);
} else {
syncStart(pMgmt->sync);
}
mDebug("sync:%" PRId64 " is started", pMgmt->sync);
}
@ -161,3 +198,18 @@ bool mndIsMaster(SMnode *pMnode) {
return (pMgmt->state == TAOS_SYNC_STATE_LEADER) && (pMnode->syncMgmt.restored);
}
int32_t mndAlter(SMnode *pMnode, const SMnodeOpt *pOption) {
SSyncCfg cfg = {.replicaNum = pOption->replica, .myIndex = pOption->selfIndex};
mInfo("start to alter mnode sync, replica:%d myindex:%d standby:%d", cfg.replicaNum, cfg.myIndex, pOption->standby);
for (int32_t i = 0; i < pOption->replica; ++i) {
SNodeInfo *pNode = &cfg.nodeInfo[i];
tstrncpy(pNode->nodeFqdn, pOption->replicas[i].fqdn, sizeof(pNode->nodeFqdn));
pNode->nodePort = pOption->replicas[i].port;
mInfo("index:%d, fqdn:%s port:%d", i, pNode->nodeFqdn, pNode->nodePort);
}
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
pMgmt->standby = pOption->standby;
return syncReconfig(pMgmt->sync, &cfg);
}
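
mndAlter rebuilds an SSyncCfg from the requested replica list and hands it to syncReconfig, and mndSyncStart above now picks syncStartStandBy over syncStart when the standby flag is set. A sketch of the config-building loop under the same shapes; the structs below are simplified stand-ins for SNodeInfo/SSyncCfg and buildSyncCfg is a hypothetical helper, not TDengine API:

#include <stdio.h>

#define MAX_REPLICA 5  // stands in for TSDB_MAX_REPLICA

typedef struct { char fqdn[128]; int port; } NodeInfo;  // stand-in for SNodeInfo
typedef struct { int replicaNum; int myIndex; NodeInfo nodeInfo[MAX_REPLICA]; } SyncCfg;

// Shaped like the loop in mndAlter: copy each replica endpoint into the config.
static void buildSyncCfg(SyncCfg *cfg, int replica, int selfIndex,
                         char (*fqdns)[128], const int *ports) {
  cfg->replicaNum = replica;
  cfg->myIndex = selfIndex;
  for (int i = 0; i < replica; ++i) {
    snprintf(cfg->nodeInfo[i].fqdn, sizeof(cfg->nodeInfo[i].fqdn), "%s", fqdns[i]);
    cfg->nodeInfo[i].port = ports[i];
    printf("index:%d, fqdn:%s port:%d\n", i, cfg->nodeInfo[i].fqdn, cfg->nodeInfo[i].port);
  }
}

int main(void) {
  char fqdns[2][128] = {"node1.example.com", "node2.example.com"};
  int  ports[2] = {6030, 6030};
  SyncCfg cfg;
  buildSyncCfg(&cfg, 2, 0, fqdns, ports);  // a real caller would then call syncReconfig
  return 0;
}
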

View File

@ -263,6 +263,7 @@ static void mndSetOptions(SMnode *pMnode, const SMnodeOpt *pOption) {
memcpy(&pMnode->replicas, pOption->replicas, sizeof(SReplica) * TSDB_MAX_REPLICA);
pMnode->msgCb = pOption->msgCb;
pMnode->selfId = pOption->replicas[pOption->selfIndex].id;
pMnode->syncMgmt.standby = pOption->standby;
}
SMnode *mndOpen(const char *path, const SMnodeOpt *pOption) {
@ -329,12 +330,6 @@ void mndClose(SMnode *pMnode) {
}
}
int32_t mndAlter(SMnode *pMnode, const SMnodeOpt *pOption) {
mDebug("start to alter mnode");
mDebug("mnode is altered");
return 0;
}
int32_t mndStart(SMnode *pMnode) {
mndSyncStart(pMnode);
return mndInitTimer(pMnode);

View File

@ -392,3 +392,66 @@ int32_t sdbDeploy(SSdb *pSdb) {
return 0;
}
SSdbIter *sdbIterInit(SSdb *pSdb) {
char datafile[PATH_MAX] = {0};
char tmpfile[PATH_MAX] = {0};
snprintf(datafile, sizeof(datafile), "%s%ssdb.data", pSdb->currDir, TD_DIRSEP);
snprintf(tmpfile, sizeof(tmpfile), "%s%ssdb.data", pSdb->tmpDir, TD_DIRSEP);
if (taosCopyFile(datafile, tmpfile) != 0) {
terrno = TAOS_SYSTEM_ERROR(errno);
mError("failed to copy file %s to %s since %s", datafile, tmpfile, terrstr());
return NULL;
}
SSdbIter *pIter = taosMemoryCalloc(1, sizeof(SSdbIter));
if (pIter == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return NULL;
}
pIter->file = taosOpenFile(tmpfile, TD_FILE_READ);
if (pIter->file == NULL) {
terrno = TAOS_SYSTEM_ERROR(errno);
mError("failed to read snapshot file:%s since %s", tmpfile, terrstr());
taosMemoryFree(pIter);
return NULL;
}
mDebug("start to read snapshot file:%s, iter:%p", tmpfile, pIter);
return pIter;
}
SSdbIter *sdbIterRead(SSdb *pSdb, SSdbIter *pIter, char **ppBuf, int32_t *buflen) {
const int32_t maxlen = 100;
char *pBuf = taosMemoryCalloc(1, maxlen);
if (pBuf == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return NULL;
}
int32_t readlen = taosReadFile(pIter->file, pBuf, maxlen);
if (readlen == 0) {
mTrace("read snapshot to the end, readlen:%" PRId64, pIter->readlen);
taosMemoryFree(pBuf);
taosCloseFile(&pIter->file);
taosMemoryFree(pIter);
pIter = NULL;
} else if (readlen < 0) {
terrno = TAOS_SYSTEM_ERROR(errno);
mError("failed to read snapshot since %s, readlen:%" PRId64, terrstr(), pIter->readlen);
taosMemoryFree(pBuf);
taosCloseFile(&pIter->file);
taosMemoryFree(pIter);
pIter = NULL;
} else {
pIter->readlen += readlen;
mTrace("read snapshot, readlen:%" PRId64, pIter->readlen);
*ppBuf = pBuf;
*buflen = readlen;
}
return pIter;
}
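
sdbIterInit snapshots the sdb by copying sdb.data into the temp directory and opening the copy, and sdbIterRead then hands the file back in chunks of at most 100 bytes, tearing the iterator down once the file is exhausted; mndSnapshotRead above lazily creates the iterator on the first call. A standalone sketch of the same chunked-read contract over a plain FILE*, since the taos* file wrappers are internal:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
  FILE *f = fopen("sdb.data", "rb");  // stand-in for the copied temp file
  if (f == NULL) return 1;
  const int maxlen = 100;             // same chunk size as sdbIterRead
  long total = 0;                     // plays the role of pIter->readlen
  for (;;) {
    char  *buf = calloc(1, maxlen);
    size_t n   = fread(buf, 1, maxlen, f);
    if (n == 0) {                     // end of snapshot: free buffer and iterator
      free(buf);
      break;
    }
    total += (long)n;                 // caller would ship buf to the follower here
    free(buf);
  }
  fclose(f);
  printf("snapshot bytes: %ld\n", total);
  return 0;
}
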

View File

@ -96,7 +96,8 @@ struct STQ {
SHashObj* pStreamTasks;
SVnode* pVnode;
SWal* pWal;
TDB* pTdb;
TDB* pMetaStore;
TTB* pExecStore;
};
typedef struct {

View File

@ -14,6 +14,7 @@
*/
#include "tq.h"
#include "tdbInt.h"
int32_t tqInit() {
int8_t old;
@ -46,6 +47,10 @@ void tqCleanUp() {
}
}
int tqExecKeyCompare(const void* pKey1, int32_t kLen1, const void* pKey2, int32_t kLen2) {
return strcmp(pKey1, pKey2);
}
STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
STQ* pTq = taosMemoryMalloc(sizeof(STQ));
if (pTq == NULL) {
@ -55,9 +60,6 @@ STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
pTq->path = strdup(path);
pTq->pVnode = pVnode;
pTq->pWal = pWal;
if (tdbOpen(path, 4096, 1, &pTq->pTdb) < 0) {
ASSERT(0);
}
pTq->execs = taosHashInit(64, MurmurHash3_32, true, HASH_ENTRY_LOCK);
@ -65,6 +67,43 @@ STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
pTq->pushMgr = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK);
if (tdbOpen(path, 16 * 1024, 1, &pTq->pMetaStore) < 0) {
ASSERT(0);
}
if (tdbTbOpen("exec", -1, -1, tqExecKeyCompare, pTq->pMetaStore, &pTq->pExecStore) < 0) {
ASSERT(0);
}
TXN txn;
if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, 0) < 0) {
ASSERT(0);
}
/*if (tdbBegin(pTq->pMetaStore, &txn) < 0) {*/
/*ASSERT(0);*/
/*}*/
TBC* pCur;
if (tdbTbcOpen(pTq->pExecStore, &pCur, &txn) < 0) {
ASSERT(0);
}
void* pKey;
int kLen;
void* pVal;
int vLen;
tdbTbcMoveToFirst(pCur);
while (tdbTbcNext(pCur, &pKey, &kLen, &pVal, &vLen) == 0) {
// create, put into execs
}
if (tdbTxnClose(&txn) < 0) {
ASSERT(0);
}
return pTq;
}
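
tqOpen now opens a TDB meta store plus an "exec" table and walks it with a cursor at startup; the loop body that rebuilds pTq->execs from the persisted records is still a TODO in this commit. A hedged sketch of what the restore step amounts to, with a plain array standing in for the TDB cursor:

#include <stdio.h>
#include <string.h>

// One persisted subscription record: subKey -> encoded exec blob.
typedef struct { const char *key; const char *val; } KV;

int main(void) {
  // Hypothetical contents of the "exec" table; in tqOpen this comes from
  // tdbTbcMoveToFirst/tdbTbcNext over pTq->pExecStore.
  KV table[] = {{"topic1:cg1", "exec-blob-1"}, {"topic2:cg1", "exec-blob-2"}};
  for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); ++i) {
    // The TODO body: decode the blob back into an STqExec and insert it
    // into the pTq->execs hash keyed by subKey.
    printf("restore subKey=%s (%zu bytes)\n", table[i].key, strlen(table[i].val));
  }
  return 0;
}
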
@ -74,7 +113,7 @@ void tqClose(STQ* pTq) {
taosHashCleanup(pTq->execs);
taosHashCleanup(pTq->pStreamTasks);
taosHashCleanup(pTq->pushMgr);
tdbClose(pTq->pTdb);
tdbClose(pTq->pMetaStore);
taosMemoryFree(pTq);
}
// TODO
@ -91,7 +130,6 @@ int32_t tEncodeSTqExec(SEncoder* pEncoder, const STqExec* pExec) {
if (tEncodeI8(pEncoder, pExec->withTag) < 0) return -1;
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
if (tEncodeCStr(pEncoder, pExec->qmsg) < 0) return -1;
// TODO encode modified exec
}
tEndEncode(pEncoder);
return pEncoder->pos;
@ -108,7 +146,6 @@ int32_t tDecodeSTqExec(SDecoder* pDecoder, STqExec* pExec) {
if (tDecodeI8(pDecoder, &pExec->withTag) < 0) return -1;
if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
if (tDecodeCStrAlloc(pDecoder, &pExec->qmsg) < 0) return -1;
// TODO decode modified exec
}
tEndDecode(pDecoder);
return 0;
@ -556,6 +593,23 @@ int32_t tqProcessVgDeleteReq(STQ* pTq, char* msg, int32_t msgLen) {
int32_t code = taosHashRemove(pTq->execs, pReq->subKey, strlen(pReq->subKey));
ASSERT(code == 0);
TXN txn;
if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) {
ASSERT(0);
}
if (tdbBegin(pTq->pMetaStore, &txn) < 0) {
ASSERT(0);
}
tdbTbDelete(pTq->pExecStore, pReq->subKey, (int)strlen(pReq->subKey), &txn);
if (tdbCommit(pTq->pMetaStore, &txn) < 0) {
ASSERT(0);
}
return 0;
}
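
Dropping a subscription now also removes its persisted record: a write transaction (opened with read-uncommitted isolation) wraps the tdbTbDelete so the on-disk exec table stays in step with the in-memory hash. A toy sketch of that begin/mutate/commit shape; the single-slot store below is a stand-in for pTq->pExecStore, and the tdb* calls named in the comments are TDengine-internal:

#include <stdio.h>
#include <string.h>

typedef struct { char key[64]; int used; } Store;  // toy one-record store

static int txnDelete(Store *s, const char *subKey) {
  // tdbTxnOpen(..., TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) + tdbBegin
  if (s->used && strcmp(s->key, subKey) == 0) {
    s->used = 0;  // tdbTbDelete(pExecStore, subKey, strlen(subKey), &txn)
  }
  return 0;       // tdbCommit makes the delete durable
}

int main(void) {
  Store s = {.used = 1};
  snprintf(s.key, sizeof(s.key), "%s", "topic1:cg1");
  txnDelete(&s, "topic1:cg1");
  printf("record present: %d\n", s.used);  // 0 after the commit
  return 0;
}
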
@ -604,6 +658,45 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
pExec->pDropTbUid = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
}
taosHashPut(pTq->execs, req.subKey, strlen(req.subKey), pExec, sizeof(STqExec));
int32_t code;
int32_t vlen;
tEncodeSize(tEncodeSTqExec, pExec, vlen, code);
ASSERT(code == 0);
void* buf = taosMemoryCalloc(1, vlen);
if (buf == NULL) {
ASSERT(0);
}
SEncoder encoder;
tEncoderInit(&encoder, buf, vlen);
if (tEncodeSTqExec(&encoder, pExec) < 0) {
ASSERT(0);
}
TXN txn;
if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) {
ASSERT(0);
}
if (tdbBegin(pTq->pMetaStore, &txn) < 0) {
ASSERT(0);
}
if (tdbTbUpsert(pTq->pExecStore, req.subKey, (int)strlen(req.subKey), buf, vlen, &txn) < 0) {
ASSERT(0);
}
if (tdbCommit(pTq->pMetaStore, &txn) < 0) {
ASSERT(0);
}
tEncoderClear(&encoder);
taosMemoryFree(buf);
return 0;
} else {
/*if (req.newConsumerId != -1) {*/

View File

@ -83,11 +83,11 @@ bool tqNextDataBlockFilterOut(STqReadHandle* pHandle, SHashObj* filterOutUids) {
int32_t tqRetrieveDataBlock(SArray** ppCols, STqReadHandle* pHandle, uint64_t* pGroupId, uint64_t* pUid,
int32_t* pNumOfRows, int16_t* pNumOfCols) {
/*int32_t sversion = pHandle->pBlock->sversion;*/
// TODO set to real sversion
*pUid = 0;
int32_t sversion = 1;
// TODO set to real sversion
/*int32_t sversion = 1;*/
int32_t sversion = htonl(pHandle->pBlock->sversion);
if (pHandle->sver != sversion || pHandle->cachedSchemaUid != pHandle->msgIter.suid) {
pHandle->pSchema = metaGetTbTSchema(pHandle->pVnodeMeta, pHandle->msgIter.uid, sversion);
if (pHandle->pSchema == NULL) {

View File

@ -147,6 +147,10 @@ SSyncFSM *vnodeSyncMakeFsm(SVnode *pVnode) {
pFsm->FpPreCommitCb = vnodeSyncPreCommitMsg;
pFsm->FpRollBackCb = vnodeSyncRollBackMsg;
pFsm->FpGetSnapshot = vnodeSyncGetSnapshot;
pFsm->FpRestoreFinish = NULL;
pFsm->FpRestoreFinishCb = NULL;
pFsm->FpSnapshotRead = NULL;
pFsm->FpSnapshotApply = NULL;
pFsm->FpReConfigCb = NULL;
return pFsm;
}

View File

@ -36,6 +36,8 @@ extern "C" {
#define EXPLAIN_SORT_FORMAT "Sort"
#define EXPLAIN_INTERVAL_FORMAT "Interval on Column %s"
#define EXPLAIN_SESSION_FORMAT "Session"
#define EXPLAIN_STATE_WINDOW_FORMAT "StateWindow on Column %s"
#define EXPLAIN_PARITION_FORMAT "Partition on Column %s"
#define EXPLAIN_ORDER_FORMAT "Order: %s"
#define EXPLAIN_FILTER_FORMAT "Filter: "
#define EXPLAIN_FILL_FORMAT "Fill: %s"

View File

@ -162,6 +162,16 @@ int32_t qExplainGenerateResChildren(SPhysiNode *pNode, SExplainGroup *group, SNo
pPhysiChildren = pSessNode->window.node.pChildren;
break;
}
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: {
SStateWinodwPhysiNode* pStateNode = (SStateWinodwPhysiNode*) pNode;
pPhysiChildren = pStateNode->window.node.pChildren;
break;
}
case QUERY_NODE_PHYSICAL_PLAN_PARTITION: {
SPartitionPhysiNode* partitionPhysiNode = (SPartitionPhysiNode*) pNode;
pPhysiChildren = partitionPhysiNode->node.pChildren;
break;
}
default:
qError("not supported physical node type %d", pNode->type);
QRY_ERR_RET(TSDB_CODE_QRY_APP_ERROR);
@ -339,7 +349,6 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT, pTagScanNode->pScanCols->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pTagScanNode->node.pOutputDataBlockDesc->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
@ -734,6 +743,85 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i
}
break;
}
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: {
SStateWinodwPhysiNode *pStateNode = (SStateWinodwPhysiNode *)pNode;
EXPLAIN_ROW_NEW(level, EXPLAIN_STATE_WINDOW_FORMAT, nodesGetNameFromColumnNode(((STargetNode*)pStateNode->pStateKey)->pExpr));
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pStateNode->window.pFuncs->length);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pStateNode->window.node.pOutputDataBlockDesc->totalRowSize);
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
if (verbose) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT,
nodesGetOutputNumFromSlotList(pStateNode->window.node.pOutputDataBlockDesc->pSlots));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pStateNode->window.node.pOutputDataBlockDesc->outputRowSize);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
if (pStateNode->window.node.pConditions) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_FILTER_FORMAT);
QRY_ERR_RET(nodesNodeToSQL(pStateNode->window.node.pConditions, tbuf + VARSTR_HEADER_SIZE,
TSDB_EXPLAIN_RESULT_ROW_SIZE, &tlen));
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
}
}
break;
}
case QUERY_NODE_PHYSICAL_PLAN_PARTITION: {
SPartitionPhysiNode *pPartNode = (SPartitionPhysiNode *)pNode;
SNode* p = nodesListGetNode(pPartNode->pPartitionKeys, 0);
EXPLAIN_ROW_NEW(level, EXPLAIN_PARITION_FORMAT, nodesGetNameFromColumnNode(p));
EXPLAIN_ROW_APPEND(EXPLAIN_LEFT_PARENTHESIS_FORMAT);
if (pResNode->pExecInfo) {
QRY_ERR_RET(qExplainBufAppendExecInfo(pResNode->pExecInfo, tbuf, &tlen));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
// EXPLAIN_ROW_APPEND(EXPLAIN_FUNCTIONS_FORMAT, pPartNode->length);
// EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pPartNode->node.pOutputDataBlockDesc->totalRowSize);
// if (pPartNode->pGroupKeys) {
// EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
// EXPLAIN_ROW_APPEND(EXPLAIN_GROUPS_FORMAT, pPartNode->pGroupKeys->length);
// }
EXPLAIN_ROW_APPEND(EXPLAIN_RIGHT_PARENTHESIS_FORMAT);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level));
if (verbose) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_COLUMNS_FORMAT,
nodesGetOutputNumFromSlotList(pPartNode->node.pOutputDataBlockDesc->pSlots));
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
EXPLAIN_ROW_APPEND(EXPLAIN_WIDTH_FORMAT, pPartNode->node.pOutputDataBlockDesc->outputRowSize);
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
if (pPartNode->node.pConditions) {
EXPLAIN_ROW_NEW(level + 1, EXPLAIN_FILTER_FORMAT);
QRY_ERR_RET(nodesNodeToSQL(pPartNode->node.pConditions, tbuf + VARSTR_HEADER_SIZE,
TSDB_EXPLAIN_RESULT_ROW_SIZE, &tlen));
EXPLAIN_ROW_END();
QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1));
}
}
break;
}
default:
qError("not supported physical node type %d", pNode->type);
return TSDB_CODE_QRY_APP_ERROR;

View File

@ -361,6 +361,18 @@ typedef struct SCatchSupporter {
int64_t* pKeyBuf;
} SCatchSupporter;
typedef struct SStreamAggSupporter {
SArray* pResultRows; // SResultWindowInfo
int32_t keySize;
char* pKeyBuf; // window key buffer
SDiskbasedBuf* pResultBuf; // query result buffer based on blocked-wised disk file
int32_t resultRowSize; // the result buffer size for each result row, with the meta data size for each row
} SStreamAggSupporter;
typedef struct SessionWindowSupporter {
SStreamAggSupporter* pStreamAggSup;
int64_t gap;
} SessionWindowSupporter;
typedef struct SStreamBlockScanInfo {
SArray* pBlockLists; // multiple SSDatablock.
SSDataBlock* pRes; // result SSDataBlock
@ -385,6 +397,7 @@ typedef struct SStreamBlockScanInfo {
SInterval interval; // if the upstream is an interval operator, the interval info is also kept here.
SCatchSupporter childAggSup;
SArray* childIds;
SessionWindowSupporter sessionSup;
} SStreamBlockScanInfo;
typedef struct SSysTableScanInfo {
@ -550,6 +563,27 @@ typedef struct SSessionAggOperatorInfo {
STimeWindowAggSupp twAggSup;
} SSessionAggOperatorInfo;
typedef struct SResultWindowInfo {
SResultRowPosition pos;
STimeWindow win;
bool isOutput;
} SResultWindowInfo;
typedef struct SStreamSessionAggOperatorInfo {
SOptrBasicInfo binfo;
SStreamAggSupporter streamAggSup;
SGroupResInfo groupResInfo;
int64_t gap; // session window gap
int32_t primaryTsIndex; // primary timestamp slot id
int32_t order; // current SSDataBlock scan order
STimeWindowAggSupp twAggSup;
SSDataBlock* pWinBlock; // window result
SqlFunctionCtx* pDummyCtx; // for combine
SSDataBlock* pDelRes;
SHashObj* pStDeleted;
void* pDelIterator;
} SStreamSessionAggOperatorInfo;
typedef struct STimeSliceOperatorInfo {
SOptrBasicInfo binfo;
SInterval interval;
@ -727,6 +761,9 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SExprInfo*
SOperatorInfo* createMergeJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDownstream, SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, SNode* pOnCondition, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, SExprInfo* pExpr, int32_t numOfOutput, SSDataBlock* pResBlock, SArray* pColMatchInfo, STableGroupInfo* pTableGroupInfo, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream,
SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap,
int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo);
#if 0
SOperatorInfo* createTableSeqScanOperatorInfo(void* pTsdbReadHandle, STaskRuntimeEnv* pRuntimeEnv);
#endif
@ -761,13 +798,19 @@ void aggEncodeResultRow(SOperatorInfo* pOperator, SAggSupporter* pSup, SOptrBasi
int32_t* length);
STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts,
SInterval* pInterval, int32_t precision, STimeWindow* win);
int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn, int32_t startPos,
TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item,
int32_t order);
int32_t getNumOfRowsInTimeWindow(SDataBlockInfo* pDataBlockInfo, TSKEY* pPrimaryColumn,
int32_t startPos, TSKEY ekey, __block_search_fn_t searchFn, STableQueryInfo* item,
int32_t order);
int32_t binarySearchForKey(char* pValue, int num, TSKEY key, int order);
int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize,
const char* pKey, const char* pDir);
int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey,
const char* pDir);
int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey);
SResultRow* getNewResultRow_rv(SDiskbasedBuf* pResultBuf, int64_t tableGroupId, int32_t interBufSize);
SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap,
int32_t* pIndex);
int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows,
int32_t start, int64_t gap, SHashObj* pStDeleted);
bool functionNeedToExecute(SqlFunctionCtx* pCtx);
#ifdef __cplusplus
}
#endif

View File

@ -98,7 +98,6 @@ static int32_t getExprFunctionId(SExprInfo* pExprInfo) {
}
static void doSetTagValueToResultBuf(char* output, const char* val, int16_t type, int16_t bytes);
static bool functionNeedToExecute(SqlFunctionCtx* pCtx);
static void setBlockStatisInfo(SqlFunctionCtx* pCtx, SExprInfo* pExpr, SSDataBlock* pSDataBlock);
@ -937,7 +936,7 @@ int32_t setGroupResultOutputBuf(SOptrBasicInfo* binfo, int32_t numOfCols, char*
return TSDB_CODE_SUCCESS;
}
static bool functionNeedToExecute(SqlFunctionCtx* pCtx) {
bool functionNeedToExecute(SqlFunctionCtx* pCtx) {
struct SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
// in case of timestamp column, always generated results.
@ -2719,8 +2718,9 @@ static void* setAllSourcesCompleted(SOperatorInfo* pOperator, int64_t startTs) {
SExchangeInfo* pExchangeInfo = pOperator->info;
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
int64_t el = taosGetTimestampUs() - startTs;
int64_t el = taosGetTimestampUs() - startTs;
SLoadRemoteDataInfo* pLoadInfo = &pExchangeInfo->loadInfo;
pLoadInfo->totalElapsed += el;
size_t totalSources = taosArrayGetSize(pExchangeInfo->pSources);
@ -2921,6 +2921,7 @@ static SSDataBlock* seqLoadRemoteData(SOperatorInfo* pOperator) {
pLoadInfo->totalSize);
}
pOperator->resultInfo.totalRows += pRes->info.rows;
return pExchangeInfo->pResult;
}
}
@ -2930,10 +2931,10 @@ static int32_t prepareLoadRemoteData(SOperatorInfo* pOperator) {
return TSDB_CODE_SUCCESS;
}
int64_t st = taosGetTimestampUs();
SExchangeInfo* pExchangeInfo = pOperator->info;
if (pExchangeInfo->seqLoadData) {
// do nothing for sequentially load data
} else {
if (!pExchangeInfo->seqLoadData) {
int32_t code = prepareConcurrentlyLoad(pOperator);
if (code != TSDB_CODE_SUCCESS) {
return code;
@ -2941,6 +2942,7 @@ static int32_t prepareLoadRemoteData(SOperatorInfo* pOperator) {
}
OPTR_SET_OPENED(pOperator);
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
return TSDB_CODE_SUCCESS;
}
@ -2968,15 +2970,6 @@ static SSDataBlock* doLoadRemoteData(SOperatorInfo* pOperator) {
} else {
return concurrentlyLoadRemoteData(pOperator);
}
#if 0
_error:
taosMemoryFreeClear(pMsg);
taosMemoryFreeClear(pMsgSendInfo);
terrno = pTaskInfo->code;
return NULL;
#endif
}
static int32_t initDataSource(int32_t numOfSources, SExchangeInfo* pInfo) {
@ -3005,12 +2998,8 @@ SOperatorInfo* createExchangeOperatorInfo(void* pTransporter, const SNodeList* p
SExecTaskInfo* pTaskInfo) {
SExchangeInfo* pInfo = taosMemoryCalloc(1, sizeof(SExchangeInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
if (pInfo == NULL || pOperator == NULL) {
taosMemoryFreeClear(pInfo);
taosMemoryFreeClear(pOperator);
terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
return NULL;
goto _error;
}
size_t numOfSources = LIST_LENGTH(pSources);
@ -3035,18 +3024,17 @@ SOperatorInfo* createExchangeOperatorInfo(void* pTransporter, const SNodeList* p
tsem_init(&pInfo->ready, 0, 0);
pOperator->name = "ExchangeOperator";
pOperator->name = "ExchangeOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_EXCHANGE;
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->numOfExprs = pBlock->info.numOfCols;
pOperator->pTaskInfo = pTaskInfo;
pOperator->blocking = false;
pOperator->status = OP_NOT_OPENED;
pOperator->info = pInfo;
pOperator->numOfExprs = pBlock->info.numOfCols;
pOperator->pTaskInfo = pTaskInfo;
pOperator->fpSet = createOperatorFpSet(prepareLoadRemoteData, doLoadRemoteData, NULL, NULL,
destroyExchangeOperatorInfo, NULL, NULL, NULL);
pInfo->pTransporter = pTransporter;
return pOperator;
_error:
@ -4671,6 +4659,19 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
pOptr =
createSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW == type) {
SSessionWinodwPhysiNode* pSessionNode = (SSessionWinodwPhysiNode*)pPhyNode;
STimeWindowAggSupp as = {.waterMark = pSessionNode->window.watermark,
.calTrigger = pSessionNode->window.triggerType};
SExprInfo* pExprInfo = createExprInfo(pSessionNode->window.pFuncs, NULL, &num);
SSDataBlock* pResBlock = createResDataBlock(pPhyNode->pOutputDataBlockDesc);
int32_t tsSlotId = ((SColumnNode*)pSessionNode->window.pTspk)->slotId;
pOptr =
createStreamSessionAggOperatorInfo(ops[0], pExprInfo, num, pResBlock, pSessionNode->gap, tsSlotId, &as, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_PARTITION == type) {
SPartitionPhysiNode* pPartNode = (SPartitionPhysiNode*)pPhyNode;
SArray* pColList = extractPartitionColInfo(pPartNode->pPartitionKeys);
@ -5162,15 +5163,37 @@ int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo
return TSDB_CODE_SUCCESS;
}
int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, size_t keyBufSize, const char* pKey,
const char* pDir) {
int32_t initCatchSupporter(SCatchSupporter* pCatchSup, size_t rowSize, const char* pKey,
const char* pDir) {
pCatchSup->keySize = sizeof(int64_t) + sizeof(int64_t) + sizeof(TSKEY);
pCatchSup->pKeyBuf = taosMemoryCalloc(1, pCatchSup->keySize);
int32_t pageSize = rowSize * 32;
int32_t bufSize = pageSize * 4096;
createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir);
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
pCatchSup->pWindowHashTable = taosHashInit(10000, hashFn, true, HASH_NO_LOCK);
;
return TSDB_CODE_SUCCESS;
if (pCatchSup->pKeyBuf == NULL || pCatchSup->pWindowHashTable == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
int32_t pageSize = rowSize * 32;
int32_t bufSize = pageSize * 4096;
return createDiskbasedBuf(&pCatchSup->pDataBuf, pageSize, bufSize, pKey, pDir);
}
int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, const char* pKey) {
pSup->keySize = sizeof(int64_t) + sizeof(TSKEY);
pSup->pKeyBuf = taosMemoryCalloc(1, pSup->keySize);
pSup->pResultRows = taosArrayInit(1024, sizeof(SResultWindowInfo));
if (pSup->pKeyBuf == NULL || pSup->pResultRows == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
int32_t pageSize = 4096;
while (pageSize < pSup->resultRowSize * 4) {
pageSize <<= 1u;
}
// at least four pages need to be in buffer
int32_t bufSize = 4096 * 256;
if (bufSize <= pageSize) {
bufSize = pageSize * 4;
}
return createDiskbasedBuf(&pSup->pResultBuf, pageSize, bufSize, pKey, "/tmp/");
}
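
initStreamAggSupporter doubles the page size until at least four result rows fit on a page, then keeps the buffer at 1 MiB unless that would hold fewer than four pages. The sizing rule is easy to verify in isolation:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  int32_t resultRowSize = 3000;           // example row size
  int32_t pageSize = 4096;
  while (pageSize < resultRowSize * 4) {  // ensure >= 4 rows per page
    pageSize <<= 1;
  }
  int32_t bufSize = 4096 * 256;           // 1 MiB default
  if (bufSize <= pageSize) {
    bufSize = pageSize * 4;               // keep at least four pages in the buffer
  }
  printf("pageSize=%d bufSize=%d\n", pageSize, bufSize);  // 16384, 1048576
  return 0;
}
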

View File

@ -269,15 +269,20 @@ static SSDataBlock* hashGroupbyAggregate(SOperatorInfo* pOperator) {
if (pOperator->status == OP_RES_TO_RETURN) {
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
if (pRes->info.rows == 0 || !hashRemainDataInGroupInfo(&pInfo->groupResInfo)) {
size_t rows = pRes->info.rows;
if (rows == 0 || !hashRemainDataInGroupInfo(&pInfo->groupResInfo)) {
doSetOperatorCompleted(pOperator);
}
pOperator->resultInfo.totalRows += rows;
return (pRes->info.rows == 0)? NULL:pRes;
}
int32_t order = TSDB_ORDER_ASC;
int32_t scanFlag = MAIN_SCAN;
int64_t st = taosGetTimestampUs();
SOperatorInfo* downstream = pOperator->pDownstream[0];
while (1) {
@ -317,6 +322,8 @@ static SSDataBlock* hashGroupbyAggregate(SOperatorInfo* pOperator) {
blockDataEnsureCapacity(pRes, pOperator->resultInfo.capacity);
initGroupedResultInfo(&pInfo->groupResInfo, pInfo->aggSup.pResultRowHashTable, 0);
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
while(1) {
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo, pInfo->aggSup.pResultBuf);
doFilter(pInfo->pCondition, pRes, NULL);
@ -545,7 +552,7 @@ static SSDataBlock* buildPartitionResult(SOperatorInfo* pOperator) {
// try next group data
pInfo->pGroupIter = taosHashIterate(pInfo->pGroupSet, pInfo->pGroupIter);
if (pInfo->pGroupIter == NULL) {
pOperator->status = OP_EXEC_DONE;
doSetOperatorCompleted(pOperator);
return NULL;
}
@ -562,6 +569,8 @@ static SSDataBlock* buildPartitionResult(SOperatorInfo* pOperator) {
blockDataUpdateTsWindow(pInfo->binfo.pRes, 0);
pInfo->binfo.pRes->info.groupId = pGroupInfo->groupId;
pOperator->resultInfo.totalRows += pInfo->binfo.pRes->info.rows;
return pInfo->binfo.pRes;
}
@ -578,6 +587,7 @@ static SSDataBlock* hashPartition(SOperatorInfo* pOperator) {
return buildPartitionResult(pOperator);
}
int64_t st = taosGetTimestampUs();
SOperatorInfo* downstream = pOperator->pDownstream[0];
while (1) {
@ -589,6 +599,8 @@ static SSDataBlock* hashPartition(SOperatorInfo* pOperator) {
doHashPartition(pOperator, pBlock);
}
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
pOperator->status = OP_RES_TO_RETURN;
blockDataEnsureCapacity(pRes, 4096);
return buildPartitionResult(pOperator);
@ -632,13 +644,14 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SExprInfo*
}
pOperator->name = "PartitionOperator";
pOperator->blocking = true;
pOperator->blocking = true;
pOperator->status = OP_NOT_OPENED;
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_PARTITION;
pInfo->binfo.pRes = pResultBlock;
pOperator->numOfExprs = numOfCols;
pOperator->numOfExprs = numOfCols;
pOperator->pExpr = pExprInfo;
pOperator->info = pInfo;
pOperator->pTaskInfo = pTaskInfo;
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, hashPartition, NULL, NULL, destroyPartitionOperatorInfo,
NULL, NULL, NULL);

View File

@ -645,6 +645,10 @@ static void doClearBufferedBlocks(SStreamBlockScanInfo* pInfo) {
taosArrayClear(pInfo->pBlockLists);
}
static bool isSessionWindow(SStreamBlockScanInfo* pInfo) {
return pInfo->sessionSup.pStreamAggSup != NULL;
}
static bool prepareDataScan(SStreamBlockScanInfo* pInfo) {
SSDataBlock* pSDB = pInfo->pUpdateRes;
if (pInfo->updateResIndex < pSDB->info.rows) {
@ -652,13 +656,25 @@ static bool prepareDataScan(SStreamBlockScanInfo* pInfo) {
TSKEY *tsCols = (TSKEY*)pColDataInfo->pData;
SResultRowInfo dumyInfo;
dumyInfo.cur.pageId = -1;
STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex], &pInfo->interval,
pInfo->interval.precision, NULL);
STimeWindow win;
if (isSessionWindow(pInfo)) {
SStreamAggSupporter* pAggSup = pInfo->sessionSup.pStreamAggSup;
int64_t gap = pInfo->sessionSup.gap;
int32_t winIndex = 0;
SResultWindowInfo* pCurWin = getSessionTimeWindow(pAggSup->pResultRows,
tsCols[pInfo->updateResIndex], gap, &winIndex);
win = pCurWin->win;
pInfo->updateResIndex += updateSessionWindowInfo(pCurWin, tsCols, pSDB->info.rows,
pInfo->updateResIndex, gap, NULL);
} else {
win = getActiveTimeWindow(NULL, &dumyInfo, tsCols[pInfo->updateResIndex],
&pInfo->interval, pInfo->interval.precision, NULL);
pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex,
win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC);
}
STableScanInfo* pTableScanInfo = pInfo->pOperatorDumy->info;
pTableScanInfo->cond.twindow = win;
tsdbResetReadHandle(pTableScanInfo->dataReader, &pTableScanInfo->cond);
pInfo->updateResIndex += getNumOfRowsInTimeWindow(&pSDB->info, tsCols, pInfo->updateResIndex,
win.ekey, binarySearchForKey, NULL, TSDB_ORDER_ASC);
pTableScanInfo->scanTimes = 0;
return true;
} else {
@ -848,6 +864,7 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
} else if (pInfo->scanMode == STREAM_SCAN_FROM_UPDATERES) {
blockDataCleanup(pInfo->pRes);
pInfo->scanMode = STREAM_SCAN_FROM_DATAREADER;
prepareDataScan(pInfo);
return pInfo->pUpdateRes;
} else if (pInfo->scanMode == STREAM_SCAN_FROM_DATAREADER) {
SSDataBlock* pSDB = doDataScan(pInfo);
@ -924,13 +941,12 @@ static SSDataBlock* doStreamBlockScan(SOperatorInfo* pOperator) {
if (rows == 0) {
pOperator->status = OP_EXEC_DONE;
} else if (pInfo->interval.interval > 0) {
} else if (pInfo->pUpdateInfo) {
SSDataBlock* upRes = getUpdateDataBlock(pInfo, true); //TODO(liuyao) get invertible from plan
if (upRes) {
pInfo->pUpdateRes = upRes;
if (upRes->info.type == STREAM_REPROCESS) {
pInfo->updateResIndex = 0;
prepareDataScan(pInfo);
pInfo->scanMode = STREAM_SCAN_FROM_UPDATERES;
} else if (upRes->info.type == STREAM_INVERT) {
pInfo->scanMode = STREAM_SCAN_FROM_RES;
@ -1001,10 +1017,9 @@ SOperatorInfo* createStreamScanOperatorInfo(void* streamReadHandle, void* pDataR
pInfo->scanMode = STREAM_SCAN_FROM_READERHANDLE;
pInfo->pOperatorDumy = pOperatorDumy;
pInfo->interval = pSTInfo->interval;
pInfo->sessionSup = (SessionWindowSupporter){.pStreamAggSup = NULL, .gap = -1};
size_t childKeyBufSize = sizeof(int64_t) + sizeof(int64_t) + sizeof(TSKEY);
initCatchSupporter(&pInfo->childAggSup, 1024, childKeyBufSize,
"StreamFinalInterval", TD_TMP_DIR_PATH); // TODO(liuyao) get row size from phy plan
initCatchSupporter(&pInfo->childAggSup, 1024, "StreamFinalInterval", "/tmp/"); // TODO(liuyao) get row size from phy plan
pOperator->name = "StreamBlockScanOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN;
@ -1640,7 +1655,7 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) {
count += 1;
if (++pInfo->curPos >= pInfo->pTableGroups->numOfTables) {
pOperator->status = OP_EXEC_DONE;
doSetOperatorCompleted(pOperator);
}
}
@ -1652,6 +1667,8 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) {
}
pRes->info.rows = count;
pOperator->resultInfo.totalRows += count;
return (pRes->info.rows == 0) ? NULL : pInfo->pRes;
}

View File

@ -9,6 +9,7 @@ typedef enum SResultTsInterpType {
} SResultTsInterpType;
static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator);
static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator);
/*
* There are two cases to handle:
@ -943,6 +944,7 @@ static SSDataBlock* doStateWindowAgg(SOperatorInfo* pOperator) {
}
int32_t order = TSDB_ORDER_ASC;
int64_t st = taosGetTimestampUs();
SOperatorInfo* downstream = pOperator->pDownstream[0];
while (1) {
@ -957,6 +959,8 @@ static SSDataBlock* doStateWindowAgg(SOperatorInfo* pOperator) {
doStateWindowAggImpl(pOperator, pInfo, pBlock);
}
pOperator->cost.openCost = (taosGetTimestampUs() - st)/1000.0;
pOperator->status = OP_RES_TO_RETURN;
closeAllResultRows(&pBInfo->resultRowInfo);
@ -967,7 +971,10 @@ static SSDataBlock* doStateWindowAgg(SOperatorInfo* pOperator) {
doSetOperatorCompleted(pOperator);
}
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
size_t rows = pBInfo->pRes->info.rows;
pOperator->resultInfo.totalRows += rows;
return (rows == 0)? NULL : pBInfo->pRes;
}
static SSDataBlock* doBuildIntervalResult(SOperatorInfo* pOperator) {
@ -1033,13 +1040,9 @@ static void setInverFunction(SqlFunctionCtx* pCtx, int32_t num, EStreamType type
}
}
void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData,
int16_t bytes, uint64_t groupId, int32_t numOfOutput) {
SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId);
SResultRowPosition* p1 =
(SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf,
GET_RES_WINDOW_KEY_LEN(bytes));
SResultRow* pResult = getResultRowByPos(pSup->pResultBuf, p1);
void doClearWindowImpl(SResultRowPosition* p1, SDiskbasedBuf* pResultBuf,
SOptrBasicInfo* pBinfo, int32_t numOfOutput) {
SResultRow* pResult = getResultRowByPos(pResultBuf, p1);
SqlFunctionCtx* pCtx = pBinfo->pCtx;
for (int32_t i = 0; i < numOfOutput; ++i) {
pCtx[i].resultInfo = getResultCell(pResult, i, pBinfo->rowCellInfoOffset);
@ -1054,6 +1057,15 @@ void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData,
}
}
void doClearWindow(SAggSupporter* pSup, SOptrBasicInfo* pBinfo, char* pData,
int16_t bytes, uint64_t groupId, int32_t numOfOutput) {
SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId);
SResultRowPosition* p1 =
(SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf,
GET_RES_WINDOW_KEY_LEN(bytes));
doClearWindowImpl(p1, pSup->pResultBuf, pBinfo, numOfOutput);
}
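
This refactor splits the old doClearWindow into a hash lookup plus doClearWindowImpl, which resets a result row given only its buffer position, so the session-window code later in this file (which already holds positions in SResultWindowInfo) can clear windows without touching the hash table. A toy illustration of the split; the types are simplified stand-ins:

#include <stdio.h>

typedef struct { int pageId; int offset; } Pos;  // stand-in for SResultRowPosition

static void clearByPos(const Pos *p) {           // plays doClearWindowImpl
  printf("clear result row at page %d offset %d\n", p->pageId, p->offset);
}

static void clearByKey(int key) {                // plays doClearWindow
  Pos p = {.pageId = key % 8, .offset = key * 16};  // pretend hash lookup
  clearByPos(&p);
}

int main(void) {
  clearByKey(3);                            // interval path: key -> position -> clear
  Pos known = {.pageId = 1, .offset = 48};
  clearByPos(&known);                       // session path: position -> clear directly
  return 0;
}
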
static void doClearWindows(SAggSupporter* pSup, SOptrBasicInfo* pBinfo,
SInterval* pIntrerval, int32_t tsIndex, int32_t numOfOutput, SSDataBlock* pBlock) {
SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex);
@ -1106,8 +1118,8 @@ static SSDataBlock* doStreamIntervalAgg(SOperatorInfo* pOperator) {
}
if (pBlock->info.type == STREAM_REPROCESS) {
doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval,
pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock);
doClearWindows(&pInfo->aggSup, &pInfo->binfo, &pInfo->interval, 0,
pOperator->numOfExprs, pBlock);
qDebug("%s clear existed time window results for updates checked", GET_TASKID(pTaskInfo));
continue;
}
@ -1419,7 +1431,9 @@ static SSDataBlock* doSessionWindowAgg(SOperatorInfo* pOperator) {
return pBInfo->pRes;
}
int32_t order = TSDB_ORDER_ASC;
int64_t st = taosGetTimestampUs();
int32_t order = TSDB_ORDER_ASC;
SOperatorInfo* downstream = pOperator->pDownstream[0];
while (1) {
@ -1435,6 +1449,8 @@ static SSDataBlock* doSessionWindowAgg(SOperatorInfo* pOperator) {
doSessionWindowAggImpl(pOperator, pInfo, pBlock);
}
pOperator->cost.openCost = (taosGetTimestampUs() - st) / 1000.0;
// restore the value
pOperator->status = OP_RES_TO_RETURN;
closeAllResultRows(&pBInfo->resultRowInfo);
@ -1446,7 +1462,10 @@ static SSDataBlock* doSessionWindowAgg(SOperatorInfo* pOperator) {
doSetOperatorCompleted(pOperator);
}
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
size_t rows = pBInfo->pRes->info.rows;
pOperator->resultInfo.totalRows += rows;
return (rows == 0)? NULL : pBInfo->pRes;
}
static SSDataBlock* doAllIntervalAgg(SOperatorInfo* pOperator) {
@ -1631,9 +1650,10 @@ _error:
return NULL;
}
static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResultRowInfo, SSDataBlock* pSDataBlock,
static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBlock,
int32_t tableGroupId) {
SStreamFinalIntervalOperatorInfo* pInfo = (SStreamFinalIntervalOperatorInfo*)pOperatorInfo->info;
SResultRowInfo* pResultRowInfo = &(pInfo->binfo.resultRowInfo);
SExecTaskInfo* pTaskInfo = pOperatorInfo->pTaskInfo;
int32_t numOfOutput = pOperatorInfo->numOfExprs;
SArray* pUpdated = taosArrayInit(4, POINTER_BYTES);
@ -1646,7 +1666,10 @@ static SArray* doHashInterval(SOperatorInfo* pOperatorInfo, SResultRowInfo* pRes
if (pSDataBlock->pDataBlock != NULL) {
SColumnInfoData* pColDataInfo = taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
tsCols = (int64_t*)pColDataInfo->pData;
} else {
return pUpdated;
}
int32_t startPos = ascScan ? 0 : (pSDataBlock->info.rows - 1);
TSKEY ts = getStartTsKey(&pSDataBlock->info.window, tsCols, pSDataBlock->info.rows, ascScan);
STimeWindow nextWin = getActiveTimeWindow(pInfo->aggSup.pResultBuf, pResultRowInfo, ts,
@ -1707,7 +1730,7 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
pInfo->primaryTsIndex, pOperator->numOfExprs, pBlock);
continue;
}
pUpdated = doHashInterval(pOperator, &pInfo->binfo.resultRowInfo, pBlock, 0);
pUpdated = doHashInterval(pOperator, pBlock, 0);
}
finalizeUpdatedResult(pOperator->numOfExprs, pInfo->aggSup.pResultBuf, pUpdated, pInfo->binfo.rowCellInfoOffset);
@ -1717,3 +1740,534 @@ static SSDataBlock* doStreamFinalIntervalAgg(SOperatorInfo* pOperator) {
pOperator->status = OP_RES_TO_RETURN;
return pInfo->binfo.pRes->info.rows == 0 ? NULL : pInfo->binfo.pRes;
}
void destroyStreamAggSupporter(SStreamAggSupporter* pSup) {
taosArrayDestroy(pSup->pResultRows);
taosMemoryFreeClear(pSup->pKeyBuf);
destroyDiskbasedBuf(pSup->pResultBuf);
}
void destroyStreamSessionAggOperatorInfo(void* param, int32_t numOfOutput) {
SStreamSessionAggOperatorInfo* pInfo = (SStreamSessionAggOperatorInfo*)param;
doDestroyBasicInfo(&pInfo->binfo, numOfOutput);
destroyStreamAggSupporter(&pInfo->streamAggSup);
cleanupGroupResInfo(&pInfo->groupResInfo);
}
int32_t initBiasicInfo(SOptrBasicInfo* pBasicInfo, SExprInfo* pExprInfo,
int32_t numOfCols, SSDataBlock* pResultBlock, SDiskbasedBuf* pResultBuf) {
pBasicInfo->pCtx = createSqlFunctionCtx(pExprInfo, numOfCols, &pBasicInfo->rowCellInfoOffset);
pBasicInfo->pRes = pResultBlock;
for (int32_t i = 0; i < numOfCols; ++i) {
pBasicInfo->pCtx[i].pBuf = pResultBuf;
}
return TSDB_CODE_SUCCESS;
}
void initDummyFunction(SqlFunctionCtx* pDummy, SqlFunctionCtx* pCtx, int32_t nums) {
for (int i = 0; i < nums; i++) {
pDummy[i].functionId = pCtx[i].functionId;
}
}
void initDownStream(SOperatorInfo* downstream, SStreamSessionAggOperatorInfo* pInfo) {
ASSERT(downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN);
SStreamBlockScanInfo* pScanInfo = downstream->info;
pScanInfo->sessionSup =
(SessionWindowSupporter){.pStreamAggSup = &pInfo->streamAggSup, .gap = pInfo->gap};
pScanInfo->pUpdateInfo = updateInfoInit(60000, TSDB_TIME_PRECISION_MILLI, 60000 * 60 * 6);
}
SOperatorInfo* createStreamSessionAggOperatorInfo(SOperatorInfo* downstream,
SExprInfo* pExprInfo, int32_t numOfCols, SSDataBlock* pResBlock, int64_t gap,
int32_t tsSlotId, STimeWindowAggSupp* pTwAggSupp, SExecTaskInfo* pTaskInfo) {
SStreamSessionAggOperatorInfo* pInfo =
taosMemoryCalloc(1, sizeof(SStreamSessionAggOperatorInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
if (pInfo == NULL || pOperator == NULL) {
goto _error;
}
initResultSizeInfo(pOperator, 4096);
int32_t code = initStreamAggSupporter(&pInfo->streamAggSup, "StreamSessionAggOperatorInfo");
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
code = initBiasicInfo(&pInfo->binfo, pExprInfo, numOfCols, pResBlock,
pInfo->streamAggSup.pResultBuf);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
pInfo->streamAggSup.resultRowSize = getResultRowSize(pInfo->binfo.pCtx, numOfCols);
pInfo->pDummyCtx = (SqlFunctionCtx*)taosMemoryCalloc(numOfCols, sizeof(SqlFunctionCtx));
if (pInfo->pDummyCtx == NULL) {
goto _error;
}
initDummyFunction(pInfo->pDummyCtx, pInfo->binfo.pCtx, numOfCols);
pInfo->twAggSup = *pTwAggSupp;
initResultRowInfo(&pInfo->binfo.resultRowInfo, 8);
initExecTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pTaskInfo->window);
pInfo->primaryTsIndex = tsSlotId;
pInfo->gap = gap;
pInfo->binfo.pRes = pResBlock;
pInfo->order = TSDB_ORDER_ASC;
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
pInfo->pStDeleted = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
pInfo->pDelIterator = NULL;
pInfo->pDelRes = createOneDataBlock(pResBlock, false);
blockDataEnsureCapacity(pInfo->pDelRes, 64);
pOperator->name = "StreamSessionWindowAggOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW;
pOperator->blocking = true;
pOperator->status = OP_NOT_OPENED;
pOperator->pExpr = pExprInfo;
pOperator->numOfExprs = numOfCols;
pOperator->info = pInfo;
pOperator->fpSet = createOperatorFpSet(operatorDummyOpenFn, doStreamSessionWindowAgg,
NULL, NULL, destroyStreamSessionAggOperatorInfo, aggEncodeResultRow,
aggDecodeResultRow, NULL);
pOperator->pTaskInfo = pTaskInfo;
initDownStream(downstream, pInfo);
code = appendDownstream(pOperator, &downstream, 1);
return pOperator;
_error:
if (pInfo != NULL) {
destroyStreamSessionAggOperatorInfo(pInfo, numOfCols);
}
taosMemoryFreeClear(pInfo);
taosMemoryFreeClear(pOperator);
pTaskInfo->code = code;
return NULL;
}
typedef int64_t (*__get_value_fn_t)(void* data, int32_t index);
int32_t binarySearch(void* keyList, int num, TSKEY key, int order,
__get_value_fn_t getValuefn) {
int firstPos = 0, lastPos = num - 1, midPos = -1;
int numOfRows = 0;
if (num <= 0) return -1;
if (order == TSDB_ORDER_DESC) {
// find the first position which is smaller than the key
while (1) {
if (key >= getValuefn(keyList, lastPos)) return lastPos;
if (key == getValuefn(keyList, firstPos)) return firstPos;
if (key < getValuefn(keyList, firstPos)) return firstPos - 1;
numOfRows = lastPos - firstPos + 1;
midPos = (numOfRows >> 1) + firstPos;
if (key < getValuefn(keyList, midPos)) {
lastPos = midPos - 1;
} else if (key > getValuefn(keyList, midPos)) {
firstPos = midPos + 1;
} else {
break;
}
}
} else {
// find the first position which is bigger than the key
while (1) {
if (key <= getValuefn(keyList, firstPos)) return firstPos;
if (key == getValuefn(keyList, lastPos)) return lastPos;
if (key > getValuefn(keyList, lastPos)) {
lastPos = lastPos + 1;
if (lastPos >= num)
return -1;
else
return lastPos;
}
numOfRows = lastPos - firstPos + 1;
midPos = (numOfRows >> 1) + firstPos;
if (key < getValuefn(keyList, midPos)) {
lastPos = midPos - 1;
} else if (key > getValuefn(keyList, midPos)) {
firstPos = midPos + 1;
} else {
break;
}
}
}
return midPos;
}
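
binarySearch takes an accessor callback so it can search any container, and in TSDB_ORDER_DESC mode it behaves roughly as "return the last index whose value is <= key, or -1 if the key precedes everything". A standalone check of that contract against a plain array; searchLE is a simplified reimplementation and getAt mirrors getSessionWindowEndkey:

#include <assert.h>
#include <stdint.h>

typedef int64_t (*get_value_fn)(void *data, int32_t index);

static int64_t getAt(void *data, int32_t index) { return ((int64_t *)data)[index]; }

// Last index i with get(list, i) <= key, or -1 if there is none.
static int searchLE(int64_t *list, int num, int64_t key, get_value_fn get) {
  int lo = 0, hi = num - 1, ans = -1;
  while (lo <= hi) {
    int mid = (lo + hi) / 2;
    if (get(list, mid) <= key) { ans = mid; lo = mid + 1; } else { hi = mid - 1; }
  }
  return ans;
}

int main(void) {
  int64_t ekeys[] = {10, 20, 30, 40};          // window end keys, ascending
  assert(searchLE(ekeys, 4, 25, getAt) == 1);  // between windows 1 and 2
  assert(searchLE(ekeys, 4, 40, getAt) == 3);  // exact hit on the last window
  assert(searchLE(ekeys, 4, 5, getAt) == -1);  // before every window
  return 0;
}
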
int64_t getSessionWindowEndkey(void* data, int32_t index) {
SArray* pWinInfos = (SArray*) data;
SResultWindowInfo* pWin = taosArrayGet(pWinInfos, index);
return pWin->win.ekey;
}
static bool isInWindow(SResultWindowInfo* pWin, TSKEY ts, int64_t gap) {
int64_t sGap = ts - pWin->win.skey;
int64_t eGap = pWin->win.ekey - ts;
if ( (sGap < 0 && sGap >= -gap) || (eGap < 0 && eGap >= -gap) || (sGap >= 0 && eGap >= 0) ) {
return true;
}
return false;
}
static SResultWindowInfo* insertNewSessionWindow(SArray* pWinInfos, TSKEY ts,
int32_t index) {
SResultWindowInfo win =
{.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false};
return taosArrayInsert(pWinInfos, index, &win);
}
static SResultWindowInfo* addNewSessionWindow(SArray* pWinInfos, TSKEY ts) {
SResultWindowInfo win =
{.pos.offset = -1, .pos.pageId = -1, .win.skey = ts, .win.ekey = ts, .isOutput = false};
return taosArrayPush(pWinInfos, &win);
}
SResultWindowInfo* getSessionTimeWindow(SArray* pWinInfos, TSKEY ts, int64_t gap,
int32_t* pIndex) {
int32_t size = taosArrayGetSize(pWinInfos);
if (size == 0) {
return addNewSessionWindow(pWinInfos, ts);
}
// find the first position which is smaller than the key
int32_t index = binarySearch(pWinInfos, size, ts, TSDB_ORDER_DESC,
getSessionWindowEndkey);
SResultWindowInfo* pWin = NULL;
if (index >= 0) {
pWin = taosArrayGet(pWinInfos, index);
if (isInWindow(pWin, ts, gap)) {
*pIndex = index;
return pWin;
}
}
if (index + 1 < size) {
pWin = taosArrayGet(pWinInfos, index + 1);
if (isInWindow(pWin, ts, gap)) {
*pIndex = index + 1;
return pWin;
}
}
if (index == size - 1) {
*pIndex = taosArrayGetSize(pWinInfos);
return addNewSessionWindow(pWinInfos, ts);
}
*pIndex = index;
return insertNewSessionWindow(pWinInfos, ts, index);
}
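
getSessionTimeWindow binary-searches for the window whose end key sits closest below ts, probes it and its right-hand neighbour with isInWindow, and only inserts a fresh [ts, ts] window when neither matches, which keeps the array ordered by end key. The gap semantics of isInWindow can be checked directly; the values below are illustrative:

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct { int64_t skey, ekey; } Win;

// Same predicate as isInWindow: ts belongs to the window if it lies inside
// [skey, ekey] or within `gap` of either edge.
static bool inWindow(const Win *w, int64_t ts, int64_t gap) {
  int64_t sGap = ts - w->skey, eGap = w->ekey - ts;
  return (sGap < 0 && sGap >= -gap) || (eGap < 0 && eGap >= -gap) ||
         (sGap >= 0 && eGap >= 0);
}

int main(void) {
  Win w = {.skey = 100, .ekey = 110};
  assert(inWindow(&w, 105, 5));   // inside the window
  assert(inWindow(&w, 114, 5));   // within gap after ekey -> extend forwards
  assert(inWindow(&w, 96, 5));    // within gap before skey -> extend backwards
  assert(!inWindow(&w, 116, 5));  // beyond the gap -> start a new session
  return 0;
}
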
int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pTs, int32_t rows,
int32_t start, int64_t gap, SHashObj* pStDeleted) {
for (int32_t i = start; i < rows; ++i) {
if (!isInWindow(pWinInfo, pTs[i], gap)) {
return i - start;
}
if (pWinInfo->win.skey > pTs[i]) {
if (pStDeleted && pWinInfo->isOutput) {
taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY));
pWinInfo->isOutput = false;
}
pWinInfo->win.skey = pTs[i];
}
pWinInfo->win.ekey = TMAX(pWinInfo->win.ekey, pTs[i]);
}
return rows - start;
}
static int32_t setWindowOutputBuf(SResultWindowInfo* pWinInfo, SResultRow** pResult,
SqlFunctionCtx* pCtx, int32_t groupId, int32_t numOfOutput,
int32_t* rowCellInfoOffset, SStreamAggSupporter* pAggSup, SExecTaskInfo* pTaskInfo) {
assert(pWinInfo->win.skey <= pWinInfo->win.ekey);
// too many time windows in query
int32_t size = taosArrayGetSize(pAggSup->pResultRows);
if (size > MAX_INTERVAL_TIME_WINDOW) {
longjmp(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW);
}
if (pWinInfo->pos.pageId == -1) {
*pResult = getNewResultRow_rv(pAggSup->pResultBuf, groupId, pAggSup->resultRowSize);
if (*pResult == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
initResultRow(*pResult);
// add a new result set for a new group
pWinInfo->pos.pageId = (*pResult)->pageId;
pWinInfo->pos.offset = (*pResult)->offset;
} else {
*pResult = getResultRowByPos(pAggSup->pResultBuf, &pWinInfo->pos);
if (!(*pResult)) {
qError("getResultRowByPos return NULL, TID:%s", GET_TASKID(pTaskInfo));
return TSDB_CODE_FAILED;
}
}
// set time window for current result
(*pResult)->win = pWinInfo->win;
setResultRowInitCtx(*pResult, pCtx, numOfOutput, rowCellInfoOffset);
return TSDB_CODE_SUCCESS;
}
static int32_t doOneWindowAgg(SStreamSessionAggOperatorInfo* pInfo,
SSDataBlock* pSDataBlock, SResultWindowInfo* pCurWin, SResultRow** pResult,
int32_t startIndex, int32_t winRows, int32_t numOutput, SExecTaskInfo* pTaskInfo ) {
SColumnInfoData* pColDataInfo =
taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
TSKEY* tsCols = (int64_t*)pColDataInfo->pData;
int32_t code = setWindowOutputBuf(pCurWin, pResult, pInfo->binfo.pCtx, pSDataBlock->info.groupId,
numOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo);
if (code != TSDB_CODE_SUCCESS || (*pResult) == NULL) {
return TSDB_CODE_QRY_OUT_OF_MEMORY;
}
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &pCurWin->win, true);
doApplyFunctions(pTaskInfo, pInfo->binfo.pCtx, &pCurWin->win,
&pInfo->twAggSup.timeWindowData, startIndex, winRows, tsCols, pSDataBlock->info.rows,
numOutput, TSDB_ORDER_ASC);
return TSDB_CODE_SUCCESS;
}
int32_t copyWinInfoToDataBlock(SSDataBlock* pBlock, SStreamAggSupporter* pAggSup,
int32_t start, int32_t num, int32_t numOfExprs, SOptrBasicInfo* pBinfo) {
for (int32_t i = start; i < num; i += 1) {
SResultWindowInfo* pWinInfo = taosArrayGet(pAggSup->pResultRows, i);
SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, pWinInfo->pos.pageId);
SResultRow* pRow = (SResultRow*)((char*)bufPage + pWinInfo->pos.offset);
for (int32_t j = 0; j < numOfExprs; ++j) {
SResultRowEntryInfo* pResultInfo = getResultCell(pRow, j, pBinfo->rowCellInfoOffset);
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, j);
char* in = GET_ROWCELL_INTERBUF(pBinfo->pCtx[j].resultInfo);
colDataAppend(pColInfoData, pBlock->info.rows, in, pResultInfo->isNullRes);
}
pBlock->info.rows += pRow->numOfRows;
releaseBufPage(pAggSup->pResultBuf, bufPage);
}
blockDataUpdateTsWindow(pBlock, -1);
return TSDB_CODE_SUCCESS;
}
int32_t getNumCompactWindow(SArray* pWinInfos, int32_t startIndex, int64_t gap) {
SResultWindowInfo* pCurWin = taosArrayGet(pWinInfos, startIndex);
int32_t size = taosArrayGetSize(pWinInfos);
// only examine the windows after startIndex
for (int32_t i = startIndex + 1; i < size; i++) {
SResultWindowInfo* pWinInfo = taosArrayGet(pWinInfos, i);
if (!isInWindow(pCurWin, pWinInfo->win.skey, gap)) {
return i - startIndex - 1;
}
}
return size - startIndex - 1;
}
void compactFunctions(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx,
int32_t numOfOutput, SExecTaskInfo* pTaskInfo) {
for (int32_t k = 0; k < numOfOutput; ++k) {
if (fmIsWindowPseudoColumnFunc(pDestCtx[k].functionId)) {
continue;
}
int32_t code = TSDB_CODE_SUCCESS;
if (functionNeedToExecute(&pDestCtx[k]) && pDestCtx[k].fpSet.combine != NULL) {
code = pDestCtx[k].fpSet.combine(&pDestCtx[k], &pSourceCtx[k]);
if (code != TSDB_CODE_SUCCESS) {
qError("%s apply functions error, code: %s", GET_TASKID(pTaskInfo), tstrerror(code));
pTaskInfo->code = code;
longjmp(pTaskInfo->env, code);
}
}
}
}
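
compactFunctions folds one window's intermediate aggregates into another through each function's combine callback, which is what the new combineFunc slot in SBuiltinFuncDefinition (and the sumCombine/minCombine/... declarations in the headers further down) supplies. A hedged sketch of the shape of a sum-style combine over a toy intermediate state; the real sumCombine works on SqlFunctionCtx result buffers:

#include <assert.h>
#include <stdint.h>

typedef struct { int64_t sum; int64_t numOfRes; } SumRes;  // toy intermediate state

// Fold the absorbed window's partial aggregate into the surviving window,
// as the compaction path below does when two session windows are merged.
static int32_t sumCombineSketch(SumRes *dst, const SumRes *src) {
  dst->sum += src->sum;
  dst->numOfRes += src->numOfRes;
  return 0;
}

int main(void) {
  SumRes winA = {.sum = 30, .numOfRes = 3};  // rows of the surviving window
  SumRes winB = {.sum = 12, .numOfRes = 2};  // rows of the absorbed window
  sumCombineSketch(&winA, &winB);
  assert(winA.sum == 42 && winA.numOfRes == 5);
  return 0;
}
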
void compactTimeWindow(SStreamSessionAggOperatorInfo* pInfo, int32_t startIndex, int32_t num,
int32_t groupId, int32_t numOfOutput, SExecTaskInfo* pTaskInfo, SHashObj* pStUpdated, SHashObj* pStDeleted) {
SResultWindowInfo* pCurWin = taosArrayGet(pInfo->streamAggSup.pResultRows, startIndex);
SResultRow* pCurResult = NULL;
setWindowOutputBuf(pCurWin, &pCurResult, pInfo->binfo.pCtx, groupId,
numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo);
num += startIndex + 1;
ASSERT(num <= taosArrayGetSize(pInfo->streamAggSup.pResultRows));
// only examine the windows after startIndex
for (int32_t i = startIndex + 1; i < num; i++) {
SResultWindowInfo* pWinInfo = taosArrayGet(pInfo->streamAggSup.pResultRows, i);
SResultRow* pWinResult = NULL;
setWindowOutputBuf(pWinInfo, &pWinResult, pInfo->pDummyCtx, groupId,
numOfOutput, pInfo->binfo.rowCellInfoOffset, &pInfo->streamAggSup, pTaskInfo);
pCurWin->win.ekey = TMAX(pCurWin->win.ekey, pWinInfo->win.ekey);
compactFunctions(pInfo->binfo.pCtx, pInfo->pDummyCtx, numOfOutput, pTaskInfo);
taosHashRemove(pStUpdated, &pWinInfo->pos, sizeof(SResultRowPosition));
if (pWinInfo->isOutput) {
taosHashPut(pStDeleted, &pWinInfo->pos, sizeof(SResultRowPosition), &pWinInfo->win.skey, sizeof(TSKEY));
pWinInfo->isOutput = false;
}
taosArrayRemove(pInfo->streamAggSup.pResultRows, i);
}
}
static void doStreamSessionWindowAggImpl(SOperatorInfo* pOperator,
SSDataBlock* pSDataBlock, SHashObj* pStUpdated, SHashObj* pStDeleted) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SStreamSessionAggOperatorInfo* pInfo = pOperator->info;
bool masterScan = true;
int32_t numOfOutput = pOperator->numOfExprs;
int64_t groupId = pSDataBlock->info.groupId;
int64_t gap = pInfo->gap;
int64_t code = TSDB_CODE_SUCCESS;
int32_t step = 1;
bool ascScan = true;
TSKEY* tsCols = NULL;
SResultRow* pResult = NULL;
int32_t winRows = 0;
if (pSDataBlock->pDataBlock != NULL) {
SColumnInfoData* pColDataInfo =
taosArrayGet(pSDataBlock->pDataBlock, pInfo->primaryTsIndex);
tsCols = (int64_t*)pColDataInfo->pData;
} else {
return ;
}
SStreamAggSupporter* pAggSup = &pInfo->streamAggSup;
for(int32_t i = 0; i < pSDataBlock->info.rows; ) {
int32_t winIndex = 0;
SResultWindowInfo* pCurWin =
getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex);
winRows =
updateSessionWindowInfo(pCurWin, tsCols, pSDataBlock->info.rows, i, pInfo->gap, pStDeleted);
code = doOneWindowAgg(pInfo, pSDataBlock, pCurWin, &pResult, i, winRows, numOfOutput, pTaskInfo);
if (code != TSDB_CODE_SUCCESS || pResult == NULL) {
longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
}
// window start(end) key interpolation
// doWindowBorderInterpolation(pOperatorInfo, pSDataBlock, pInfo->binfo.pCtx, pResult, &nextWin, startPos, forwardStep,
// pInfo->order, false);
int32_t winNum = getNumCompactWindow(pAggSup->pResultRows, winIndex, gap);
if (winNum > 0) {
compactTimeWindow(pInfo, winIndex, winNum, groupId, numOfOutput, pTaskInfo, pStUpdated, pStDeleted);
}
code = taosHashPut(pStUpdated, &pCurWin->pos, sizeof(SResultRowPosition), &(pCurWin->win.skey), sizeof(TSKEY));
if (code != TSDB_CODE_SUCCESS) {
longjmp(pTaskInfo->env, TSDB_CODE_QRY_OUT_OF_MEMORY);
}
pCurWin->isOutput = true;
i += winRows;
}
}
static void doClearSessionWindows(SStreamAggSupporter* pAggSup, SOptrBasicInfo* pBinfo,
SSDataBlock* pBlock, int32_t tsIndex, int32_t numOfOutput, int64_t gap) {
SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, tsIndex);
TSKEY *tsCols = (TSKEY*)pColDataInfo->pData;
int32_t step = 0;
for (int32_t i = 0; i < pBlock->info.rows; i += step) {
int32_t winIndex = 0;
SResultWindowInfo* pCurWin =
getSessionTimeWindow(pAggSup->pResultRows, tsCols[i], gap, &winIndex);
step = updateSessionWindowInfo(pCurWin, tsCols, pBlock->info.rows, i, gap, NULL);
doClearWindowImpl(&pCurWin->pos, pAggSup->pResultBuf, pBinfo, numOfOutput);
}
}
static int32_t copyUpdateResult(SHashObj* pStUpdated, SArray* pUpdated, int32_t groupId) {
void* pData = NULL;
size_t keyLen = 0;
while((pData = taosHashIterate(pStUpdated, pData)) != NULL) {
void* key = taosHashGetKey(pData, &keyLen);
ASSERT(keyLen == sizeof(SResultRowPosition));
SResKeyPos* pos = taosMemoryMalloc(sizeof(SResKeyPos) + sizeof(uint64_t));
if (pos == NULL) {
return TSDB_CODE_QRY_OUT_OF_MEMORY;
}
pos->groupId = groupId;
pos->pos = *(SResultRowPosition*)key;
*(int64_t*)pos->key = *(uint64_t*)pData;
taosArrayPush(pUpdated, &pos);
}
return TSDB_CODE_SUCCESS;
}
void doBuildDeleteDataBlock(SHashObj* pStDeleted, SSDataBlock* pBlock, void** Ite) {
blockDataCleanup(pBlock);
size_t keyLen = 0;
while(( (*Ite) = taosHashIterate(pStDeleted, *Ite)) != NULL) {
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, 0);
colDataAppend(pColInfoData, pBlock->info.rows, *Ite, false);
for (int32_t i = 1; i < pBlock->info.numOfCols; i++) {
pColInfoData = taosArrayGet(pBlock->pDataBlock, i);
colDataAppendNULL(pColInfoData, pBlock->info.rows);
}
pBlock->info.rows += 1;
if (pBlock->info.rows + 1 >= pBlock->info.capacity) {
break;
}
}
if ((*Ite) == NULL) {
taosHashClear(pStDeleted);
}
}
static SSDataBlock* doStreamSessionWindowAgg(SOperatorInfo* pOperator) {
if (pOperator->status == OP_EXEC_DONE) {
return NULL;
}
SStreamSessionAggOperatorInfo* pInfo = pOperator->info;
SOptrBasicInfo* pBInfo = &pInfo->binfo;
if (pOperator->status == OP_RES_TO_RETURN) {
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
if (pInfo->pDelRes->info.rows > 0) {
return pInfo->pDelRes;
}
doBuildResultDatablock(pOperator, pBInfo, &pInfo->groupResInfo,
pInfo->streamAggSup.pResultBuf);
if (pBInfo->pRes->info.rows == 0 ||
!hashRemainDataInGroupInfo(&pInfo->groupResInfo)) {
doSetOperatorCompleted(pOperator);
}
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
}
_hash_fn_t hashFn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY);
SHashObj* pStUpdated = taosHashInit(64, hashFn, true, HASH_NO_LOCK);
SOperatorInfo* downstream = pOperator->pDownstream[0];
while (1) {
SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
if (pBlock == NULL) {
break;
}
// the pDataBlock is always the same one, no need to call this again
setInputDataBlock(pOperator, pBInfo->pCtx, pBlock, TSDB_ORDER_ASC, MAIN_SCAN, true);
if (pBlock->info.type == STREAM_REPROCESS) {
doClearSessionWindows(&pInfo->streamAggSup, &pInfo->binfo, pBlock, 0,
pOperator->numOfExprs, pInfo->gap);
continue;
}
doStreamSessionWindowAggImpl(pOperator, pBlock, pStUpdated, pInfo->pStDeleted);
}
// restore the value
pOperator->status = OP_RES_TO_RETURN;
SArray* pUpdated = taosArrayInit(16, POINTER_BYTES);
copyUpdateResult(pStUpdated, pUpdated, pBInfo->pRes->info.groupId);
taosHashCleanup(pStUpdated);
finalizeUpdatedResult(pOperator->numOfExprs, pInfo->streamAggSup.pResultBuf, pUpdated,
pInfo->binfo.rowCellInfoOffset);
initMultiResInfoFromArrayList(&pInfo->groupResInfo, pUpdated);
blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity);
doBuildDeleteDataBlock(pInfo->pStDeleted, pInfo->pDelRes, &pInfo->pDelIterator);
if (pInfo->pDelRes->info.rows > 0) {
return pInfo->pDelRes;
}
doBuildResultDatablock(pOperator, &pInfo->binfo, &pInfo->groupResInfo,
pInfo->streamAggSup.pResultBuf);
return pBInfo->pRes->info.rows == 0 ? NULL : pBInfo->pRes;
}
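The operator above follows a two-phase getNext contract: the first call drains the downstream operator (clearing windows for STREAM_REPROCESS blocks, aggregating the rest), flips the status to OP_RES_TO_RETURN, and then emits delete rows before aggregate rows; later calls skip straight to the emit phase. A compressed, hypothetical model of that state machine, not the real operator types:
#include <stdio.h>

enum OpStatus { OP_NOT_OPENED, OP_RES_TO_RETURN, OP_EXEC_DONE };

static const char *getNext(enum OpStatus *status, int *pending) {
  if (*status == OP_EXEC_DONE) return NULL;
  if (*status != OP_RES_TO_RETURN) {   /* phase 1: consume all input once */
    *pending = 2;                      /* pretend input produced 2 blocks */
    *status = OP_RES_TO_RETURN;
  }
  if (*pending > 0) {                  /* phase 2: drain buffered results */
    (*pending)--;
    return "result-block";
  }
  *status = OP_EXEC_DONE;
  return NULL;
}

int main(void) {
  enum OpStatus status = OP_NOT_OPENED;
  int pending = 0;
  const char *blk;
  while ((blk = getNext(&status, &pending)) != NULL) printf("%s\n", blk);
  return 0;  /* prints "result-block" twice, then stops */
}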

View File

@ -37,6 +37,7 @@ typedef struct SBuiltinFuncDefinition {
FScalarExecProcess sprocessFunc;
FExecFinalize finalizeFunc;
FExecProcess invertFunc;
FExecCombine combineFunc;
} SBuiltinFuncDefinition;
extern const SBuiltinFuncDefinition funcMgtBuiltins[];
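The new combineFunc slot gives every builtin a way to merge two partial aggregation states of the same function, which is what allows partially aggregated results from different sources to be folded together. A reduced illustration of such a dispatch table (MiniFuncDef and countCombine are made-up stand-ins, not the real TDengine types):
#include <stdio.h>

typedef struct {
  const char *name;
  void (*combine)(long *dst, const long *src);  /* merge src into dst */
} MiniFuncDef;

static void countCombine(long *dst, const long *src) { *dst += *src; }

static const MiniFuncDef miniBuiltins[] = {
    {"count", countCombine},
};

int main(void) {
  long partialA = 3, partialB = 5;               /* two partial counts */
  miniBuiltins[0].combine(&partialA, &partialB); /* dispatch by table slot */
  printf("%s merged: %ld\n", miniBuiltins[0].name, partialA); /* 8 */
  return 0;
}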

View File

@ -27,6 +27,7 @@ bool functionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx));
int32_t functionFinalizeWithResultBuf(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, char* finalResult);
int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
EFuncDataRequired countDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWindow);
bool getCountFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
@ -37,24 +38,29 @@ EFuncDataRequired statisDataRequired(SFunctionNode* pFunc, STimeWindow* pTimeWin
bool getSumFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
int32_t sumFunction(SqlFunctionCtx *pCtx);
int32_t sumInvertFunction(SqlFunctionCtx *pCtx);
int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool minmaxFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
bool getMinmaxFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
int32_t minFunction(SqlFunctionCtx* pCtx);
int32_t maxFunction(SqlFunctionCtx *pCtx);
int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getAvgFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool avgFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
int32_t avgFunction(SqlFunctionCtx* pCtx);
int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t avgInvertFunction(SqlFunctionCtx* pCtx);
int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getStddevFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool stddevFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
int32_t stddevFunction(SqlFunctionCtx* pCtx);
int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t stddevInvertFunction(SqlFunctionCtx* pCtx);
int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getLeastSQRFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
bool leastSQRFunctionSetup(SqlFunctionCtx *pCtx, SResultRowEntryInfo* pResultInfo);
@ -73,8 +79,10 @@ int32_t diffFunction(SqlFunctionCtx *pCtx);
bool getFirstLastFuncEnv(struct SFunctionNode* pFunc, SFuncExecEnv* pEnv);
int32_t firstFunction(SqlFunctionCtx *pCtx);
int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
int32_t lastFunction(SqlFunctionCtx *pCtx);
int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock);
int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx);
bool getTopBotFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv);
int32_t topFunction(SqlFunctionCtx *pCtx);

View File

@ -745,7 +745,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = functionSetup,
.processFunc = countFunction,
.finalizeFunc = functionFinalize,
.invertFunc = countInvertFunction
.invertFunc = countInvertFunction,
.combineFunc = combineFunction,
},
{
.name = "sum",
@ -757,7 +758,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = functionSetup,
.processFunc = sumFunction,
.finalizeFunc = functionFinalize,
.invertFunc = sumInvertFunction
.invertFunc = sumInvertFunction,
.combineFunc = sumCombine,
},
{
.name = "min",
@ -768,7 +770,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getMinmaxFuncEnv,
.initFunc = minmaxFunctionSetup,
.processFunc = minFunction,
.finalizeFunc = minmaxFunctionFinalize
.finalizeFunc = minmaxFunctionFinalize,
.combineFunc = minCombine
},
{
.name = "max",
@ -779,7 +782,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getMinmaxFuncEnv,
.initFunc = minmaxFunctionSetup,
.processFunc = maxFunction,
.finalizeFunc = minmaxFunctionFinalize
.finalizeFunc = minmaxFunctionFinalize,
.combineFunc = maxCombine
},
{
.name = "stddev",
@ -790,7 +794,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = stddevFunctionSetup,
.processFunc = stddevFunction,
.finalizeFunc = stddevFinalize,
.invertFunc = stddevInvertFunction
.invertFunc = stddevInvertFunction,
.combineFunc = stddevCombine,
},
{
.name = "leastsquares",
@ -801,7 +806,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = leastSQRFunctionSetup,
.processFunc = leastSQRFunction,
.finalizeFunc = leastSQRFinalize,
.invertFunc = leastSQRInvertFunction
.invertFunc = leastSQRInvertFunction,
},
{
.name = "avg",
@ -812,7 +817,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.initFunc = avgFunctionSetup,
.processFunc = avgFunction,
.finalizeFunc = avgFinalize,
.invertFunc = avgInvertFunction
.invertFunc = avgInvertFunction,
.combineFunc = avgCombine,
},
{
.name = "percentile",
@ -894,7 +900,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getFirstLastFuncEnv,
.initFunc = functionSetup,
.processFunc = firstFunction,
.finalizeFunc = functionFinalize
.finalizeFunc = functionFinalize,
.combineFunc = firstCombine,
},
{
.name = "last",
@ -904,7 +911,8 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getFirstLastFuncEnv,
.initFunc = functionSetup,
.processFunc = lastFunction,
.finalizeFunc = lastFinalize
.finalizeFunc = lastFinalize,
.combineFunc = lastCombine,
},
{
.name = "histogram",

View File

@ -292,6 +292,24 @@ int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return pResInfo->numOfRes;
}
int32_t firstCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
int32_t type = pDestCtx->input.pData[0]->info.type;
int32_t bytes = pDestCtx->input.pData[0]->info.bytes;
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
if (pSResInfo->numOfRes != 0 &&
(pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) > *(TSKEY*)(pSBuf + bytes)) ) {
memcpy(pDBuf, pSBuf, bytes);
*(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes);
pDResInfo->numOfRes = 1;
}
return TSDB_CODE_SUCCESS;
}
int32_t dummyProcess(SqlFunctionCtx* UNUSED_PARAM(pCtx)) {
return 0;
}
@ -388,6 +406,18 @@ int32_t countInvertFunction(SqlFunctionCtx* pCtx) {
return TSDB_CODE_SUCCESS;
}
int32_t combineFunction(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
*((int64_t*)pDBuf) += *((int64_t*)pSBuf);
SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1);
return TSDB_CODE_SUCCESS;
}
#define LIST_ADD_N(_res, _col, _start, _rows, _t, numOfElem) \
do { \
_t* d = (_t*)(_col->pData); \
@ -537,6 +567,26 @@ int32_t sumInvertFunction(SqlFunctionCtx* pCtx) {
return TSDB_CODE_SUCCESS;
}
int32_t sumCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
SSumRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
int32_t type = pDestCtx->input.pData[0]->info.type;
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
SSumRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
if (IS_SIGNED_NUMERIC_TYPE(type) || type == TSDB_DATA_TYPE_BOOL) {
pDBuf->isum += pSBuf->isum;
} else if (IS_UNSIGNED_NUMERIC_TYPE(type)) {
pDBuf->usum += pSBuf->usum;
} else if (type == TSDB_DATA_TYPE_DOUBLE || type == TSDB_DATA_TYPE_FLOAT) {
pDBuf->dsum += pSBuf->dsum;
}
SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1);
return TSDB_CODE_SUCCESS;
}
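sumCombine dispatches once on the column type so the correct union member of SSumRes is accumulated. A self-contained miniature of that pattern (MiniSumRes and the type tags are invented for the sketch; C11 anonymous unions assumed):
#include <stdio.h>
#include <stdint.h>

typedef struct {
  union {
    int64_t  isum;  /* signed integers and bool */
    uint64_t usum;  /* unsigned integers */
    double   dsum;  /* float and double */
  };
} MiniSumRes;

enum { T_INT, T_UINT, T_DOUBLE };

/* Merge src into dst, choosing the union member by column type once,
 * mirroring the signed/unsigned/floating branches above. */
static void miniSumCombine(MiniSumRes *dst, const MiniSumRes *src, int type) {
  if (type == T_INT)       dst->isum += src->isum;
  else if (type == T_UINT) dst->usum += src->usum;
  else                     dst->dsum += src->dsum;
}

int main(void) {
  MiniSumRes a = {.dsum = 1.5}, b = {.dsum = 2.25};
  miniSumCombine(&a, &b, T_DOUBLE);
  printf("%g\n", a.dsum); /* 3.75 */
  return 0;
}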
bool getSumFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SSumRes);
return true;
@ -738,6 +788,24 @@ int32_t avgInvertFunction(SqlFunctionCtx* pCtx) {
return TSDB_CODE_SUCCESS;
}
int32_t avgCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
SAvgRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
int32_t type = pDestCtx->input.pData[0]->info.type;
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
SAvgRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
if (IS_INTEGER_TYPE(type)) {
pDBuf->sum.isum += pSBuf->sum.isum;
} else {
pDBuf->sum.dsum += pSBuf->sum.dsum;
}
pDBuf->count += pSBuf->count;
return TSDB_CODE_SUCCESS;
}
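avgCombine shows why the intermediate state holds a sum and a count rather than a mean: partials merge by plain addition, and only the finalize step divides, so combining stays exact. A small worked example (hypothetical names):
#include <stdio.h>
#include <stdint.h>

typedef struct {
  double  sum;
  int64_t count;
} MiniAvgRes;

/* Combining just adds the components; no division happens here. */
static void miniAvgCombine(MiniAvgRes *dst, const MiniAvgRes *src) {
  dst->sum   += src->sum;
  dst->count += src->count;
}

int main(void) {
  MiniAvgRes a = {10.0, 4}, b = {26.0, 8};        /* two partial states */
  miniAvgCombine(&a, &b);
  printf("avg = %g\n", a.sum / (double)a.count);  /* 36 / 12 = 3 */
  return 0;
}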
int32_t avgFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SInputColumnInfoData* pInput = &pCtx->input;
int32_t type = pInput->pData[0]->info.type;
@ -1273,6 +1341,34 @@ void setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STuple
}
}
int32_t minMaxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx, int32_t isMinFunc) {
SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
SMinmaxResInfo* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
int32_t type = pDestCtx->input.pData[0]->info.type;
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
SMinmaxResInfo* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
if (IS_FLOAT_TYPE(type)) {
if (pSBuf->assign &&
( (((*(double*)&pDBuf->v) < (*(double*)&pSBuf->v)) ^ isMinFunc) || !pDBuf->assign ) ) {
*(double*) &pDBuf->v = *(double*) &pSBuf->v;
}
} else {
if ( pSBuf->assign && ( ((pDBuf->v < pSBuf->v) ^ isMinFunc) || !pDBuf->assign ) ) {
pDBuf->v = pSBuf->v;
}
}
SET_VAL(pDResInfo, *((int64_t*)pDBuf), 1);
return TSDB_CODE_SUCCESS;
}
int32_t minCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
return minMaxCombine(pDestCtx, pSourceCtx, 1);
}
int32_t maxCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
return minMaxCombine(pDestCtx, pSourceCtx, 0);
}
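minMaxCombine serves both directions with a single comparison: XOR-ing the `dst < src` result with isMinFunc turns take-the-larger into take-the-smaller, which is why minCombine passes 1 and maxCombine passes 0. A tiny demonstration of the trick:
#include <stdio.h>
#include <stdbool.h>

/* With isMinFunc = false: copy src when dst < src (keep the larger).
 * With isMinFunc = true: the XOR flips it, so src wins when dst >= src
 * (keep the smaller). */
static long minMaxPick(long dst, long src, bool isMinFunc) {
  if ((dst < src) ^ isMinFunc) {
    return src;
  }
  return dst;
}

int main(void) {
  printf("max(3,7) = %ld\n", minMaxPick(3, 7, false)); /* 7 */
  printf("min(3,7) = %ld\n", minMaxPick(3, 7, true));  /* 3 */
  return 0;
}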
bool getStddevFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SStddevRes);
return true;
@ -1491,6 +1587,25 @@ int32_t stddevFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return functionFinalize(pCtx, pBlock);
}
int32_t stddevCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
SStddevRes* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
int32_t type = pDestCtx->input.pData[0]->info.type;
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
SStddevRes* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
if (IS_INTEGER_TYPE(type)) {
pDBuf->isum += pSBuf->isum;
pDBuf->quadraticISum += pSBuf->quadraticISum;
} else {
pDBuf->dsum += pSBuf->dsum;
pDBuf->quadraticDSum += pSBuf->quadraticDSum;
}
pDBuf->count += pSBuf->count;
return TSDB_CODE_SUCCESS;
}
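stddevCombine works because standard deviation only needs the count, the sum, and the sum of squares; those merge componentwise, and the finalize step computes variance = quadraticSum/count - (sum/count)^2. A worked sketch (MiniStddevRes is illustrative; link with -lm):
#include <stdio.h>
#include <math.h>

typedef struct {
  long   count;
  double sum;
  double quadraticSum;  /* sum of squared values */
} MiniStddevRes;

static void miniStddevCombine(MiniStddevRes *d, const MiniStddevRes *s) {
  d->count        += s->count;
  d->sum          += s->sum;
  d->quadraticSum += s->quadraticSum;
}

int main(void) {
  /* values {1,2} and {3,4} aggregated separately, then merged */
  MiniStddevRes a = {2, 3.0, 5.0};   /* 1+2, 1+4 */
  MiniStddevRes b = {2, 7.0, 25.0};  /* 3+4, 9+16 */
  miniStddevCombine(&a, &b);
  double mean = a.sum / a.count;
  double var  = a.quadraticSum / a.count - mean * mean;
  printf("stddev = %f\n", sqrt(var)); /* population stddev of {1,2,3,4}, about 1.118 */
  return 0;
}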
bool getLeastSQRFuncEnv(SFunctionNode* pFunc, SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SLeastSQRInfo);
return true;
@ -1979,6 +2094,24 @@ int32_t lastFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return pResInfo->numOfRes;
}
int32_t lastCombine(SqlFunctionCtx* pDestCtx, SqlFunctionCtx* pSourceCtx) {
SResultRowEntryInfo* pDResInfo = GET_RES_INFO(pDestCtx);
char* pDBuf = GET_ROWCELL_INTERBUF(pDResInfo);
int32_t type = pDestCtx->input.pData[0]->info.type;
int32_t bytes = pDestCtx->input.pData[0]->info.bytes;
SResultRowEntryInfo* pSResInfo = GET_RES_INFO(pSourceCtx);
char* pSBuf = GET_ROWCELL_INTERBUF(pSResInfo);
if (pSResInfo->numOfRes != 0 &&
(pDResInfo->numOfRes == 0 || *(TSKEY*)(pDBuf + bytes) < *(TSKEY*)(pSBuf + bytes)) ) {
memcpy(pDBuf, pSBuf, bytes);
*(TSKEY*)(pDBuf + bytes) = *(TSKEY*)(pSBuf + bytes);
pDResInfo->numOfRes = 1;
}
return TSDB_CODE_SUCCESS;
}
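firstCombine and lastCombine assume an intermediate buffer layout of value bytes followed by the row's TSKEY, so merging reduces to comparing the trailing timestamps: first keeps the earlier row, last the later one. A hedged sketch of the last case (flip the comparison to dTs > sTs for first; every name here is invented, and memcpy is used to sidestep alignment concerns):
#include <stdio.h>
#include <string.h>
#include <stdint.h>

typedef int64_t TSKEY;

/* Buffer layout: [value bytes][TSKEY]. Copy src over dst when src is
 * valid and either dst is empty or src carries a newer timestamp. */
static void lastPick(char *dst, int *dstRows, const char *src, int bytes) {
  TSKEY dTs, sTs;
  memcpy(&dTs, dst + bytes, sizeof(TSKEY));
  memcpy(&sTs, src + bytes, sizeof(TSKEY));
  if (*dstRows == 0 || dTs < sTs) {
    memcpy(dst, src, (size_t)bytes + sizeof(TSKEY)); /* value + timestamp */
    *dstRows = 1;
  }
}

int main(void) {
  char dst[sizeof(int32_t) + sizeof(TSKEY)] = {0};
  char src[sizeof(int32_t) + sizeof(TSKEY)] = {0};
  int32_t v; TSKEY t;
  v = 10; t = 100; memcpy(dst, &v, 4); memcpy(dst + 4, &t, 8);
  v = 42; t = 200; memcpy(src, &v, 4); memcpy(src + 4, &t, 8);
  int dstRows = 1;
  lastPick(dst, &dstRows, src, sizeof(int32_t));
  memcpy(&v, dst, 4); memcpy(&t, dst + 4, 8);
  printf("last = %d at ts %lld\n", v, (long long)t); /* 42 at 200 */
  return 0;
}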
bool getDiffFuncEnv(SFunctionNode* UNUSED_PARAM(pFunc), SFuncExecEnv* pEnv) {
pEnv->calcMemSize = sizeof(SDiffInfo);
return true;

View File

@ -118,6 +118,7 @@ int32_t fmGetFuncExecFuncs(int32_t funcId, SFuncExecFuncs* pFpSet) {
pFpSet->init = funcMgtBuiltins[funcId].initFunc;
pFpSet->process = funcMgtBuiltins[funcId].processFunc;
pFpSet->finalize = funcMgtBuiltins[funcId].finalizeFunc;
pFpSet->combine = funcMgtBuiltins[funcId].combineFunc;
return TSDB_CODE_SUCCESS;
}

View File

@ -230,6 +230,8 @@ const char* nodesNodeName(ENodeType type) {
return "PhysiFill";
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
return "PhysiSessionWindow";
case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
return "PhysiStreamSessionWindow";
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return "PhysiStateWindow";
case QUERY_NODE_PHYSICAL_PLAN_PARTITION:
@ -2528,6 +2530,29 @@ static int32_t jsonToOrderByExprNode(const SJson* pJson, void* pObj) {
return code;
}
static const char* jkSessionWindowTsPrimaryKey = "TsPrimaryKey";
static const char* jkSessionWindowGap = "Gap";
static int32_t sessionWindowNodeToJson(const void* pObj, SJson* pJson) {
const SSessionWindowNode * pNode = (const SSessionWindowNode*)pObj;
int32_t code = tjsonAddObject(pJson, jkSessionWindowTsPrimaryKey, nodeToJson, pNode->pCol);
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddObject(pJson, jkSessionWindowGap, nodeToJson, pNode->pGap);
}
return code;
}
static int32_t jsonToSessionWindowNode(const SJson* pJson, void* pObj) {
SSessionWindowNode* pNode = (SSessionWindowNode*)pObj;
int32_t code = jsonToNodeObject(pJson, jkSessionWindowTsPrimaryKey, (SNode **)&pNode->pCol);
if (TSDB_CODE_SUCCESS == code) {
code = jsonToNodeObject(pJson, jkSessionWindowGap, (SNode **)&pNode->pGap);
}
return code;
}
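The new session-window serializers follow the file's pairing convention: every writer registered in specificNodeToJson has a mirror reader over the same JSON keys, so a node survives a round trip. A toy illustration of that symmetry using plain snprintf/sscanf (the real code goes through tjsonAddObject and jsonToNodeObject; ToySessionWindow is invented):
#include <stdio.h>

typedef struct { long gap; } ToySessionWindow;

/* Writer: emit the node's fields under fixed keys. */
static int toyToJson(const ToySessionWindow *n, char *buf, int cap) {
  return snprintf(buf, cap, "{\"Gap\": %ld}", n->gap) < cap ? 0 : -1;
}

/* Reader: parse the same keys back into the node. */
static int toyFromJson(const char *buf, ToySessionWindow *n) {
  return sscanf(buf, "{\"Gap\": %ld}", &n->gap) == 1 ? 0 : -1;
}

int main(void) {
  char buf[64];
  ToySessionWindow w = {5000}, r = {0};
  if (toyToJson(&w, buf, sizeof(buf)) == 0 && toyFromJson(buf, &r) == 0) {
    printf("round-tripped gap = %ld\n", r.gap); /* 5000 */
  }
  return 0;
}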
static const char* jkIntervalWindowInterval = "Interval";
static const char* jkIntervalWindowOffset = "Offset";
static const char* jkIntervalWindowSliding = "Sliding";
@ -3015,8 +3040,9 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) {
return orderByExprNodeToJson(pObj, pJson);
case QUERY_NODE_LIMIT:
case QUERY_NODE_STATE_WINDOW:
case QUERY_NODE_SESSION_WINDOW:
break;
case QUERY_NODE_SESSION_WINDOW:
return sessionWindowNodeToJson(pObj, pJson);
case QUERY_NODE_INTERVAL_WINDOW:
return intervalWindowNodeToJson(pObj, pJson);
case QUERY_NODE_NODE_LIST:
@ -3096,6 +3122,7 @@ static int32_t specificNodeToJson(const void* pObj, SJson* pJson) {
case QUERY_NODE_PHYSICAL_PLAN_FILL:
return physiFillNodeToJson(pObj, pJson);
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
return physiSessionWindowNodeToJson(pObj, pJson);
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return physiStateWindowNodeToJson(pObj, pJson);
@ -3134,6 +3161,8 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) {
return jsonToTempTableNode(pJson, pObj);
case QUERY_NODE_ORDER_BY_EXPR:
return jsonToOrderByExprNode(pJson, pObj);
case QUERY_NODE_SESSION_WINDOW:
return jsonToSessionWindowNode(pJson, pObj);
case QUERY_NODE_INTERVAL_WINDOW:
return jsonToIntervalWindowNode(pJson, pObj);
case QUERY_NODE_NODE_LIST:
@ -3196,6 +3225,7 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) {
case QUERY_NODE_PHYSICAL_PLAN_FILL:
return jsonToPhysiFillNode(pJson, pObj);
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
return jsonToPhysiSessionWindowNode(pJson, pObj);
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return jsonToPhysiStateWindowNode(pJson, pObj);

View File

@ -517,6 +517,7 @@ static EDealRes dispatchPhysiPlan(SNode* pNode, ETraversalOrder order, FNodeWalk
res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext);
break;
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
res = walkWindowPhysi((SWinodwPhysiNode*)pNode, order, walker, pContext);
break;
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW: {

View File

@ -251,6 +251,8 @@ int32_t nodesNodeSize(ENodeType type) {
return sizeof(SFillPhysiNode);
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
return sizeof(SSessionWinodwPhysiNode);
case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
return sizeof(SStreamSessionWinodwPhysiNode);
case QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW:
return sizeof(SStateWinodwPhysiNode);
case QUERY_NODE_PHYSICAL_PLAN_PARTITION:
@ -664,6 +666,7 @@ void nodesDestroyNode(SNodeptr pNode) {
destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode);
break;
case QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW:
case QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW:
destroyWinodwPhysiNode((SWinodwPhysiNode*)pNode);
break;
case QUERY_NODE_PHYSICAL_PLAN_DISPATCH:

View File

@ -143,12 +143,12 @@ SNode* createDropTableClause(SAstCreateContext* pCxt, bool ignoreNotExists, SNod
SNode* createDropTableStmt(SAstCreateContext* pCxt, SNodeList* pTables);
SNode* createDropSuperTableStmt(SAstCreateContext* pCxt, bool ignoreNotExists, SNode* pRealTable);
SNode* createAlterTableModifyOptions(SAstCreateContext* pCxt, SNode* pRealTable, SNode* pOptions);
SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
const SToken* pColName, SDataType dataType);
SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, const SToken* pColName);
SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
const SToken* pOldColName, const SToken* pNewColName);
SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, const SToken* pTagName, SNode* pVal);
SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName,
SDataType dataType);
SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName);
SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pOldColName,
SToken* pNewColName);
SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, SToken* pTagName, SNode* pVal);
SNode* createUseDatabaseStmt(SAstCreateContext* pCxt, SToken* pDbName);
SNode* createShowStmt(SAstCreateContext* pCxt, ENodeType type, SNode* pDbName, SNode* pTbNamePattern);
SNode* createShowCreateDatabaseStmt(SAstCreateContext* pCxt, const SToken* pDbName);

View File

@ -94,7 +94,7 @@ static FORCE_INLINE void getSTSRowAppendInfo(uint8_t rowType, SParsedDataColInfo
col_id_t *colIdx) {
col_id_t schemaIdx = 0;
if (IS_DATA_COL_ORDERED(spd)) {
schemaIdx = spd->boundColumns[idx] - PRIMARYKEY_TIMESTAMP_COL_ID;
schemaIdx = spd->boundColumns[idx];
if (TD_IS_TP_ROW_T(rowType)) {
*toffset = (spd->cols + schemaIdx)->toffset; // the offset of firstPart
*colIdx = schemaIdx;
@ -104,7 +104,7 @@ static FORCE_INLINE void getSTSRowAppendInfo(uint8_t rowType, SParsedDataColInfo
}
} else {
ASSERT(idx == (spd->colIdxInfo + idx)->boundIdx);
schemaIdx = (spd->colIdxInfo + idx)->schemaColIdx - PRIMARYKEY_TIMESTAMP_COL_ID;
schemaIdx = (spd->colIdxInfo + idx)->schemaColIdx;
if (TD_IS_TP_ROW_T(rowType)) {
*toffset = (spd->cols + schemaIdx)->toffset;
*colIdx = schemaIdx;
@ -133,14 +133,15 @@ static FORCE_INLINE int32_t setBlockInfo(SSubmitBlk *pBlocks, STableDataBlocks *
int32_t schemaIdxCompar(const void *lhs, const void *rhs);
int32_t boundIdxCompar(const void *lhs, const void *rhs);
void setBoundColumnInfo(SParsedDataColInfo *pColList, SSchema *pSchema, col_id_t numOfCols);
void destroyBlockArrayList(SArray* pDataBlockList);
void destroyBlockHashmap(SHashObj* pDataBlockHash);
int initRowBuilder(SRowBuilder *pBuilder, int16_t schemaVer, SParsedDataColInfo *pColInfo);
int32_t allocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize, int32_t * numOfRows);
int32_t getDataBlockFromList(SHashObj* pHashList, void* id, int32_t idLen, int32_t size, int32_t startOffset, int32_t rowSize,
STableMeta* pTableMeta, STableDataBlocks** dataBlocks, SArray* pBlockList, SVCreateTbReq* pCreateTbReq);
int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** pVgDataBlocks);
int32_t buildCreateTbMsg(STableDataBlocks* pBlocks, SVCreateTbReq* pCreateTbReq);
void destroyBlockArrayList(SArray *pDataBlockList);
void destroyBlockHashmap(SHashObj *pDataBlockHash);
int initRowBuilder(SRowBuilder *pBuilder, int16_t schemaVer, SParsedDataColInfo *pColInfo);
int32_t allocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize, int32_t *numOfRows);
int32_t getDataBlockFromList(SHashObj *pHashList, void *id, int32_t idLen, int32_t size, int32_t startOffset,
int32_t rowSize, STableMeta *pTableMeta, STableDataBlocks **dataBlocks, SArray *pBlockList,
SVCreateTbReq *pCreateTbReq);
int32_t mergeTableDataBlocks(SHashObj *pHashObj, uint8_t payloadType, SArray **pVgDataBlocks);
int32_t buildCreateTbMsg(STableDataBlocks *pBlocks, SVCreateTbReq *pCreateTbReq);
int32_t allocateMemForSize(STableDataBlocks *pDataBlock, int32_t allSize);

View File

@ -968,9 +968,9 @@ SNode* createAlterTableModifyOptions(SAstCreateContext* pCxt, SNode* pRealTable,
return createAlterTableStmtFinalize(pRealTable, pStmt);
}
SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
const SToken* pColName, SDataType dataType) {
if (NULL == pRealTable) {
SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName,
SDataType dataType) {
if (NULL == pRealTable || !checkColumnName(pCxt, pColName)) {
return NULL;
}
SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);
@ -981,8 +981,8 @@ SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable,
return createAlterTableStmtFinalize(pRealTable, pStmt);
}
SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, const SToken* pColName) {
if (NULL == pRealTable) {
SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName) {
if (NULL == pRealTable || !checkColumnName(pCxt, pColName)) {
return NULL;
}
SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);
@ -992,9 +992,9 @@ SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_
return createAlterTableStmtFinalize(pRealTable, pStmt);
}
SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
const SToken* pOldColName, const SToken* pNewColName) {
if (NULL == pRealTable) {
SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pOldColName,
SToken* pNewColName) {
if (NULL == pRealTable || !checkColumnName(pCxt, pOldColName) || !checkColumnName(pCxt, pNewColName)) {
return NULL;
}
SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);
@ -1005,8 +1005,8 @@ SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int
return createAlterTableStmtFinalize(pRealTable, pStmt);
}
SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, const SToken* pTagName, SNode* pVal) {
if (NULL == pRealTable) {
SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, SToken* pTagName, SNode* pVal) {
if (NULL == pRealTable || !checkColumnName(pCxt, pTagName)) {
return NULL;
}
SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);

View File

@ -189,11 +189,18 @@ static int32_t calcConstProject(SNode* pProject, SNode** pNew) {
return code;
}
static int32_t calcConstProjections(SCalcConstContext* pCxt, SNodeList* pProjections, bool subquery) {
static bool isUselessCol(bool hasSelectValFunc, SExprNode* pProj) {
if (hasSelectValFunc && QUERY_NODE_FUNCTION == nodeType(pProj) && fmIsSelectFunc(((SFunctionNode*)pProj)->funcId)) {
return false;
}
return NULL == ((SExprNode*)pProj)->pAssociation;
}
static int32_t calcConstProjections(SCalcConstContext* pCxt, SSelectStmt* pSelect, bool subquery) {
SNode* pProj = NULL;
WHERE_EACH(pProj, pProjections) {
if (subquery && NULL == ((SExprNode*)pProj)->pAssociation) {
ERASE_NODE(pProjections);
WHERE_EACH(pProj, pSelect->pProjectionList) {
if (subquery && isUselessCol(pSelect->hasSelectValFunc, (SExprNode*)pProj)) {
ERASE_NODE(pSelect->pProjectionList);
continue;
}
SNode* pNew = NULL;
@ -226,7 +233,7 @@ static int32_t calcConstGroupBy(SCalcConstContext* pCxt, SSelectStmt* pSelect) {
}
static int32_t calcConstSelect(SCalcConstContext* pCxt, SSelectStmt* pSelect, bool subquery) {
int32_t code = calcConstProjections(pCxt, pSelect->pProjectionList, subquery);
int32_t code = calcConstProjections(pCxt, pSelect, subquery);
if (TSDB_CODE_SUCCESS == code) {
code = calcConstFromTable(pCxt, pSelect);
}

View File

@ -701,7 +701,7 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
}
lastColIdx = index;
pColList->cols[index].valStat = VAL_STAT_HAS;
pColList->boundColumns[pColList->numOfBound] = index + PRIMARYKEY_TIMESTAMP_COL_ID;
pColList->boundColumns[pColList->numOfBound] = index;
++pColList->numOfBound;
switch (pSchema[t].type) {
case TSDB_DATA_TYPE_BINARY:
@ -815,7 +815,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
return buildInvalidOperationMsg(&pCxt->msg, "no mix usage for ? and tag values");
}
SSchema* pTagSchema = &pSchema[pCxt->tags.boundColumns[i] - 1]; // colId starts with 1
SSchema* pTagSchema = &pSchema[pCxt->tags.boundColumns[i]];
param.schema = pTagSchema;
CHECK_CODE(
parseValueToken(&pCxt->pSql, &sToken, pTagSchema, precision, tmpTokenBuf, KvRowAppend, &param, &pCxt->msg));
@ -903,7 +903,7 @@ static int32_t parseUsingClause(SInsertParseContext* pCxt, SName* name, char* tb
if (TK_NK_LP != sToken.type) {
return buildSyntaxErrMsg(&pCxt->msg, "( is expected", sToken.z);
}
CHECK_CODE(parseTagsClause(pCxt, pCxt->pTableMeta->schema, getTableInfo(pCxt->pTableMeta).precision, name->tname));
CHECK_CODE(parseTagsClause(pCxt, pTagsSchema, getTableInfo(pCxt->pTableMeta).precision, name->tname));
NEXT_VALID_TOKEN(pCxt->pSql, sToken);
if (TK_NK_COMMA == sToken.type) {
return generateSyntaxErrMsg(&pCxt->msg, TSDB_CODE_PAR_TAGS_NOT_MATCHED);
@ -929,7 +929,7 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
// 1. set the parsed value from sql string
for (int i = 0; i < spd->numOfBound; ++i) {
NEXT_TOKEN_WITH_PREV(pCxt->pSql, sToken);
SSchema* pSchema = &schema[spd->boundColumns[i] - 1];
SSchema* pSchema = &schema[spd->boundColumns[i]];
if (sToken.type == TK_NK_QUESTION) {
isParseBindParam = true;
@ -1088,7 +1088,7 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt) {
if (sToken.type && pCxt->pSql[0]) {
return buildSyntaxErrMsg(&pCxt->msg, "invalid character in SQL", sToken.z);
}
if (0 == pCxt->totalNum && (!TSDB_QUERY_HAS_TYPE(pCxt->pOutput->insertType, TSDB_QUERY_TYPE_STMT_INSERT))) {
return buildInvalidOperationMsg(&pCxt->msg, "no data in sql");
}
@ -1337,7 +1337,7 @@ int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, char* tN
continue;
}
SSchema* pTagSchema = &pSchema[tags->boundColumns[c] - 1]; // colId starts with 1
SSchema* pTagSchema = &pSchema[tags->boundColumns[c]];
param.schema = pTagSchema;
int32_t colLen = pTagSchema->bytes;
@ -1384,7 +1384,7 @@ int32_t qBindStmtColsValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBuf, in
tdSRowResetBuf(pBuilder, row);
for (int c = 0; c < spd->numOfBound; ++c) {
SSchema* pColSchema = &pSchema[spd->boundColumns[c] - 1];
SSchema* pColSchema = &pSchema[spd->boundColumns[c]];
if (bind[c].num != rowNum) {
return buildInvalidOperationMsg(&pBuf, "row number in each bind param should be the same");
@ -1467,7 +1467,7 @@ int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBu
tdSRowGetBuf(pBuilder, row);
}
SSchema* pColSchema = &pSchema[spd->boundColumns[colIdx] - 1];
SSchema* pColSchema = &pSchema[spd->boundColumns[colIdx]];
if (bind->num != rowNum) {
return buildInvalidOperationMsg(&pBuf, "row number in each bind param should be the same");
@ -1539,7 +1539,7 @@ int32_t buildBoundFields(SParsedDataColInfo* boundInfo, SSchema* pSchema, int32_
}
for (int32_t i = 0; i < boundInfo->numOfBound; ++i) {
SSchema* pTagSchema = &pSchema[boundInfo->boundColumns[i] - 1];
SSchema* pTagSchema = &pSchema[boundInfo->boundColumns[i]];
strcpy((*fields)[i].name, pTagSchema->name);
(*fields)[i].type = pTagSchema->type;
(*fields)[i].bytes = pTagSchema->bytes;
@ -1638,7 +1638,7 @@ static int32_t smlBoundColumnData(SArray* cols, SParsedDataColInfo* pColList, SS
}
lastColIdx = index;
pColList->cols[index].valStat = VAL_STAT_HAS;
pColList->boundColumns[pColList->numOfBound] = index + PRIMARYKEY_TIMESTAMP_COL_ID;
pColList->boundColumns[pColList->numOfBound] = index;
++pColList->numOfBound;
switch (pSchema[t].type) {
case TSDB_DATA_TYPE_BINARY:
@ -1688,7 +1688,7 @@ static int32_t smlBuildTagRow(SArray* cols, SKVRowBuilder* tagsBuilder, SParsedD
SKvParam param = {.builder = tagsBuilder};
for (int i = 0; i < tags->numOfBound; ++i) {
SSchema* pTagSchema = &pSchema[tags->boundColumns[i] - 1]; // colId starts with 1
SSchema* pTagSchema = &pSchema[tags->boundColumns[i]];
param.schema = pTagSchema;
SSmlKv* kv = taosArrayGetP(cols, i);
if (IS_VAR_DATA_TYPE(kv->type)) {

View File

@ -74,7 +74,7 @@ void setBoundColumnInfo(SParsedDataColInfo* pColList, SSchema* pSchema, col_id_t
default:
break;
}
pColList->boundColumns[i] = pSchema[i].colId;
pColList->boundColumns[i] = i;
}
pColList->allNullLen += pColList->flen;
pColList->boundNullLen = pColList->allNullLen; // default set allNullLen
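This hunk is the other half of the boundColumns change seen throughout the parser: the array now stores plain 0-based schema indexes instead of 1-based column ids, so every lookup site drops its `- 1` / `- PRIMARYKEY_TIMESTAMP_COL_ID` adjustment. A toy before/after of the lookup (ToySchema is invented):
#include <stdio.h>

typedef struct { int colId; const char *name; } ToySchema;

int main(void) {
  ToySchema schema[] = {{1, "ts"}, {2, "c1"}, {3, "c2"}};
  int boundOld = schema[2].colId;  /* old scheme: stores colId 3 */
  int boundNew = 2;                /* new scheme: stores index 2 */
  printf("old: %s\n", schema[boundOld - 1].name); /* must subtract 1 */
  printf("new: %s\n", schema[boundNew].name);     /* indexes directly */
  return 0;  /* both print "c2" */
}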

View File

@ -812,7 +812,7 @@ static EDealRes translateOperator(STranslateContext* pCxt, SOperatorNode* pOp) {
return DEAL_RES_CONTINUE;
}
static EDealRes haveAggOrNonstdFunction(SNode* pNode, void* pContext) {
static EDealRes haveVectorFunction(SNode* pNode, void* pContext) {
if (isAggFunc(pNode)) {
*((bool*)pContext) = true;
return DEAL_RES_END;
@ -857,7 +857,7 @@ static int32_t rewriteCountStar(STranslateContext* pCxt, SFunctionNode* pCount)
static bool hasInvalidFuncNesting(SNodeList* pParameterList) {
bool hasInvalidFunc = false;
nodesWalkExprs(pParameterList, haveAggOrNonstdFunction, &hasInvalidFunc);
nodesWalkExprs(pParameterList, haveVectorFunction, &hasInvalidFunc);
return hasInvalidFunc;
}
@ -1009,6 +1009,7 @@ static EDealRes rewriteColToSelectValFunc(STranslateContext* pCxt, SNode** pNode
}
if (TSDB_CODE_SUCCESS == pCxt->errCode) {
*pNode = (SNode*)pFunc;
pCxt->pCurrStmt->hasSelectValFunc = true;
} else {
nodesDestroyNode(pFunc);
}
@ -1096,7 +1097,7 @@ typedef struct CheckAggColCoexistCxt {
STranslateContext* pTranslateCxt;
bool existAggFunc;
bool existCol;
bool existNonstdFunc;
bool existIndefiniteRowsFunc;
int32_t selectFuncNum;
bool existOtherAggFunc;
} CheckAggColCoexistCxt;
@ -1113,7 +1114,7 @@ static EDealRes doCheckAggColCoexist(SNode* pNode, void* pContext) {
return DEAL_RES_IGNORE_CHILD;
}
if (isIndefiniteRowsFunc(pNode)) {
pCxt->existNonstdFunc = true;
pCxt->existIndefiniteRowsFunc = true;
return DEAL_RES_IGNORE_CHILD;
}
if (isScanPseudoColumnFunc(pNode) || QUERY_NODE_COLUMN == nodeType(pNode)) {
@ -1129,7 +1130,7 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect)
CheckAggColCoexistCxt cxt = {.pTranslateCxt = pCxt,
.existAggFunc = false,
.existCol = false,
.existNonstdFunc = false,
.existIndefiniteRowsFunc = false,
.selectFuncNum = 0,
.existOtherAggFunc = false};
nodesWalkExprs(pSelect->pProjectionList, doCheckAggColCoexist, &cxt);
@ -1142,7 +1143,7 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect)
if ((cxt.selectFuncNum > 1 || cxt.existAggFunc || NULL != pSelect->pWindow) && cxt.existCol) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_SINGLE_GROUP);
}
if (cxt.existNonstdFunc && cxt.existCol) {
if (cxt.existIndefiniteRowsFunc && cxt.existCol) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
}
return TSDB_CODE_SUCCESS;
@ -4079,9 +4080,7 @@ static int32_t addValToKVRow(STranslateContext* pCxt, SValueNode* pVal, const SS
return parseJsontoTagData(pVal->literal, pBuilder, &pCxt->msgBuf, pSchema->colId);
}
if (pVal->node.resType.type == TSDB_DATA_TYPE_NULL) {
// todo
} else {
if (pVal->node.resType.type != TSDB_DATA_TYPE_NULL) {
tdAddColToKVRow(pBuilder, pSchema->colId, nodesGetValueFromNode(pVal),
IS_VAR_DATA_TYPE(pSchema->type) ? varDataTLen(pVal->datum.p) : TYPE_BYTES[pSchema->type]);
}
@ -4097,16 +4096,17 @@ static int32_t createValueFromFunction(STranslateContext* pCxt, SFunctionNode* p
return code;
}
static SDataType schemaToDataType(SSchema* pSchema) {
SDataType dt = {.type = pSchema->type, .bytes = pSchema->bytes, .precision = 0, .scale = 0};
static SDataType schemaToDataType(uint8_t precision, SSchema* pSchema) {
SDataType dt = {.type = pSchema->type, .bytes = pSchema->bytes, .precision = precision, .scale = 0};
return dt;
}
static int32_t translateTagVal(STranslateContext* pCxt, SSchema* pSchema, SNode* pNode, SValueNode** pVal) {
static int32_t translateTagVal(STranslateContext* pCxt, uint8_t precision, SSchema* pSchema, SNode* pNode,
SValueNode** pVal) {
if (QUERY_NODE_FUNCTION == nodeType(pNode)) {
return createValueFromFunction(pCxt, (SFunctionNode*)pNode, pVal);
} else if (QUERY_NODE_VALUE == nodeType(pNode)) {
return (DEAL_RES_ERROR == translateValueImpl(pCxt, (SValueNode*)pNode, schemaToDataType(pSchema))
return (DEAL_RES_ERROR == translateValueImpl(pCxt, (SValueNode*)pNode, schemaToDataType(precision, pSchema))
? pCxt->errCode
: TSDB_CODE_SUCCESS);
} else {
@ -4137,7 +4137,7 @@ static int32_t buildKVRowForBindTags(STranslateContext* pCxt, SCreateSubTableCla
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_TAG_NAME, pCol->colName);
}
SValueNode* pVal = NULL;
int32_t code = translateTagVal(pCxt, pSchema, pNode, &pVal);
int32_t code = translateTagVal(pCxt, pSuperTableMeta->tableInfo.precision, pSchema, pNode, &pVal);
if (TSDB_CODE_SUCCESS == code) {
if (NULL == pVal) {
pVal = (SValueNode*)pNode;
@ -4167,7 +4167,7 @@ static int32_t buildKVRowForAllTags(STranslateContext* pCxt, SCreateSubTableClau
int32_t index = 0;
FOREACH(pNode, pStmt->pValsOfTags) {
SValueNode* pVal = NULL;
int32_t code = translateTagVal(pCxt, pTagSchema + index, pNode, &pVal);
int32_t code = translateTagVal(pCxt, pSuperTableMeta->tableInfo.precision, pTagSchema + index, pNode, &pVal);
if (TSDB_CODE_SUCCESS == code) {
if (NULL == pVal) {
pVal = (SValueNode*)pNode;
@ -4447,19 +4447,21 @@ static int32_t buildUpdateTagValReq(STranslateContext* pCxt, SAlterTableStmt* pS
return TSDB_CODE_OUT_OF_MEMORY;
}
if (DEAL_RES_ERROR == translateValueImpl(pCxt, pStmt->pVal, schemaToDataType(pSchema))) {
if (DEAL_RES_ERROR ==
translateValueImpl(pCxt, pStmt->pVal, schemaToDataType(pTableMeta->tableInfo.precision, pSchema))) {
return pCxt->errCode;
}
pReq->isNull = (TSDB_DATA_TYPE_NULL == pStmt->pVal->node.resType.type);
if(pStmt->pVal->node.resType.type == TSDB_DATA_TYPE_JSON){
if (pStmt->pVal->node.resType.type == TSDB_DATA_TYPE_JSON) {
SKVRowBuilder kvRowBuilder = {0};
int32_t code = tdInitKVRowBuilder(&kvRowBuilder);
int32_t code = tdInitKVRowBuilder(&kvRowBuilder);
if (TSDB_CODE_SUCCESS != code) {
return TSDB_CODE_OUT_OF_MEMORY;
}
if (pStmt->pVal->literal && strlen(pStmt->pVal->literal) > (TSDB_MAX_JSON_TAG_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) {
if (pStmt->pVal->literal &&
strlen(pStmt->pVal->literal) > (TSDB_MAX_JSON_TAG_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) {
return buildSyntaxErrMsg(&pCxt->msgBuf, "json string longer than 4095", pStmt->pVal->literal);
}
@ -4477,7 +4479,7 @@ static int32_t buildUpdateTagValReq(STranslateContext* pCxt, SAlterTableStmt* pS
pReq->pTagVal = row;
pStmt->pVal->datum.p = row; // for free
tdDestroyKVRowBuilder(&kvRowBuilder);
}else{
} else {
pReq->nTagVal = pStmt->pVal->node.resType.bytes;
if (TSDB_DATA_TYPE_NCHAR == pStmt->pVal->node.resType.type) {
pReq->nTagVal = pReq->nTagVal * TSDB_NCHAR_SIZE;
@ -4688,16 +4690,16 @@ static int32_t rewriteAlterTable(STranslateContext* pCxt, SQuery* pQuery) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_COL_JSON);
}
if (getNumOfTags(pTableMeta) == 1 && pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_ALTER_TABLE, "can not drop tag if there is only one tag");
if (getNumOfTags(pTableMeta) == 1 && pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_ALTER_TABLE,
"can not drop tag if there is only one tag");
}
if (TSDB_SUPER_TABLE == pTableMeta->tableType) {
SSchema* pTagsSchema = getTableTagSchema(pTableMeta);
if (getNumOfTags(pTableMeta) == 1 && pTagsSchema->type == TSDB_DATA_TYPE_JSON &&
(pStmt->alterType == TSDB_ALTER_TABLE_ADD_TAG ||
pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG ||
pStmt->alterType == TSDB_ALTER_TABLE_UPDATE_TAG_BYTES)) {
(pStmt->alterType == TSDB_ALTER_TABLE_ADD_TAG || pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG ||
pStmt->alterType == TSDB_ALTER_TABLE_UPDATE_TAG_BYTES)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_ONLY_ONE_JSON_TAG);
}
return TSDB_CODE_SUCCESS;

View File

@ -154,6 +154,7 @@ void generateTestST1(MockCatalogService* mcs) {
builder.done();
mcs->createSubTable("test", "st1", "st1s1", 1);
mcs->createSubTable("test", "st1", "st1s2", 2);
mcs->createSubTable("test", "st1", "st1s3", 1);
}
} // namespace

View File

@ -945,7 +945,8 @@ static int32_t createIntervalPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChil
static int32_t createSessionWindowPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
SWindowLogicNode* pWindowLogicNode, SPhysiNode** pPhyNode) {
SSessionWinodwPhysiNode* pSession = (SSessionWinodwPhysiNode*)makePhysiNode(
pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode, QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW);
pCxt, getPrecision(pChildren), (SLogicNode*)pWindowLogicNode,
(pCxt->pPlanCxt->streamQuery ? QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW : QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW));
if (NULL == pSession) {
return TSDB_CODE_OUT_OF_MEMORY;
}

View File

@ -44,3 +44,9 @@ TEST_F(PlanJoinTest, withWhere) {
run("SELECT t1.c1, t2.c1 FROM st1s1 t1 JOIN st1s2 t2 ON t1.ts = t2.ts "
"WHERE t1.c1 > t2.c1 AND t1.c2 = 'abc' AND t2.c2 = 'qwe'");
}
TEST_F(PlanJoinTest, multiJoin) {
useDb("root", "test");
run("SELECT t1.c1, t2.c1 FROM st1s1 t1 JOIN st1s2 t2 ON t1.ts = t2.ts JOIN st1s3 t3 ON t1.ts = t3.ts");
}

View File

@ -127,7 +127,10 @@ static SScalableBf *getSBf(SUpdateInfo *pInfo, TSKEY ts) {
if (pInfo->minTS < 0) {
pInfo->minTS = (TSKEY)(ts / pInfo->interval * pInfo->interval);
}
uint64_t index = (uint64_t)((ts - pInfo->minTS) / pInfo->interval);
int64_t index = (int64_t)((ts - pInfo->minTS) / pInfo->interval);
if (index < 0) {
return NULL;
}
if (index >= pInfo->numSBFs) {
uint64_t count = index + 1 - pInfo->numSBFs;
windowSBfDelete(pInfo, count);
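Switching the slot index to a signed int64_t matters because a timestamp older than minTS now surfaces as a negative index the caller can reject, instead of wrapping into an enormous uint64_t. A small worked example of the arithmetic (the function name is illustrative):
#include <stdio.h>
#include <stdint.h>

typedef int64_t TSKEY;

/* Slot index of a timestamp within the sliding window of bloom filters. */
static int64_t sbfIndex(TSKEY ts, TSKEY minTS, int64_t interval) {
  return (ts - minTS) / interval;  /* signed: too-old ts goes negative */
}

int main(void) {
  TSKEY   minTS    = 1000;
  int64_t interval = 100;
  printf("%lld\n", (long long)sbfIndex(1250, minTS, interval)); /*  2 */
  printf("%lld\n", (long long)sbfIndex(900,  minTS, interval)); /* -1: reject */
  return 0;
}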

View File

@ -148,8 +148,8 @@ typedef struct SSyncNode {
SSyncRespMgr* pSyncRespMgr;
// restore state
bool restoreFinish;
//sem_t restoreSem;
bool restoreFinish;
// sem_t restoreSem;
SSnapshot* pSnapshot;
} SSyncNode;

View File

@ -42,6 +42,7 @@ typedef struct SVotesGranted {
SVotesGranted *voteGrantedCreate(SSyncNode *pSyncNode);
void voteGrantedDestroy(SVotesGranted *pVotesGranted);
void voteGrantedUpdate(SVotesGranted *pVotesGranted, SSyncNode *pSyncNode);
bool voteGrantedMajority(SVotesGranted *pVotesGranted);
void voteGrantedVote(SVotesGranted *pVotesGranted, SyncRequestVoteReply *pMsg);
void voteGrantedReset(SVotesGranted *pVotesGranted, SyncTerm term);
@ -65,6 +66,7 @@ typedef struct SVotesRespond {
SVotesRespond *votesRespondCreate(SSyncNode *pSyncNode);
void votesRespondDestory(SVotesRespond *pVotesRespond);
void votesRespondUpdate(SVotesRespond *pVotesRespond, SSyncNode *pSyncNode);
bool votesResponded(SVotesRespond *pVotesRespond, const SRaftId *pRaftId);
void votesRespondAdd(SVotesRespond *pVotesRespond, const SyncRequestVoteReply *pMsg);
void votesRespondReset(SVotesRespond *pVotesRespond, SyncTerm term);

View File

@ -362,8 +362,8 @@ int32_t syncNodeOnAppendEntriesCb(SSyncNode* ths, SyncAppendEntries* pMsg) {
// restore finish
if (pEntry->index == ths->pLogStore->getLastIndex(ths->pLogStore)) {
if (ths->restoreFinish == false) {
if (ths->pFsm->FpRestoreFinish != NULL) {
ths->pFsm->FpRestoreFinish(ths->pFsm);
if (ths->pFsm->FpRestoreFinishCb != NULL) {
ths->pFsm->FpRestoreFinishCb(ths->pFsm);
}
ths->restoreFinish = true;
sInfo("==syncNodeOnAppendEntriesCb== restoreFinish set true %p vgId:%d", ths, ths->vgId);

View File

@ -139,8 +139,8 @@ void syncMaybeAdvanceCommitIndex(SSyncNode* pSyncNode) {
// restore finish
if (pEntry->index == pSyncNode->pLogStore->getLastIndex(pSyncNode->pLogStore)) {
if (pSyncNode->restoreFinish == false) {
if (pSyncNode->pFsm->FpRestoreFinish != NULL) {
pSyncNode->pFsm->FpRestoreFinish(pSyncNode->pFsm);
if (pSyncNode->pFsm->FpRestoreFinishCb != NULL) {
pSyncNode->pFsm->FpRestoreFinishCb(pSyncNode->pFsm);
}
pSyncNode->restoreFinish = true;
sInfo("==syncMaybeAdvanceCommitIndex== restoreFinish set true %p vgId:%d", pSyncNode, pSyncNode->vgId);

View File

@ -509,7 +509,7 @@ SSyncNode* syncNodeOpen(const SSyncInfo* pSyncInfo) {
pSyncNode->pSnapshot = taosMemoryMalloc(sizeof(SSnapshot));
pSyncNode->pFsm->FpGetSnapshot(pSyncNode->pFsm, pSyncNode->pSnapshot);
}
//tsem_init(&(pSyncNode->restoreSem), 0, 0);
// tsem_init(&(pSyncNode->restoreSem), 0, 0);
// start in syncNodeStart
// start raft
@ -606,7 +606,7 @@ void syncNodeClose(SSyncNode* pSyncNode) {
taosMemoryFree(pSyncNode->pSnapshot);
}
//tsem_destroy(&pSyncNode->restoreSem);
// tsem_destroy(&pSyncNode->restoreSem);
// free memory in syncFreeNode
// taosMemoryFree(pSyncNode);
@ -920,6 +920,17 @@ char* syncNode2SimpleStr(const SSyncNode* pSyncNode) {
}
void syncNodeUpdateConfig(SSyncNode* pSyncNode, SSyncCfg* newConfig) {
bool hit = false;
for (int i = 0; i < newConfig->replicaNum; ++i) {
if (strcmp(pSyncNode->myNodeInfo.nodeFqdn, (newConfig->nodeInfo)[i].nodeFqdn) == 0 &&
pSyncNode->myNodeInfo.nodePort == (newConfig->nodeInfo)[i].nodePort) {
newConfig->myIndex = i;
hit = true;
break;
}
}
ASSERT(hit == true);
pSyncNode->pRaftCfg->cfg = *newConfig;
int32_t ret = raftCfgPersist(pSyncNode->pRaftCfg);
ASSERT(ret == 0);
@ -949,6 +960,8 @@ void syncNodeUpdateConfig(SSyncNode* pSyncNode, SSyncCfg* newConfig) {
syncIndexMgrUpdate(pSyncNode->pNextIndex, pSyncNode);
syncIndexMgrUpdate(pSyncNode->pMatchIndex, pSyncNode);
voteGrantedUpdate(pSyncNode->pVotesGranted, pSyncNode);
votesRespondUpdate(pSyncNode->pVotesRespond, pSyncNode);
syncNodeLog2("==syncNodeUpdateConfig==", pSyncNode);
}

View File

@ -45,6 +45,17 @@ void voteGrantedDestroy(SVotesGranted *pVotesGranted) {
}
}
void voteGrantedUpdate(SVotesGranted *pVotesGranted, SSyncNode *pSyncNode) {
pVotesGranted->replicas = &(pSyncNode->replicasId);
pVotesGranted->replicaNum = pSyncNode->replicaNum;
voteGrantedClearVotes(pVotesGranted);
pVotesGranted->term = 0;
pVotesGranted->quorum = pSyncNode->quorum;
pVotesGranted->toLeader = false;
pVotesGranted->pSyncNode = pSyncNode;
}
bool voteGrantedMajority(SVotesGranted *pVotesGranted) {
bool ret = pVotesGranted->votes >= pVotesGranted->quorum;
return ret;
@ -168,6 +179,13 @@ void votesRespondDestory(SVotesRespond *pVotesRespond) {
}
}
void votesRespondUpdate(SVotesRespond *pVotesRespond, SSyncNode *pSyncNode) {
pVotesRespond->replicas = &(pSyncNode->replicasId);
pVotesRespond->replicaNum = pSyncNode->replicaNum;
pVotesRespond->term = 0;
pVotesRespond->pSyncNode = pSyncNode;
}
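voteGrantedUpdate and votesRespondUpdate rebuild the vote trackers from the node's new replica list after a membership change; a candidate still wins only once granted votes reach the quorum. Assuming the usual Raft majority rule (quorum = replicaNum/2 + 1, an assumption here rather than something this diff shows), a tiny sketch:
#include <stdio.h>
#include <stdbool.h>

static int quorumOf(int replicaNum) { return replicaNum / 2 + 1; }

/* True once the granted-vote count reaches a majority of replicas. */
static bool majority(int votes, int replicaNum) {
  return votes >= quorumOf(replicaNum);
}

int main(void) {
  printf("3 replicas, 2 votes: %s\n", majority(2, 3) ? "win" : "wait"); /* win */
  printf("5 replicas, 2 votes: %s\n", majority(2, 5) ? "win" : "wait"); /* wait */
  return 0;
}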
bool votesResponded(SVotesRespond *pVotesRespond, const SRaftId *pRaftId) {
bool ret = false;
for (int i = 0; i < pVotesRespond->replicaNum; ++i) {

View File

@ -73,9 +73,7 @@ int32_t GetSnapshotCb(struct SSyncFSM* pFsm, SSnapshot* pSnapshot) {
return 0;
}
void FpRestoreFinishCb(struct SSyncFSM* pFsm) {
sTrace("==callback== ==FpRestoreFinishCb==");
}
void RestoreFinishCb(struct SSyncFSM* pFsm) { sTrace("==callback== ==RestoreFinishCb=="); }
SSyncFSM* createFsm() {
SSyncFSM* pFsm = (SSyncFSM*)taosMemoryMalloc(sizeof(SSyncFSM));
@ -83,7 +81,7 @@ SSyncFSM* createFsm() {
pFsm->FpPreCommitCb = PreCommitCb;
pFsm->FpRollBackCb = RollBackCb;
pFsm->FpGetSnapshot = GetSnapshotCb;
pFsm->FpRestoreFinish = FpRestoreFinishCb;
pFsm->FpRestoreFinishCb = RestoreFinishCb;
return pFsm;
}

View File

@ -272,6 +272,10 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_EXIST, "Consumer not exist")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_CONSUMER_NOT_READY, "Consumer waiting for rebalance")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOPIC_SUBSCRIBED, "Topic subscribed cannot be dropped")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_ALREADY_EXIST, "Stream already exists")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_STREAM_NOT_EXIST, "Stream not exist")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_STREAM_OPTION, "Invalid stream option")
// mnode-sma
TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_ALREADY_EXIST, "SMA already exists")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_NOT_EXIST, "SMA does not exist")

View File

@ -55,7 +55,8 @@
./test.sh -f tsim/bnode/basic1.sim
# ---- mnode
./test.sh -f tsim/mnode/basic1.sim
#./test.sh -f tsim/mnode/basic1.sim
./test.sh -f tsim/mnode/basic2.sim
# ---- show
./test.sh -f tsim/show/basic.sim
@ -66,6 +67,8 @@
# ---- stream
./test.sh -f tsim/stream/basic0.sim
./test.sh -f tsim/stream/basic1.sim
./test.sh -f tsim/stream/session0.sim
./test.sh -f tsim/stream/session1.sim
# ---- transaction
./test.sh -f tsim/trans/lossdata1.sim
@ -92,6 +95,8 @@
#./test.sh -f tsim/stable/show.sim
./test.sh -f tsim/stable/values.sim
./test.sh -f tsim/stable/vnode3.sim
./test.sh -f tsim/stable/column_add.sim
./test.sh -f tsim/stable/column_drop.sim
# --- for multi process mode
@ -104,7 +109,7 @@
./test.sh -f tsim/tmq/basic3.sim -m
./test.sh -f tsim/stable/vnode3.sim -m
./test.sh -f tsim/qnode/basic1.sim -m
./test.sh -f tsim/mnode/basic1.sim -m
#./test.sh -f tsim/mnode/basic1.sim -m
# --- sma
./test.sh -f tsim/sma/tsmaCreateInsertData.sim

View File

@ -136,7 +136,7 @@ echo "qDebugFlag 143" >> $TAOS_CFG
echo "rpcDebugFlag 143" >> $TAOS_CFG
echo "tmrDebugFlag 131" >> $TAOS_CFG
echo "uDebugFlag 143" >> $TAOS_CFG
echo "sDebugFlag 135" >> $TAOS_CFG
echo "sDebugFlag 143" >> $TAOS_CFG
echo "wDebugFlag 143" >> $TAOS_CFG
echo "numOfLogLines 20000000" >> $TAOS_CFG
echo "statusInterval 1" >> $TAOS_CFG

View File

@ -0,0 +1,84 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/deploy.sh -n dnode2 -i 2
system sh/exec.sh -n dnode1 -s start
system sh/exec.sh -n dnode2 -s start
sql connect
print =============== show dnodes
sql show dnodes;
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
sql show mnodes;
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data02 != LEADER then
return -1
endi
print =============== create dnodes
sql create dnode $hostname port 7200
sleep 2000
sql show dnodes;
if $rows != 2 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data10 != 2 then
return -1
endi
print $data02
if $data02 != 0 then
return -1
endi
if $data12 != 0 then
return -1
endi
if $data04 != ready then
return -1
endi
if $data14 != ready then
return -1
endi
sql show mnodes;
if $rows != 1 then
return -1
endi
if $data00 != 1 then
return -1
endi
if $data02 != LEADER then
return -1
endi
print =============== create mnode 2
sql create mnode on dnode 2
sql show mnodes
if $rows != 2 then
return -1
endi

View File

@ -1,141 +0,0 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "2")
sql show db.stables
if $rows != 1 then
return -1
endi
if $data[0][0] != stb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 3 then
return -1
endi
if $data[0][6] != abd then
return -1
endi
sql show db.tables
if $rows != 1 then
return -1
endi
if $data[0][0] != ctb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != stb then
return -1
endi
if $data[0][6] != 2 then
return -1
endi
if $data[0][9] != CHILD_TABLE then
return -1
endi
sql select * from db.stb
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 101 then
return -1
endi
print ========== add column c3
sql alter table db.stb add column c3 int
sql show db.stables
if $data[0][3] != 4 then
return -1
endi
sql show db.tables
if $data[0][3] != 4 then
return -1
endi
sql select * from db.stb
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
sql insert into db.ctb values(now+1s, 1, 2, 3)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 2 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[2][2] != 2 then
return -1
endi
if $data[1][3] != 3 then
return -1
endi
if $data[1][4] != 101 then
return -1
endi
print ========== add column c4
sql alter table db.stb add column c4 bigint
sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
sql select * from db.stb
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]

View File

@ -0,0 +1,303 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "2")
sql show db.stables
if $rows != 1 then
return -1
endi
if $data[0][0] != stb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 3 then
return -1
endi
if $data[0][6] != abd then
return -1
endi
sql show db.tables
if $rows != 1 then
return -1
endi
if $data[0][0] != ctb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != stb then
return -1
endi
if $data[0][6] != 2 then
return -1
endi
if $data[0][9] != CHILD_TABLE then
return -1
endi
sql select * from db.stb
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 101 then
return -1
endi
sql_error alter table db.stb add column ts int
sql_error alter table db.stb add column t1 int
sql_error alter table db.stb add column t2 int
sql_error alter table db.stb add column t3 int
sql_error alter table db.stb add column c1 int
print ========== step1 add column c3
sql alter table db.stb add column c3 int
sql show db.stables
if $data[0][3] != 4 then
return -1
endi
sql show db.tables
if $data[0][3] != 4 then
return -1
endi
sql select * from db.stb
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
sql insert into db.ctb values(now+1s, 1, 2, 3)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 2 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[1][2] != 2 then
return -1
endi
if $data[1][3] != 3 then
return -1
endi
if $data[1][4] != 101 then
return -1
endi
print ========== step2 add column c4
sql alter table db.stb add column c4 bigint
sql select * from db.stb
sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 3 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != NULL then
return -1
endi
if $data[0][5] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[1][2] != 2 then
return -1
endi
if $data[1][3] != 3 then
return -1
endi
if $data[1][4] != NULL then
return -1
endi
if $data[1][5] != 101 then
return -1
endi
if $data[2][1] != 1 then
return -1
endi
if $data[2][2] != 2 then
return -1
endi
if $data[2][3] != 3 then
return -1
endi
if $data[2][4] != 4 then
return -1
endi
if $data[2][5] != 101 then
return -1
endi
print ========== step3 add column c5
sql alter table db.stb add column c5 int
sql insert into db.ctb values(now+3s, 1, 2, 3, 4, 5)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
if $rows != 4 then
return -1
endi
if $data[2][1] != 1 then
return -1
endi
if $data[2][2] != 2 then
return -1
endi
if $data[2][3] != 3 then
return -1
endi
if $data[2][4] != 4 then
return -1
endi
if $data[2][5] != NULL then
return -1
endi
if $data[2][6] != 101 then
return -1
endi
if $data[3][1] != 1 then
return -1
endi
if $data[3][2] != 2 then
return -1
endi
if $data[3][3] != 3 then
return -1
endi
if $data[3][4] != 4 then
return -1
endi
if $data[3][5] != 5 then
return -1
endi
if $data[3][6] != 101 then
return -1
endi
print ========== step4 add column c6
sql alter table db.stb add column c6 int
sql insert into db.ctb values(now+4s, 1, 2, 3, 4, 5, 6)
sql select * from db.stb
if $rows != 5 then
return -1
endi
if $data[3][1] != 1 then
return -1
endi
if $data[3][2] != 2 then
return -1
endi
if $data[3][3] != 3 then
return -1
endi
if $data[3][4] != 4 then
return -1
endi
if $data[3][5] != 5 then
return -1
endi
if $data[3][6] != NULL then
return -1
endi
if $data[3][7] != 101 then
return -1
endi
if $data[4][1] != 1 then
return -1
endi
if $data[4][2] != 2 then
return -1
endi
if $data[4][3] != 3 then
return -1
endi
if $data[4][4] != 4 then
return -1
endi
if $data[4][5] != 5 then
return -1
endi
if $data[4][6] != 6 then
return -1
endi
if $data[4][7] != 101 then
return -1
endi
print ========== step5 describe
sql describe db.ctb
if $rows != 10 then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT

View File

@ -0,0 +1,209 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4), c3 int, c4 bigint, c5 int, c6 int) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "2", 3, 4, 5, 6)
sql show db.stables
if $rows != 1 then
return -1
endi
if $data[0][0] != stb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 7 then
return -1
endi
if $data[0][4] != 3 then
return -1
endi
if $data[0][6] != abd then
return -1
endi
sql show db.tables
if $rows != 1 then
return -1
endi
if $data[0][0] != ctb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 7 then
return -1
endi
if $data[0][4] != stb then
return -1
endi
if $data[0][6] != 2 then
return -1
endi
if $data[0][9] != CHILD_TABLE then
return -1
endi
sql select * from db.stb
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 4 then
return -1
endi
if $data[0][5] != 5 then
return -1
endi
if $data[0][6] != 6 then
return -1
endi
if $data[0][7] != 101 then
return -1
endi
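# these ALTERs must fail: the first (timestamp) column, tag columns, and a
# column that does not exist cannot be dropped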
sql_error alter table db.stb drop column ts
sql_error alter table db.stb drop column t1
sql_error alter table db.stb drop column t2
sql_error alter table db.stb drop column t3
sql_error alter table db.stb drop column c9
print ========== step1 drop column c6
sql alter table db.stb drop column c6
sql show db.stables
if $data[0][3] != 6 then
return -1
endi
sql show db.tables
if $data[0][3] != 6 then
return -1
endi
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 4 then
return -1
endi
if $data[0][5] != 5 then
return -1
endi
if $data[0][6] != 101 then
return -1
endi
sql insert into db.ctb values(now+1s, 1, 2, 3, 4, 5)
sql select * from db.stb
if $rows != 2 then
return -1
endi
print ========== step2 drop column c5
sql alter table db.stb drop column c5
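# assumption: the client's cached schema version lets the first over-wide
# insert through; after the schema refresh the same statement must fail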
sql insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
sql insert into db.ctb values(now+3s, 1, 2, 3, 4)
sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
sql select * from db.stb
if $rows != 4 then
return -1
endi
print ========== step3 drop column c4
sql alter table db.stb drop column c4
sql select * from db.stb
sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4)
sql insert into db.ctb values(now+3s, 1, 2, 3)
sql select * from db.stb
if $rows != 5 then
return -1
endi
print ========== step4 add column c4
sql alter table db.stb add column c4 binary(13)
sql insert into db.ctb values(now+4s, 1, 2, 3, '4')
sql select * from db.stb
if $rows != 6 then
return -1
endi
if $data[1][4] != NULL then
return -1
endi
if $data[2][4] != NULL then
return -1
endi
if $data[3][4] != NULL then
return -1
endi
if $data[5][4] != 4 then
return -1
endi
print ========== step5 describe
sql describe db.ctb
if $rows != 8 then
return -1
endi
if $data[0][0] != ts then
return -1
endi
if $data[1][0] != c1 then
return -1
endi
if $data[2][0] != c2 then
return -1
endi
if $data[3][0] != c3 then
return -1
endi
if $data[4][0] != c4 then
return -1
endi
if $data[4][1] != VARCHAR then
return -1
endi
if $data[4][2] != 13 then
return -1
endi
if $data[5][0] != t1 then
return -1
endi
if $data[6][0] != t2 then
return -1
endi
if $data[7][0] != t3 then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT

View File

@ -0,0 +1,78 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "1234")
sql_error alter table db.stb MODIFY column c2 binary(3)
sql_error alter table db.stb MODIFY column c2 int
sql_error alter table db.stb MODIFY column c1 int
sql_error alter table db.stb MODIFY column ts int
sql_error insert into db.ctb values(now, 1, "12345")
print ========== step1 modify column
sql alter table db.stb MODIFY column c2 binary(5)
sql insert into db.ctb values(now, 1, "12345")
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 2 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 1234 then
return -1
endi
if $data[0][3] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[1][2] != 12345 then
return -1
endi
if $data[1][3] != 101 then
return -1
endi
print ========== step2 describe
sql describe db.ctb
if $rows != 7 then
return -1
endi
if $data[0][0] != ts then
return -1
endi
if $data[1][0] != c1 then
return -1
endi
if $data[2][0] != c2 then
return -1
endi
if $data[2][1] != VARCHAR then
return -1
endi
if $data[2][2] != 5 then
return -1
endi
if $data[3][0] != t1 then
return -1
endi
if $data[4][0] != t2 then
return -1
endi
if $data[5][0] != t3 then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT

View File

@ -0,0 +1,162 @@
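# Stream session-window test (intent inferred from the checks below): rows whose
# timestamps fall within 10s of a neighbor share one session window, and
# out-of-order inserts must update or merge the affected windows in streamt.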
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sleep 50
sql connect
print =============== create database
sql create database test vgroups 1
sql show databases
if $rows != 3 then
return -1
endi
print $data00 $data01 $data02
sql use test
sql create table t1(ts timestamp, a int, b int , c int, d double,id int);
sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), max(a), min(d), stddev(a), last(a), first(d), max(id) s from t1 session(ts,10s);
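# each streamt row is one session window: count(*), sum(a), max(a), min(d),
# stddev(a), last(a), first(d), and max(id) as s, where id marks the insert batch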
sql insert into t1 values(1648791213000,NULL,NULL,NULL,NULL,1);
sql insert into t1 values(1648791223001,10,2,3,1.1,2);
sql insert into t1 values(1648791233002,3,2,3,2.1,3);
sql insert into t1 values(1648791243003,NULL,NULL,NULL,NULL,4);
sql insert into t1 values(1648791213002,NULL,NULL,NULL,NULL,5) (1648791233012,NULL,NULL,NULL,NULL,6);
sql select * from streamt order by s desc;
# row 0
if $data01 != 3 then
print ======$data01
return -1
endi
if $data02 != 3 then
print ======$data02
return -1
endi
if $data03 != 3 then
print ======$data03
return -1
endi
if $data04 != 2.100000000 then
print ======$data04
return -1
endi
if $data05 != 0.000000000 then
print ======$data05
return -1
endi
if $data06 != 3 then
print ======$data06
return -1
endi
if $data07 != 2.100000000 then
print ======$data07
return -1
endi
if $data08 != 6 then
print ======$data08
return -1
endi
# row 1
if $data11 != 3 then
print ======$data11
return -1
endi
if $data12 != 10 then
print ======$data12
return -1
endi
if $data13 != 10 then
print ======$data13
return -1
endi
if $data14 != 1.100000000 then
print ======$data14
return -1
endi
if $data15 != 0.000000000 then
print ======$data15
return -1
endi
if $data16 != 10 then
print ======$data16
return -1
endi
if $data17 != 1.100000000 then
print ======$data17
return -1
endi
if $data18 != 5 then
print ======$data18
return -1
endi
sql insert into t1 values(1648791213000,1,2,3,1.0,7);
sql insert into t1 values(1648791223001,2,2,3,1.1,8);
sql insert into t1 values(1648791233002,3,2,3,2.1,9);
sql insert into t1 values(1648791243003,4,2,3,3.1,10);
sql insert into t1 values(1648791213002,4,2,3,4.1,11) ;
sql insert into t1 values(1648791213002,4,2,3,4.1,12) (1648791223009,4,2,3,4.1,13);
sql select * from streamt order by s desc ;
# row 0
if $data01 != 7 then
print ======$data01
return -1
endi
if $data02 != 9 then
print ======$data02
return -1
endi
if $data03 != 4 then
print ======$data03
return -1
endi
if $data04 != 1.100000000 then
print ======$data04
return -1
endi
if $data05 != 0.816496581 then
print ======$data05
return -1
endi
if $data06 != 3 then
print ======$data06
return -1
endi
if $data07 != 1.100000000 then
print ======$data07
return -1
endi
if $data08 != 13 then
print ======$data08
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT

View File

@ -0,0 +1,190 @@
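# Second session-window test: later batches deliberately bridge previously
# separate 10s sessions, so window counts and sums must be recomputed and
# re-merged after every insert.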
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sleep 50
sql connect
print =============== create database
sql create database test vgroups 1
sql show databases
if $rows != 3 then
return -1
endi
print $data00 $data01 $data02
sql use test
sql create table t1(ts timestamp, a int, b int , c int, d double,id int);
sql create stream streams2 trigger at_once into streamt as select _wstartts, count(*) c1, sum(a), min(b), max(id) s from t1 session(ts,10s);
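# s = max(id) records the most recent batch that touched a window; the checks
# below order the sessions by it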
sql insert into t1 values(1648791210000,1,1,1,1.1,1);
sql insert into t1 values(1648791220000,2,2,2,2.1,2);
sql insert into t1 values(1648791230000,3,3,3,3.1,3);
sql insert into t1 values(1648791240000,4,4,4,4.1,4);
sql select * from streamt order by s desc;
# row 0
if $data01 != 4 then
print ======$data01
return -1
endi
if $data02 != 10 then
print ======$data02
return -1
endi
if $data03 != 1 then
print ======$data03
return -1
endi
if $data04 != 4 then
print ======$data04
return -1
endi
sql insert into t1 values(1648791250005,5,5,5,5.1,5);
sql insert into t1 values(1648791260006,6,6,6,6.1,6);
sql insert into t1 values(1648791270007,7,7,7,7.1,7);
sql insert into t1 values(1648791240005,5,5,5,5.1,8) (1648791250006,6,6,6,6.1,9);
sql select * from streamt order by s desc;
# row 0
if $data01 != 8 then
print ======$data01
return -1
endi
if $data02 != 32 then
print ======$data02
return -1
endi
if $data03 != 1 then
print ======$data03
return -1
endi
if $data04 != 9 then
print ======$data04
return -1
endi
# row 1
if $data11 != 1 then
print ======$data11
return -1
endi
if $data12 != 7 then
print ======$data12
return -1
endi
if $data13 != 7 then
print ======$data13
return -1
endi
if $data14 != 7 then
print ======$data14
return -1
endi
sql insert into t1 values(1648791280008,7,7,7,7.1,10) (1648791300009,8,8,8,8.1,11);
sql insert into t1 values(1648791260007,7,7,7,7.1,12) (1648791290008,7,7,7,7.1,13) (1648791290009,8,8,8,8.1,14);
sql insert into t1 values(1648791500000,7,7,7,7.1,15) (1648791520000,8,8,8,8.1,16) (1648791540000,8,8,8,8.1,17);
sql insert into t1 values(1648791530000,8,8,8,8.1,18);
sql insert into t1 values(1648791220000,10,10,10,10.1,19) (1648791290008,2,2,2,2.1,20) (1648791540000,17,17,17,17.1,21) (1648791500001,22,22,22,22.1,22);
sql select * from streamt order by s desc;
# row 0
if $data01 != 2 then
print ======$data01
return -1
endi
if $data02 != 29 then
print ======$data02
return -1
endi
if $data03 != 7 then
print ======$data03
return -1
endi
if $data04 != 22 then
print ======$data04
return -1
endi
# row 1
if $data11 != 3 then
print ======$data11
return -1
endi
if $data12 != 33 then
print ======$data12
return -1
endi
if $data13 != 8 then
print ======$data13
return -1
endi
if $data14 != 21 then
print ======$data14
return -1
endi
# row 2
if $data21 != 4 then
print ======$data21
return -1
endi
if $data22 != 25 then
print ======$data22
return -1
endi
if $data23 != 2 then
print ======$data23
return -1
endi
if $data24 != 20 then
print ======$data24
return -1
endi
# row 3
if $data31 != 10 then
print ======$data31
return -1
endi
if $data32 != 54 then
print ======$data32
return -1
endi
if $data33 != 1 then
print ======$data33
return -1
endi
if $data34 != 19 then
print ======$data34
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT

View File

@ -20,6 +20,8 @@ print $data[1][0] $data[1][1] $data[1][2] $data[1][3]
if $rows == 2 then
if $data[1][1] == stop then
goto end_insert
elif $data[0][1] == stop then
goto end_insert
endi
endi
@ -47,6 +49,9 @@ endw
if $loop_cnt == 0 then
print ====> notify main to working for insert data
sql insert into interaction values (now, 'working', 0, 0);
sql select * from interaction
print $data[0][0] $data[0][1] $data[0][2] $data[0][3]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3]
endi
$loop_cnt = $loop_cnt + 1
goto loop_insert

View File

@ -155,28 +155,13 @@ while $i < $tbNum
sql create table $ctb using stb tags( $i )
$ntb = $ntbPrefix . $i
sql create table $ntb (ts timestamp, c1 int, c2 float, c3 binary(10))
# $x = 0
# while $x < $rowNum
# $binary = ' . binary
# $binary = $binary . $i
# $binary = $binary . '
#
# sql insert into $ctb values ($tstart , $i , $x , $binary )
# sql insert into $ntb values ($tstart , 999 , 999 , 'binary-ntb' )
# $tstart = $tstart + 1
# $x = $x + 1
# endw
# print ====> insert rows: $rowNum into $ctb and $ntb
$i = $i + 1
# $tstart = 1640966400000
endw
$totalTblNum = $tbNum * 2
print ====>totalTblNum:$totalTblNum
sleep 1000
sql show tables
print ====> expect $totalTblNum tables, and in fact $rows
if $rows != $totalTblNum then
return -1
endi
@ -222,6 +207,9 @@ endi
$dnodeId = dnode . $dnodeId
print ====> stop $dnodeId
system sh/exec.sh -n $dnodeId -s stop -x SIGINT
sleep 1000
print ====> start $dnodeId
system sh/exec.sh -n $dnodeId -s start
$loop_cnt = 0
check_vg_ready_2:
@ -245,7 +233,7 @@ if $data[0][4] == LEADER then
if $data[0][8] != FOLLOWER then
goto check_vg_ready_2
endi
print ---- vgroup $data[0][0] leader switch to dnode $data[0][3]
print ---- vgroup $dnodeId leader switch to dnode $data[0][3]
goto vg_ready_2
elif $data[0][6] == LEADER then
if $data[0][4] != FOLLOWER then
@ -254,7 +242,7 @@ elif $data[0][6] == LEADER then
if $data[0][8] != FOLLOWER then
goto check_vg_ready_2
endi
print ---- vgroup $data[0][0] leader switch to dnode $data[0][5]
print ---- vgroup $dnodeId leader switch to dnode $data[0][5]
goto vg_ready_2
elif $data[0][8] == LEADER then
if $data[0][4] != FOLLOWER then
@ -263,7 +251,7 @@ elif $data[0][8] == LEADER then
if $data[0][6] != FOLLOWER then
goto check_vg_ready_2
endi
print ---- vgroup $data[0][0] leader switch to dnode $data[0][7]
print ---- vgroup $dnodeId leader switch to dnode $data[0][7]
goto vg_ready_2
else
goto check_vg_ready_2
@ -272,8 +260,6 @@ vg_ready_2:
$switch_loop_cnt = $switch_loop_cnt + 1
if $switch_loop_cnt < 3 then
print ====> start $dnodeId
system sh/exec.sh -n $dnodeId -s start
goto switch_leader_loop
endi

View File

@ -134,7 +134,7 @@ class TDTestCase:
def create_udf_function(self):
for i in range(10):
for i in range(5):
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
@ -644,16 +644,12 @@ class TDTestCase:
self.create_udf_function()
self.basic_udf_query()
self.loop_kill_udfd()
self.unexpected_create()
tdSql.execute(" drop function udf1 ")
tdSql.execute(" drop function udf2 ")
self.create_udf_function()
time.sleep(2)
self.basic_udf_query()
self.test_function_name()
self.restart_taosd_query_udf()
def stop(self):

View File

@ -0,0 +1,654 @@
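# UDF regression test (summary of the cases below): locate and copy libudf1.so
# and libudf2.so, register the scalar udf1 and aggregate udf2, then check their
# results standalone, in joins, with group by, and in combinations that must fail.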
from distutils.log import error
import taos
import sys
import time
import os
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
import subprocess
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), logSql)
def getBuildPath(self):
buildPath = ""  # default so the caller's buildPath == "" check holds when taosd is not found
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def prepare_udf_so(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
print(projPath)
libudf1 = subprocess.Popen('find %s -name "libudf1.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
libudf2 = subprocess.Popen('find %s -name "libudf2.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
os.system("mkdir /tmp/udf/")
os.system("cp %s /tmp/udf/ "%libudf1.replace("\n" ,""))
os.system("cp %s /tmp/udf/ "%libudf2.replace("\n" ,""))
def prepare_data(self):
tdSql.execute("drop database if exists db ")
tdSql.execute("create database if not exists db days 300")
tdSql.execute("use db")
tdSql.execute(
'''create table stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t1 int)
'''
)
tdSql.execute(
'''
create table t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(4):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
for i in range(9):
tdSql.execute(
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
'''
)
tdSql.execute("create table tb (ts timestamp , num1 int , num2 int, num3 double , num4 binary(30))")
tdSql.execute(
f'''insert into tb values
( '2020-04-21 01:01:01.000', NULL, 1, 1, "binary1" )
( '2020-10-21 01:01:01.000', 1, 1, 1.11, "binary1" )
( '2020-12-31 01:01:01.000', 2, 22222, 22, "binary1" )
( '2021-01-01 01:01:06.000', 3, 33333, 33, "binary1" )
( '2021-05-07 01:01:10.000', 4, 44444, 44, "binary1" )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
( '2021-09-30 01:01:16.000', 5, 55555, 55, "binary1" )
( '2022-02-01 01:01:20.000', 6, 66666, 66, "binary1" )
( '2022-10-28 01:01:26.000', 0, 00000, 00, "binary1" )
( '2022-12-01 01:01:30.000', 8, -88888, -88, "binary1" )
( '2022-12-31 01:01:36.000', 9, -9999999, -99, "binary1" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
'''
)
# udf functions with join
ts_start = 1652517451000
tdSql.execute("create stable st (ts timestamp , c1 int , c2 int ,c3 double ,c4 double ) tags(ind int)")
tdSql.execute("create table sub1 using st tags(1)")
tdSql.execute("create table sub2 using st tags(2)")
for i in range(10):
ts = ts_start + i *1000
tdSql.execute(" insert into sub1 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
tdSql.execute(" insert into sub2 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
def create_udf_function(self):
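# register and drop the same udf pair several times; repeated create/drop must
# leave "show functions" consistent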
for i in range(5):
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 2:
tdLog.info("create two udf functions success ")
# drop functions
tdSql.execute("drop function udf1")
tdSql.execute("drop function udf2")
functions = tdSql.getResult("show functions")
for function in functions:
if "udf1" in function[0] or "udf2" in function[0]:
tdLog.info("drop udf functions failed ")
tdLog.exit("drop udf functions failed")
tdLog.info("drop two udf functions success ")
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 2:
tdLog.info("create two udf functions success ")
def basic_udf_query(self):
# scalar functions
tdSql.execute("use db ")
tdSql.query("select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,1)
tdSql.checkData(0,3,88)
tdSql.checkData(0,4,1.000000000)
tdSql.checkData(0,5,88)
tdSql.checkData(0,6,"binary1")
tdSql.checkData(0,7,88)
tdSql.checkData(3,0,3)
tdSql.checkData(3,1,88)
tdSql.checkData(3,2,33333)
tdSql.checkData(3,3,88)
tdSql.checkData(3,4,33.000000000)
tdSql.checkData(3,5,88)
tdSql.checkData(3,6,"binary1")
tdSql.checkData(3,7,88)
tdSql.checkData(11,0,None)
tdSql.checkData(11,1,None)
tdSql.checkData(11,2,None)
tdSql.checkData(11,3,None)
tdSql.checkData(11,4,None)
tdSql.checkData(11,5,None)
tdSql.checkData(11,6,"binary1")
tdSql.checkData(11,7,88)
tdSql.query("select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,None)
tdSql.checkData(0,3,None)
tdSql.checkData(0,4,None)
tdSql.checkData(0,5,None)
tdSql.checkData(0,6,None)
tdSql.checkData(0,7,None)
tdSql.checkData(20,0,8)
tdSql.checkData(20,1,88)
tdSql.checkData(20,2,88888)
tdSql.checkData(20,3,88)
tdSql.checkData(20,4,888)
tdSql.checkData(20,5,88)
tdSql.checkData(20,6,88)
tdSql.checkData(20,7,88)
# aggregate functions
tdSql.query("select udf2(num1) ,udf2(num2), udf2(num3) from tb")
tdSql.checkData(0,0,15.362291496)
tdSql.checkData(0,1,10000949.553189287)
tdSql.checkData(0,2,168.633425216)
# Arithmetic compute
tdSql.query("select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb")
tdSql.checkData(0,0,115.362291496)
tdSql.checkData(0,1,10000849.553189287)
tdSql.checkData(0,2,16863.342521576)
tdSql.checkData(0,3,1.686334252)
tdSql.query("select udf2(c1) ,udf2(c6) from stb1 ")
tdSql.checkData(0,0,25.514701644)
tdSql.checkData(0,1,265.247614504)
tdSql.query("select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ")
tdSql.checkData(0,0,125.514701644)
tdSql.checkData(0,1,165.247614504)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# regression check: this query used to crash when run against a sub table
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1")
tdSql.checkData(0,0,378.215547010)
tdSql.checkData(0,1,353.808067460)
tdSql.checkData(0,2,2114.237451187)
tdSql.checkData(0,3,2.125468151)
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ")
tdSql.checkData(0,0,490.358032462)
tdSql.checkData(0,1,400.460106627)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# regular table with aggregate functions
tdSql.error("select udf1(num1) , count(num1) from tb;")
tdSql.error("select udf1(num1) , avg(num1) from tb;")
tdSql.error("select udf1(num1) , twa(num1) from tb;")
tdSql.error("select udf1(num1) , irate(num1) from tb;")
tdSql.error("select udf1(num1) , sum(num1) from tb;")
tdSql.error("select udf1(num1) , stddev(num1) from tb;")
tdSql.error("select udf1(num1) , mode(num1) from tb;")
tdSql.error("select udf1(num1) , HYPERLOGLOG(num1) from tb;")
# stable
tdSql.error("select udf1(c1) , count(c1) from stb1;")
tdSql.error("select udf1(c1) , avg(c1) from stb1;")
tdSql.error("select udf1(c1) , twa(c1) from stb1;")
tdSql.error("select udf1(c1) , irate(c1) from stb1;")
tdSql.error("select udf1(c1) , sum(c1) from stb1;")
tdSql.error("select udf1(c1) , stddev(c1) from stb1;")
tdSql.error("select udf1(c1) , mode(c1) from stb1;")
tdSql.error("select udf1(c1) , HYPERLOGLOG(c1) from stb1;")
# regular table with select functions
tdSql.query("select udf1(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select floor(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select ceil(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.error("select udf1(num1) , first(num1) from tb;")
tdSql.error("select abs(num1) , first(num1) from tb;")
tdSql.error("select udf1(num1) , last(num1) from tb;")
tdSql.error("select round(num1) , last(num1) from tb;")
tdSql.query("select udf1(num1) , top(num1,1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , bottom(num1,1) from tb;")
tdSql.checkRows(1)
tdSql.error("select udf1(num1) , last_row(num1) from tb;")
tdSql.error("select round(num1) , last_row(num1) from tb;")
# stable
tdSql.query("select udf1(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select floor(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.error("select udf1(c1) , first(c1) from stb1;")
tdSql.error("select udf1(c1) , last(c1) from stb1;")
tdSql.query("select udf1(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select ceil(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
tdSql.error("select udf1(c1) , last_row(c1) from stb1;")
tdSql.error("select ceil(c1) , last_row(c1) from stb1;")
# regular table with compute functions
tdSql.query("select udf1(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
tdSql.query("select floor(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
# bug, needs fix: the csum queries below stay disabled until it is resolved
#tdSql.query("select udf1(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select ceil(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select udf1(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
#tdSql.query("select floor(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
# stable with compute functions
tdSql.query("select udf1(c1) , abs(c1) from stb1;")
tdSql.checkRows(25)
tdSql.query("select abs(c1) , ceil(c1) from stb1;")
tdSql.checkRows(25)
# nest query
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;")
tdSql.checkRows(25)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,8)
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;")
tdSql.checkRows(13)
tdSql.checkData(0,0,88)
tdSql.checkData(0,1,8)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,7)
# regression check for a crash: group by an expression over a udf result,
# repeated 50 times to catch intermittent failures
for _ in range(50):
tdSql.query("select udf2(c1) from stb1 group by 1-udf1(c1)")
print(tdSql.queryResult)
# udf functions with filter
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;")
tdSql.checkRows(3)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.query("select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts")
tdSql.checkRows(3)
tdSql.checkData(0,0,9)
tdSql.checkData(0,1,88)
tdSql.checkData(0,2,-99.990000000)
tdSql.checkData(0,3,88)
tdSql.query("select sub1.c1, sub2.c2 from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,0)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,10)
tdSql.query("select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,88)
tdSql.checkData(0,1,88)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,88)
tdSql.query("select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,88)
tdSql.checkData(0,2,0)
tdSql.checkData(0,3,88)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,88)
tdSql.checkData(1,2,10)
tdSql.checkData(1,3,88)
tdSql.query("select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,16.881943016)
tdSql.checkData(0,1,168.819430161)
tdSql.error("select sub1.c1 , udf2(sub1.c1), sub2.c2 ,udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
# udf functions with group by
tdSql.query("select udf1(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf1(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf1(c1,c2) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf1(c1,c2) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf2(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf2(c1,c6) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf2(c1,c6) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from stb1 group by udf1(c1)")
tdSql.checkRows(2)
tdSql.query("select udf2(c1) from stb1 group by floor(c1)")
tdSql.checkRows(11)
# udf mix with order by
tdSql.query("select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)")
tdSql.checkRows(11)
def multi_cols_udf(self):
tdSql.query("select num1,num2,num3,udf1(num1,num2,num3) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,1)
tdSql.checkData(0,2,1.000000000)
tdSql.checkData(0,3,None)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,1)
tdSql.checkData(1,2,1.110000000)
tdSql.checkData(1,3,88)
tdSql.query("select c1,c6,udf1(c1,c6) from stb1 order by ts")
tdSql.checkData(1,0,8)
tdSql.checkData(1,1,88.880000000)
tdSql.checkData(1,2,88)
tdSql.query("select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;")
tdSql.checkRows(22)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
def try_query_sql(self):
udf1_sqls = [
"select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb" ,
"select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1" ,
"select udf1(num1) , max(num1) from tb;" ,
"select udf1(num1) , min(num1) from tb;" ,
#"select udf1(num1) , top(num1,1) from tb;" ,
#"select udf1(num1) , bottom(num1,1) from tb;" ,
"select udf1(c1) , max(c1) from stb1;" ,
"select udf1(c1) , min(c1) from stb1;" ,
#"select udf1(c1) , top(c1 ,1) from stb1;" ,
#"select udf1(c1) , bottom(c1,1) from stb1;" ,
"select udf1(num1) , abs(num1) from tb;" ,
#"select udf1(num1) , csum(num1) from tb;" ,
#"select udf1(c1) , csum(c1) from stb1;" ,
"select udf1(c1) , abs(c1) from stb1;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;" ,
"select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts" ,
"select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf1(c1) from ct1 group by c1" ,
"select udf1(c1) from stb1 group by c1" ,
"select c1,c2, udf1(c1,c2) from ct1 group by c1,c2" ,
"select c1,c2, udf1(c1,c2) from stb1 group by c1,c2" ,
"select num1,num2,num3,udf1(num1,num2,num3) from tb" ,
"select c1,c6,udf1(c1,c6) from stb1 order by ts" ,
"select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;"
]
udf2_sqls = ["select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(c1) from stb1 group by 1-udf1(c1)" ,
"select udf2(num1) ,udf2(num2), udf2(num3) from tb" ,
"select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb" ,
"select udf2(c1) ,udf2(c6) from stb1 " ,
"select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 " ,
"select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1" ,
"select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 " ,
"select udf2(c1) from ct1 group by c1" ,
"select udf2(c1) from stb1 group by c1" ,
"select c1,c2, udf2(c1,c6) from ct1 group by c1,c2" ,
"select c1,c2, udf2(c1,c6) from stb1 group by c1,c2" ,
"select udf2(c1) from stb1 group by udf1(c1)" ,
"select udf2(c1) from stb1 group by floor(c1)" ,
"select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null"]
return udf1_sqls ,udf2_sqls
def unexpected_create(self):
tdLog.info(" create function with out bufsize ")
tdSql.query("drop function udf1 ")
tdSql.query("drop function udf2 ")
# create function without buffer
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int")
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double")
udf1_sqls ,udf2_sqls = self.try_query_sql()
for scalar_sql in udf1_sqls:
tdSql.query(scalar_sql)
for aggregate_sql in udf2_sqls:
tdSql.error(aggregate_sql)
# create functions with the scalar/aggregate kinds swapped
tdLog.info(" create functions with scalar/aggregate kinds swapped ")
tdSql.query("drop function udf1 ")
tdSql.query("drop function udf2 ")
# udf1 is declared aggregate and udf2 scalar -- both deliberately the wrong kind
tdSql.execute("create aggregate function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
tdSql.execute("create function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
udf1_sqls ,udf2_sqls = self.try_query_sql()
for scalar_sql in udf1_sqls:
tdSql.error(scalar_sql)
for aggregate_sql in udf2_sqls:
tdSql.error(aggregate_sql)
tdSql.execute(" create function db as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
tdSql.execute(" create aggregate function test as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
tdSql.error(" select db(c1) from stb1 ")
tdSql.error(" select db(c1,c6), db(c6) from stb1 ")
tdSql.error(" select db(num1,num2), db(num1) from tb ")
tdSql.error(" select test(c1) from stb1 ")
tdSql.error(" select test(c1,c6), test(c6) from stb1 ")
tdSql.error(" select test(num1,num2), test(num1) from tb ")
def loop_kill_udfd(self):
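# kill -9 the udfd helper and immediately re-run an aggregate udf query; the
# assumption here is that taosd respawns udfd, so the results must be unchanged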
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
cfgPath = buildPath + "/../sim/dnode1/cfg"
udfdPath = buildPath +'/build/bin/udfd'
for i in range(3):
tdLog.info(" loop restart udfd %d_th" % i)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
# stop udfd cmds
get_processID = "ps -ef | grep -w udfd | grep -v grep| grep -v defunct | awk '{print $2}'"
processID = subprocess.check_output(get_processID, shell=True).decode("utf-8")
stop_udfd = " kill -9 %s" % processID
os.system(stop_udfd)
time.sleep(2)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
# # start udfd cmds
# start_udfd = "nohup " + udfdPath +'-c' +cfgPath +" > /dev/null 2>&1 &"
# tdLog.info("start udfd : %s " % start_udfd)
def test_function_name(self):
tdLog.info(" create function name is not build_in functions ")
tdSql.execute(" drop function udf1 ")
tdSql.execute(" drop function udf2 ")
tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function tbname as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function function as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function stable as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function union as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function 123 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function 123db as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function mnode as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
def restart_taosd_query_udf(self):
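# restart taosd repeatedly; registered udfs must survive each restart and keep
# returning the same aggregate values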
self.create_udf_function()
for i in range(5):
tdLog.info(" this is %d_th restart taosd " %i)
tdSql.execute("use db ")
tdSql.query("select count(*) from stb1")
tdSql.checkRows(1)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
tdDnodes.stop(1)
tdDnodes.start(1)
time.sleep(2)
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
print(" env is ok for all ")
self.prepare_udf_so()
self.prepare_data()
self.create_udf_function()
self.basic_udf_query()
self.unexpected_create()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -0,0 +1,654 @@
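# UDF regression test (summary of the cases below): locate and copy libudf1.so
# and libudf2.so, register the scalar udf1 and aggregate udf2, then check their
# results standalone, in joins, with group by, and in combinations that must fail.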
from distutils.log import error
import taos
import sys
import time
import os
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
import subprocess
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), logSql)
def getBuildPath(self):
buildPath = ""  # default so the caller's buildPath == "" check holds when taosd is not found
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def prepare_udf_so(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
print(projPath)
libudf1 = subprocess.Popen('find %s -name "libudf1.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
libudf2 = subprocess.Popen('find %s -name "libudf2.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
os.system("mkdir /tmp/udf/")
os.system("cp %s /tmp/udf/ "%libudf1.replace("\n" ,""))
os.system("cp %s /tmp/udf/ "%libudf2.replace("\n" ,""))
def prepare_data(self):
tdSql.execute("drop database if exists db ")
tdSql.execute("create database if not exists db days 300")
tdSql.execute("use db")
tdSql.execute(
'''create table stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t1 int)
'''
)
tdSql.execute(
'''
create table t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(4):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
for i in range(9):
tdSql.execute(
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
'''
)
tdSql.execute("create table tb (ts timestamp , num1 int , num2 int, num3 double , num4 binary(30))")
tdSql.execute(
f'''insert into tb values
( '2020-04-21 01:01:01.000', NULL, 1, 1, "binary1" )
( '2020-10-21 01:01:01.000', 1, 1, 1.11, "binary1" )
( '2020-12-31 01:01:01.000', 2, 22222, 22, "binary1" )
( '2021-01-01 01:01:06.000', 3, 33333, 33, "binary1" )
( '2021-05-07 01:01:10.000', 4, 44444, 44, "binary1" )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
( '2021-09-30 01:01:16.000', 5, 55555, 55, "binary1" )
( '2022-02-01 01:01:20.000', 6, 66666, 66, "binary1" )
( '2022-10-28 01:01:26.000', 0, 00000, 00, "binary1" )
( '2022-12-01 01:01:30.000', 8, -88888, -88, "binary1" )
( '2022-12-31 01:01:36.000', 9, -9999999, -99, "binary1" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
'''
)
# udf functions with join
ts_start = 1652517451000
tdSql.execute("create stable st (ts timestamp , c1 int , c2 int ,c3 double ,c4 double ) tags(ind int)")
tdSql.execute("create table sub1 using st tags(1)")
tdSql.execute("create table sub2 using st tags(2)")
for i in range(10):
ts = ts_start + i *1000
tdSql.execute(" insert into sub1 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
tdSql.execute(" insert into sub2 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
def create_udf_function(self):
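# register and drop the same udf pair several times; repeated create/drop must
# leave "show functions" consistent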
for i in range(5):
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 2:
tdLog.info("create two udf functions success ")
# drop functions
tdSql.execute("drop function udf1")
tdSql.execute("drop function udf2")
functions = tdSql.getResult("show functions")
for function in functions:
if "udf1" in function[0] or "udf2" in function[0]:
tdLog.info("drop udf functions failed ")
tdLog.exit("drop udf functions failed")
tdLog.info("drop two udf functions success ")
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 2:
tdLog.info("create two udf functions success ")
def basic_udf_query(self):
# scalar functions
tdSql.execute("use db ")
tdSql.query("select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,1)
tdSql.checkData(0,3,88)
tdSql.checkData(0,4,1.000000000)
tdSql.checkData(0,5,88)
tdSql.checkData(0,6,"binary1")
tdSql.checkData(0,7,88)
tdSql.checkData(3,0,3)
tdSql.checkData(3,1,88)
tdSql.checkData(3,2,33333)
tdSql.checkData(3,3,88)
tdSql.checkData(3,4,33.000000000)
tdSql.checkData(3,5,88)
tdSql.checkData(3,6,"binary1")
tdSql.checkData(3,7,88)
tdSql.checkData(11,0,None)
tdSql.checkData(11,1,None)
tdSql.checkData(11,2,None)
tdSql.checkData(11,3,None)
tdSql.checkData(11,4,None)
tdSql.checkData(11,5,None)
tdSql.checkData(11,6,"binary1")
tdSql.checkData(11,7,88)
tdSql.query("select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,None)
tdSql.checkData(0,3,None)
tdSql.checkData(0,4,None)
tdSql.checkData(0,5,None)
tdSql.checkData(0,6,None)
tdSql.checkData(0,7,None)
tdSql.checkData(20,0,8)
tdSql.checkData(20,1,88)
tdSql.checkData(20,2,88888)
tdSql.checkData(20,3,88)
tdSql.checkData(20,4,888)
tdSql.checkData(20,5,88)
tdSql.checkData(20,6,88)
tdSql.checkData(20,7,88)
# aggregate functions
tdSql.query("select udf2(num1) ,udf2(num2), udf2(num3) from tb")
tdSql.checkData(0,0,15.362291496)
tdSql.checkData(0,1,10000949.553189287)
tdSql.checkData(0,2,168.633425216)
# Arithmetic compute
tdSql.query("select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb")
tdSql.checkData(0,0,115.362291496)
tdSql.checkData(0,1,10000849.553189287)
tdSql.checkData(0,2,16863.342521576)
tdSql.checkData(0,3,1.686334252)
tdSql.query("select udf2(c1) ,udf2(c6) from stb1 ")
tdSql.checkData(0,0,25.514701644)
tdSql.checkData(0,1,265.247614504)
tdSql.query("select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ")
tdSql.checkData(0,0,125.514701644)
tdSql.checkData(0,1,165.247614504)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# regression check: this query used to crash when run against a sub table
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1")
tdSql.checkData(0,0,378.215547010)
tdSql.checkData(0,1,353.808067460)
tdSql.checkData(0,2,2114.237451187)
tdSql.checkData(0,3,2.125468151)
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ")
tdSql.checkData(0,0,490.358032462)
tdSql.checkData(0,1,400.460106627)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# regular table with aggregate functions
tdSql.error("select udf1(num1) , count(num1) from tb;")
tdSql.error("select udf1(num1) , avg(num1) from tb;")
tdSql.error("select udf1(num1) , twa(num1) from tb;")
tdSql.error("select udf1(num1) , irate(num1) from tb;")
tdSql.error("select udf1(num1) , sum(num1) from tb;")
tdSql.error("select udf1(num1) , stddev(num1) from tb;")
tdSql.error("select udf1(num1) , mode(num1) from tb;")
tdSql.error("select udf1(num1) , HYPERLOGLOG(num1) from tb;")
# stable
tdSql.error("select udf1(c1) , count(c1) from stb1;")
tdSql.error("select udf1(c1) , avg(c1) from stb1;")
tdSql.error("select udf1(c1) , twa(c1) from stb1;")
tdSql.error("select udf1(c1) , irate(c1) from stb1;")
tdSql.error("select udf1(c1) , sum(c1) from stb1;")
tdSql.error("select udf1(c1) , stddev(c1) from stb1;")
tdSql.error("select udf1(c1) , mode(c1) from stb1;")
tdSql.error("select udf1(c1) , HYPERLOGLOG(c1) from stb1;")
# regular table with select functions
tdSql.query("select udf1(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select floor(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select ceil(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.error("select udf1(num1) , first(num1) from tb;")
tdSql.error("select abs(num1) , first(num1) from tb;")
tdSql.error("select udf1(num1) , last(num1) from tb;")
tdSql.error("select round(num1) , last(num1) from tb;")
tdSql.query("select udf1(num1) , top(num1,1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , bottom(num1,1) from tb;")
tdSql.checkRows(1)
tdSql.error("select udf1(num1) , last_row(num1) from tb;")
tdSql.error("select round(num1) , last_row(num1) from tb;")
# stable
tdSql.query("select udf1(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select floor(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.error("select udf1(c1) , first(c1) from stb1;")
tdSql.error("select udf1(c1) , last(c1) from stb1;")
tdSql.query("select udf1(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select ceil(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
tdSql.error("select udf1(c1) , last_row(c1) from stb1;")
tdSql.error("select ceil(c1) , last_row(c1) from stb1;")
# regular table with compute functions
tdSql.query("select udf1(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
tdSql.query("select floor(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
# bug, needs fix: the csum queries below stay disabled until it is resolved
#tdSql.query("select udf1(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select ceil(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select udf1(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
#tdSql.query("select floor(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
# stable with compute functions
tdSql.query("select udf1(c1) , abs(c1) from stb1;")
tdSql.checkRows(25)
tdSql.query("select abs(c1) , ceil(c1) from stb1;")
tdSql.checkRows(25)
# nest query
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;")
tdSql.checkRows(25)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,8)
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;")
tdSql.checkRows(13)
tdSql.checkData(0,0,88)
tdSql.checkData(0,1,8)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,7)
# regression check for a crash: group by an expression over a udf result,
# repeated 50 times to catch intermittent failures
for _ in range(50):
tdSql.query("select udf2(c1) from stb1 group by 1-udf1(c1)")
print(tdSql.queryResult)
# udf functions with filter
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;")
tdSql.checkRows(3)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.query("select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts")
tdSql.checkRows(3)
tdSql.checkData(0,0,9)
tdSql.checkData(0,1,88)
tdSql.checkData(0,2,-99.990000000)
tdSql.checkData(0,3,88)
tdSql.query("select sub1.c1, sub2.c2 from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,0)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,10)
tdSql.query("select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,88)
tdSql.checkData(0,1,88)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,88)
tdSql.query("select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,88)
tdSql.checkData(0,2,0)
tdSql.checkData(0,3,88)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,88)
tdSql.checkData(1,2,10)
tdSql.checkData(1,3,88)
tdSql.query("select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,16.881943016)
tdSql.checkData(0,1,168.819430161)
tdSql.error("select sub1.c1 , udf2(sub1.c1), sub2.c2 ,udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
# udf functions with group by
tdSql.query("select udf1(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf1(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf1(c1,c2) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf1(c1,c2) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf2(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf2(c1,c6) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf2(c1,c6) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from stb1 group by udf1(c1)")
tdSql.checkRows(2)
tdSql.query("select udf2(c1) from stb1 group by floor(c1)")
tdSql.checkRows(11)
# udf mix with order by
tdSql.query("select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)")
tdSql.checkRows(11)
def multi_cols_udf(self):
tdSql.query("select num1,num2,num3,udf1(num1,num2,num3) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,1)
tdSql.checkData(0,2,1.000000000)
tdSql.checkData(0,3,None)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,1)
tdSql.checkData(1,2,1.110000000)
tdSql.checkData(1,3,88)
tdSql.query("select c1,c6,udf1(c1,c6) from stb1 order by ts")
tdSql.checkData(1,0,8)
tdSql.checkData(1,1,88.880000000)
tdSql.checkData(1,2,88)
tdSql.query("select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;")
tdSql.checkRows(22)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
def try_query_sql(self):
udf1_sqls = [
"select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb" ,
"select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1" ,
"select udf1(num1) , max(num1) from tb;" ,
"select udf1(num1) , min(num1) from tb;" ,
#"select udf1(num1) , top(num1,1) from tb;" ,
#"select udf1(num1) , bottom(num1,1) from tb;" ,
"select udf1(c1) , max(c1) from stb1;" ,
"select udf1(c1) , min(c1) from stb1;" ,
#"select udf1(c1) , top(c1 ,1) from stb1;" ,
#"select udf1(c1) , bottom(c1,1) from stb1;" ,
"select udf1(num1) , abs(num1) from tb;" ,
#"select udf1(num1) , csum(num1) from tb;" ,
#"select udf1(c1) , csum(c1) from stb1;" ,
"select udf1(c1) , abs(c1) from stb1;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;" ,
"select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;" ,
"select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts" ,
"select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf1(c1) from ct1 group by c1" ,
"select udf1(c1) from stb1 group by c1" ,
"select c1,c2, udf1(c1,c2) from ct1 group by c1,c2" ,
"select c1,c2, udf1(c1,c2) from stb1 group by c1,c2" ,
"select num1,num2,num3,udf1(num1,num2,num3) from tb" ,
"select c1,c6,udf1(c1,c6) from stb1 order by ts" ,
"select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;"
]
udf2_sqls = ["select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(c1) from stb1 group by 1-udf1(c1)" ,
"select udf2(num1) ,udf2(num2), udf2(num3) from tb" ,
"select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb" ,
"select udf2(c1) ,udf2(c6) from stb1 " ,
"select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 " ,
"select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1" ,
"select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 " ,
"select udf2(c1) from ct1 group by c1" ,
"select udf2(c1) from stb1 group by c1" ,
"select c1,c2, udf2(c1,c6) from ct1 group by c1,c2" ,
"select c1,c2, udf2(c1,c6) from stb1 group by c1,c2" ,
"select udf2(c1) from stb1 group by udf1(c1)" ,
"select udf2(c1) from stb1 group by floor(c1)" ,
"select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
"select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null"]
return udf1_sqls ,udf2_sqls
def unexpected_create(self):
tdLog.info(" create function with out bufsize ")
tdSql.query("drop function udf1 ")
tdSql.query("drop function udf2 ")
# create function without buffer
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int")
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double")
udf1_sqls ,udf2_sqls = self.try_query_sql()
for scalar_sql in udf1_sqls:
tdSql.query(scalar_sql)
for aggregate_sql in udf2_sqls:
tdSql.error(aggregate_sql)
# create functions with scalar/aggregate types swapped
tdLog.info(" create function with mismatched scalar/aggregate type ")
tdSql.query("drop function udf1 ")
tdSql.query("drop function udf2 ")
# declare the scalar udf1 as aggregate and the aggregate udf2 as scalar
tdSql.execute("create aggregate function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
tdSql.execute("create function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
udf1_sqls ,udf2_sqls = self.try_query_sql()
for scalar_sql in udf1_sqls:
tdSql.error(scalar_sql)
for aggregate_sql in udf2_sqls:
tdSql.error(aggregate_sql)
tdSql.execute(" create function db as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
tdSql.execute(" create aggregate function test as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
tdSql.error(" select db(c1) from stb1 ")
tdSql.error(" select db(c1,c6), db(c6) from stb1 ")
tdSql.error(" select db(num1,num2), db(num1) from tb ")
tdSql.error(" select test(c1) from stb1 ")
tdSql.error(" select test(c1,c6), test(c6) from stb1 ")
tdSql.error(" select test(num1,num2), test(num1) from tb ")
def loop_kill_udfd(self):
buildPath = self.getBuildPath()
if (buildPath == ""):
tdLog.exit("taosd not found!")
else:
tdLog.info("taosd found in %s" % buildPath)
cfgPath = buildPath + "/../sim/dnode1/cfg"
udfdPath = buildPath +'/build/bin/udfd'
for i in range(3):
tdLog.info(" loop restart udfd %d_th" % i)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
# stop the udfd process
get_processID = "ps -ef | grep -w udfd | grep -v grep| grep -v defunct | awk '{print $2}'"
processID = subprocess.check_output(get_processID, shell=True).decode("utf-8")
stop_udfd = " kill -9 %s" % processID
os.system(stop_udfd)
time.sleep(2)
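# udfd is expected to come back automatically (presumably respawned by
# taosd), so after a short wait the same UDF query should succeed again.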
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
# # start udfd cmds
# start_udfd = "nohup " + udfdPath + ' -c ' + cfgPath + " > /dev/null 2>&1 &"
# tdLog.info("start udfd : %s " % start_udfd)
def test_function_name(self):
tdLog.info(" create function name is not build_in functions ")
tdSql.execute(" drop function udf1 ")
tdSql.execute(" drop function udf2 ")
tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function tbname as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function function as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function stable as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function union as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function 123 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function 123db as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
tdSql.error("create aggregate function mnode as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
def restart_taosd_query_udf(self):
for i in range(3):
tdLog.info(" this is %d_th restart taosd " %i)
tdSql.execute("use db ")
tdSql.query("select count(*) from stb1")
tdSql.checkRows(1)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
tdDnodes.stop(1)
tdDnodes.start(1)
time.sleep(2)
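# UDF definitions and their query results are expected to survive a taosd
# restart; the three stop/start iterations above verify exactly that.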
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
print(" env is ok for all ")
self.prepare_udf_so()
self.prepare_data()
self.create_udf_function()
self.basic_udf_query()
self.multi_cols_udf()
self.restart_taosd_query_udf()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -294,7 +294,7 @@ class TDTestCase:
return
def test_case3(self):
self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 8, 1*10000)
self.taosBenchCreate("127.0.0.1","no","db1", "stb1", 1, 1, 1*10)
# self.taosBenchCreate("test209","no","db2", "stb2", 1, 8, 1*10000)
# self.taosBenchCreate("chenhaoran02","no","db1", "stb1", 1, 8, 1*10000)
@ -349,17 +349,17 @@ class TDTestCase:
# run case
def run(self):
# create database and tables.
self.test_case1()
tdLog.debug(" LIMIT test_case1 ............ [OK]")
# # create database and tables.
# self.test_case1()
# tdLog.debug(" LIMIT test_case1 ............ [OK]")
# # taosBenchmark create database and table
# self.test_case2()
# tdLog.debug(" LIMIT test_case2 ............ [OK]")
# # taosBenchmark: create database/table and insert data
# self.test_case3()
# tdLog.debug(" LIMIT test_case3 ............ [OK]")
# taosBenchmark: create database/table and insert data
self.test_case3()
tdLog.debug(" LIMIT test_case3 ............ [OK]")
# # test qnode

View File

@ -10,7 +10,7 @@
"result_file": "./insert_res.txt",
"confirm_parameter_prompt": "no",
"insert_interval": 0,
"interlace_rows": 100000,
"interlace_rows": 0,
"num_of_records_per_req": 100,
"databases": [
{
@ -29,8 +29,8 @@
"batch_create_tbl_num": 50000,
"data_source": "rand",
"insert_mode": "taosc",
"insert_rows": 10,
"interlace_rows": 100000,
"insert_rows": 1,
"interlace_rows": 0,
"insert_interval": 0,
"max_sql_len": 10000000,
"disorder_ratio": 0,

View File

@ -0,0 +1,106 @@
import taos
import sys
import os
import time
import datetime
import inspect
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
class TDTestCase:
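# raise every module's debug flag to 143 (presumably the most verbose level)
# so that a failing run leaves detailed logs behind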
updatecfgDict = {'debugFlag': 143 ,"cDebugFlag":143,"uDebugFlag":143 ,"rpcDebugFlag":143 , "tmrDebugFlag":143 ,
"jniDebugFlag":143 ,"simDebugFlag":143,"dDebugFlag":143, "dDebugFlag":143,"vDebugFlag":143,"mDebugFlag":143,"qDebugFlag":143,
"wDebugFlag":143,"sDebugFlag":143,"tsdbDebugFlag":143,"tqDebugFlag":143 ,"fsDebugFlag":143 ,"fnDebugFlag":143}
def init(self, conn, logSql):
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), True)
def prepare_datas(self):
tdSql.execute(
'''create table stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t1 int)
'''
)
tdSql.execute(
'''
create table t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(4):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
for i in range(9):
tdSql.execute(
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
'''
)
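# the t1 rows above deliberately mix NULL-only rows, all-zero rows, and
# extreme values (e.g. -99999999999999999) so that the sum checks below
# exercise boundary cases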
def restart_taosd_query_sum(self):
for i in range(5):
tdLog.info(" this is %d_th restart taosd " %i)
os.system("taos -s ' use db ;select c6 from stb1 ; '")
tdSql.execute("use db ")
tdSql.query("select count(*) from stb1")
tdSql.checkRows(1)
tdSql.query("select sum(c1),sum(c2),sum(c3),sum(c4),sum(c5),sum(c6) from stb1;")
tdSql.checkData(0,0,99)
tdSql.checkData(0,1,499995)
tdSql.checkData(0,2,4995)
tdSql.checkData(0,3,594)
tdSql.checkData(0,4,49.950001001)
tdSql.checkData(0,5,599.940000000)
tdDnodes.stop(1)
tdDnodes.start(1)
time.sleep(2)
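# the aggregation results over stb1 are expected to be identical after each
# of the five taosd restarts, confirming that inserted data is durably persisted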
def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring
tdSql.prepare()
tdLog.printNoPrefix("==========step1:create table ==============")
self.prepare_datas()
os.system("taos -s ' select c6 from stb1 ; '")
self.restart_taosd_query_sum()
def stop(self):
tdSql.close()
tdLog.success(f"{__file__} successfully executed")
tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())

View File

@ -1377,9 +1377,9 @@ class TDTestCase:
self.tmqCase1(cfgPath, buildPath)
self.tmqCase2(cfgPath, buildPath)
self.tmqCase3(cfgPath, buildPath)
self.tmqCase4(cfgPath, buildPath)
self.tmqCase5(cfgPath, buildPath)
# self.tmqCase3(cfgPath, buildPath)
# self.tmqCase4(cfgPath, buildPath)
# self.tmqCase5(cfgPath, buildPath)
def stop(self):
tdSql.close()

File diff suppressed because it is too large Load Diff

View File

@ -8,6 +8,8 @@ python3 ./test.py -f 0-others/taosShellNetChk.py
python3 ./test.py -f 0-others/telemetry.py
python3 ./test.py -f 0-others/taosdMonitor.py
python3 ./test.py -f 0-others/udfTest.py
python3 ./test.py -f 0-others/udf_create.py
python3 ./test.py -f 0-others/udf_restart_taosd.py
python3 ./test.py -f 0-others/user_control.py
python3 ./test.py -f 0-others/fsync.py
@ -25,6 +27,7 @@ python3 ./test.py -f 2-query/join.py
python3 ./test.py -f 2-query/cast.py
python3 ./test.py -f 2-query/concat.py
python3 ./test.py -f 2-query/concat_ws.py
python3 ./test.py -f 2-query/check_tsdb.py
# python3 ./test.py -f 2-query/union.py
# python3 ./test.py -f 2-query/union2.py
# python3 ./test.py -f 2-query/union3.py
@ -60,11 +63,13 @@ python3 ./test.py -f 2-query/arccos.py
python3 ./test.py -f 2-query/arctan.py
python3 ./test.py -f 2-query/query_cols_tags_and_or.py
python3 ./test.py -f 2-query/nestedQuery.py
python3 ./test.py -f 7-tmq/basic5.py
python3 ./test.py -f 7-tmq/subscribeDb.py
python3 ./test.py -f 7-tmq/subscribeDb1.py
python3 ./test.py -f 7-tmq/subscribeStb.py
python3 ./test.py -f 7-tmq/subscribeStb0.py
python3 ./test.py -f 7-tmq/subscribeStb1.py
python3 ./test.py -f 7-tmq/subscribeStb2.py

View File

@ -321,9 +321,16 @@ int32_t saveConsumeResult(SThreadInfo* pInfo) {
TAOS* pConn = taos_connect(NULL, "root", "taosdata", NULL, 0);
assert(pConn != NULL);
int64_t now = taosGetTimestampMs();
// schema: ts timestamp, consumerid int, consummsgcnt bigint, consumerowcnt bigint, checkresult int
sprintf(sqlStr, "insert into %s.consumeresult values (now, %d, %" PRId64 ", %" PRId64 ", %d)", g_stConfInfo.cdbName,
pInfo->consumerId, pInfo->consumeMsgCnt, pInfo->consumeRowCnt, pInfo->checkresult);
sprintf(sqlStr, "insert into %s.consumeresult values (%"PRId64", %d, %" PRId64 ", %" PRId64 ", %d)",
g_stConfInfo.cdbName,
now,
pInfo->consumerId,
pInfo->consumeMsgCnt,
pInfo->consumeRowCnt,
pInfo->checkresult);
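// writing the captured 'now' value explicitly, instead of the SQL now keyword,
// presumably pins the recorded timestamp to the moment the result was collected
// rather than to when the server executes the insert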
char tmpString[128];
taosFprintfFile(g_fp, "%s, consume id %d result: %s\n", getCurrentTimeString(tmpString), pInfo->consumerId ,sqlStr);

@ -1 +1 @@
Subproject commit 0aad27d725f4ee6b18daf1db0c07d933aed16eea
Subproject commit a8bb88c9056735919fc50bf9b12d9562f17e844f