Merge branch '3.0' of github.com:taosdata/TDengine into 3.0
This commit is contained in:
commit
5c61eea4ad
@ -9,14 +9,13 @@ Apache Superset provides an intuitive user interface that makes creating, sharin

Through the Python connector of TDengine, Superset can support TDengine data sources and provide functions such as data presentation and analysis.

## Prerequisites

Prepare the following environment:

- TDengine is installed and running normally (both the Enterprise and Community editions are supported)
- taosAdapter is running normally; refer to [taosAdapter](../../../reference/components/taosAdapter)
- Apache Superset version 2.1.0 or above is already installed; refer to [Apache Superset](https://superset.apache.org/)

## Install TDengine Python Connector

The Python connector of TDengine (version 2.1.18 and later) comes with a connection driver that supports Superset; it is automatically installed into the Superset directory and provides data source services.

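The connector and its WebSocket component can be installed with pip, matching the installation script shown later in this document:

```shell
# Install the TDengine Python connector and its WebSocket component
# (Superset connects to TDengine over WebSocket, which requires taos-ws-py)
pip3 install taospy
pip3 install taos-ws-py
```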
@ -4,22 +4,17 @@ sidebar_label: taosdump

slug: /tdengine-reference/tools/taosdump
---

`taosdump` is a TDengine data backup/recovery tool provided for open-source users. The backed-up data files adopt the standard [Apache AVRO](https://avro.apache.org/) format, convenient for exchanging data with the external ecosystem. taosdump provides multiple data backup and recovery options to meet different needs; all supported options can be viewed through `--help`.

## Installation

taosdump provides two installation methods:

- taosdump is a default component of the TDengine installation package and can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
- Compile and install taos-tools separately; for details, refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository.

## Common Use Cases

@ -30,6 +25,9 @@ There are two ways to install taosdump:

3. Backup certain supertables or basic tables in a specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` parameter; note that this input sequence starts with the database name, supports only one database, and the second and subsequent parameters are the names of the supertables or basic tables in that database, separated by spaces;
4. Backup the system log database: TDengine clusters usually include a system database named `log`, which contains data for TDengine's own operation; taosdump does not back up the `log` database by default. If there is a specific need to back it up, use the `-a` or `--allow-sys` command line parameter.
5. "Tolerant" mode backup: versions after taosdump 1.4.1 provide the `-n` and `-L` parameters for backing up data without escape characters in "tolerant" mode, which can reduce backup time and space when table names, column names, and tag names do not use escape characters. If unsure whether `-n` and `-L` apply, use the default parameters for a "strict" mode backup. For an explanation of escape characters, please refer to the [official documentation](../../sql-manual/escape-characters/).
6. If a backup file already exists in the directory specified by the `-o` parameter, taosdump will report an error and exit to prevent data from being overwritten. Please use another empty directory or clear the original data before backing up.
7. Currently, taosdump does not support resuming an interrupted backup; once a backup is interrupted, it must be restarted from scratch. If a backup takes a long time, it is recommended to use the `-S` and `-E` options to specify start/end times for segmented backups.

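The use cases above can be sketched as command lines. The directory paths and database/table names below are placeholders, and the timestamp format for `-S`/`-E` is an assumption, not taken from this document:

```shell
# 1. Back up all databases into an empty directory (-o must not already hold a backup)
taosdump -A -o /tmp/taosdump-all

# 2. Back up two specific databases
taosdump -D db1,db2 -o /tmp/taosdump-dbs

# 3. Back up selected supertables/tables from one database (first argument is the database)
taosdump dbname stbname1 tbname1 -o /tmp/taosdump-tables

# 4. Also include the system `log` database in the backup
taosdump -a -A -o /tmp/taosdump-with-log

# 7. Segmented backup of a time range, for backups that would otherwise run too long
taosdump -S '2024-01-01 00:00:00.000' -E '2024-01-31 23:59:59.999' dbname -o /tmp/taosdump-jan
```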
:::tip
@ -42,7 +40,8 @@ There are two ways to install taosdump:

### taosdump Restore Data

- Restore data files from a specified path: use the `-i` parameter along with the data file path. As mentioned earlier, the same directory should not be used to back up different data sets, nor should the same path be used to back up the same data set multiple times; otherwise the backed-up data will be overwritten or duplicated.
- taosdump supports restoring data into a new database name with the `-W` parameter; please refer to the command line parameter description for details.

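A restore sketch based on the parameters above; the backup directory is a placeholder, and the `-W` value follows the `"db1=newDB1|db2=newDB2"` form from the help output:

```shell
# Restore from a backup directory
taosdump -i /tmp/taosdump-all

# Restore while renaming databases: db1 -> newDB1, db2 -> newDB2
taosdump -i /tmp/taosdump-all -W "db1=newDB1|db2=newDB2"
```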
:::tip
taosdump internally uses the TDengine stmt binding API to write restored data, currently using 16384 as a batch for writing. If there are many columns in the backup data, it may cause a "WAL size exceeds limit" error; in that case, try adjusting the `-B` parameter to a smaller value.

@ -105,6 +104,13 @@ Usage: taosdump [OPTION...] dbname [tbname ...]

         the table name.(Version 2.5.3)
  -T, --thread-num=THREAD_NUM Number of thread for dump in file. Default is
                              8.
  -W, --rename=RENAME-LIST    Rename database name with new name during
                              importing data. RENAME-LIST:
                              "db1=newDB1|db2=newDB2" means rename db1 to newDB1
                              and rename db2 to newDB2 (Version 2.5.4)
  -k, --retry-count=VALUE     Set the number of retry attempts for connection or
                              query failures
  -z, --retry-sleep-ms=VALUE  retry interval sleep time, unit ms
  -C, --cloud=CLOUD_DSN       specify a DSN to access TDengine cloud service
  -R, --restful               Use RESTful interface to connect TDengine
  -t, --timeout=SECONDS       The timeout seconds for websocket to interact.

@ -112,10 +118,6 @@ Usage: taosdump [OPTION...] dbname [tbname ...]

  -?, --help                  Give this help list
      --usage                 Give a short usage message
  -V, --version               Print program version

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.

@ -4,35 +4,38 @@ sidebar_label: taosBenchmark

slug: /tdengine-reference/tools/taosbenchmark
---

taosBenchmark is a performance benchmarking tool for TDengine, providing insertion, query, and subscription performance testing and outputting performance metrics.

## Installation

taosBenchmark provides two installation methods:

- taosBenchmark is a default component of the TDengine installation package and can be used after installing TDengine. For how to install TDengine, please refer to [TDengine Installation](../../../get-started/)
- Compile and install taos-tools separately; refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository.

## Operation

### Configuration and Operation Methods

taosBenchmark supports three operating modes:

- No-parameter mode
- Command line mode
- JSON configuration file mode

The command line mode is a subset of the JSON configuration file's functionality. When both are used, the parameters specified on the command line take precedence.

**Ensure that the TDengine cluster is running correctly before running taosBenchmark.**

### Running Without Command Line Arguments

Execute the following command to quickly experience taosBenchmark performing a write performance test on TDengine based on the default configuration.

```shell
taosBenchmark
```

When running without parameters, taosBenchmark defaults to connecting to the TDengine cluster specified in `/etc/taos/taos.cfg`.
After a successful connection, it creates a smart-meter example database `test` with a supertable `meters` and 10,000 subtables, inserting 10,000 records into each subtable. If the `test` database already exists, it is deleted before a new one is created.

### Running Using Command Line Configuration Parameters

@ -46,9 +49,7 @@ The above command `taosBenchmark` will create a database named `test`, establish

### Running Using a Configuration File

Running in configuration file mode provides the full functionality, so all parameters can be set in the configuration file.

Use the following command line to run taosBenchmark and control its behavior through a configuration file.

```shell
taosBenchmark -f <json file>
```
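For illustration only, a minimal query-test configuration might look like the sketch below. The exact field nesting is an assumption pieced together from the parameter names described later in this document (`filetype`, `query_times`, `specified_table_query`, `threads`, `mixed_query`, `sqls`); consult the example files shipped with taosBenchmark for the authoritative schema.

```json
{
  "filetype": "query",
  "query_times": 10000,
  "specified_table_query": {
    "threads": 3,
    "mixed_query": "no",
    "sqls": [
      { "sql": "select count(*) from test.meters" }
    ]
  }
}
```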
@ -214,6 +215,61 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)

- **-?/--help**:
  Displays help information and exits. Cannot be used with other parameters.


## Output performance indicators

#### Write indicators

After writing is completed, a summary of performance metrics is output in the last two lines, in the following format:
``` bash
SUCC: Spent 8.527298 (real 8.117379) seconds to insert rows: 10000000 with 8 thread(s) into test 1172704.41 (real 1231924.74) records/second
SUCC: insert delay, min: 19.6780ms, avg: 64.9390ms, p90: 94.6900ms, p95: 105.1870ms, p99: 130.6660ms, max: 157.0830ms
```

First line, write speed statistics:

- Spent: total write time in seconds, counted from the start of writing the first record to the end of the last one; here a total of 8.527298 seconds was spent
- real: total write time spent in engine calls, excluding the time the test framework spends preparing data; here 8.117379 seconds. The difference, 8.527298 - 8.117379 = 0.409919 seconds, is the time the test framework spent preparing data
- rows: total number of rows written, here 10,000,000 records
- threads: number of writing threads, here 8 threads writing simultaneously
- records/second: write speed = `total rows written` / `total write time`; the `real` value in parentheses, as above, is the pure engine write speed

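As a sanity check, the throughput figures in the sample output follow directly from rows divided by seconds (values taken from the SUCC lines above):

```shell
# records/second = total rows written / total write time
awk 'BEGIN { printf "overall: %.2f records/second\n", 10000000 / 8.527298 }'   # ≈ 1172704.41
awk 'BEGIN { printf "real:    %.2f records/second\n", 10000000 / 8.117379 }'   # ≈ 1231924.74
```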
Second line, single-write latency statistics:

- min: minimum write latency
- avg: average write latency
- p90: the p90 percentile of write latency
- p95: the p95 percentile of write latency
- p99: the p99 percentile of write latency
- max: maximum write latency

Through this series of indicators, the distribution of write request latency can be observed.

#### Query indicators

The query performance test mainly outputs the QPS indicator of query request speed, in the following format:

``` bash
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
INFO: Total specified queries: 30000
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
```

- The first line reports, for the 3 threads each executing 10000 queries, the average, minimum, maximum, and percentile distribution of query request latency; the SQL command is the test query statement
- The second line indicates that a total of 10000 * 3 = 30000 queries were completed
- The third line indicates that the total query time is 26.9530 seconds and the query rate (QPS) is 1113.049 queries per second

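The QPS figure is simply the total number of queries divided by the total elapsed time (values from the sample output above):

```shell
# QPS = total queries / total elapsed seconds
awk 'BEGIN { printf "QPS: %.3f\n", 30000 / 26.9530 }'   # ≈ 1113.049
```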
#### Subscription metrics

The subscription performance test mainly outputs consumer consumption speed metrics, in the following format:

``` bash
INFO: consumer id 0 has poll total msgs: 376, period rate: 37.592 msgs/s, total rows: 3760000, period rate: 375924.815 rows/s
INFO: consumer id 1 has poll total msgs: 362, period rate: 36.131 msgs/s, total rows: 3620000, period rate: 361313.504 rows/s
INFO: consumer id 2 has poll total msgs: 364, period rate: 36.378 msgs/s, total rows: 3640000, period rate: 363781.731 rows/s
INFO: consumerId: 0, consume msgs: 1000, consume rows: 10000000
INFO: consumerId: 1, consume msgs: 1000, consume rows: 10000000
INFO: consumerId: 2, consume msgs: 1000, consume rows: 10000000
INFO: Consumed total msgs: 3000, total rows: 30000000
```

- Lines 1 to 3 show the real-time consumption speed of each consumer; msgs/s is the number of messages consumed (each message contains multiple rows of data), and rows/s is the consumption speed calculated in rows
- Lines 4 to 6 show the overall statistics of each consumer after the test is completed, including the total number of messages and the total number of rows consumed
- Line 7 shows the overall statistics of all consumers: `msgs` is the total number of messages consumed, and `rows` is the total number of rows consumed

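The line-7 totals are consistent with the per-consumer statistics (3 consumers, each consuming 1000 messages and 10,000,000 rows):

```shell
# total msgs and total rows across the 3 consumers
awk 'BEGIN { print 3 * 1000, 3 * 10000000 }'   # prints: 3000 30000000
```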
## Configuration File Parameters Detailed Explanation

### General Configuration Parameters

@ -331,21 +387,6 @@ Parameters related to supertable creation are configured in the `super_tables` s

- **repeat_ts_max** : Numeric type, when composite primary key is enabled, specifies the maximum number of records with the same timestamp to be generated
- **sqls** : Array of strings type, specifies the array of SQL statements to be executed after the supertable is successfully created; table names in the SQL must be prefixed with the database name, otherwise an "unspecified database" error will occur

#### Tag and Data Column Configuration Parameters

@ -423,6 +464,11 @@ For other common parameters, see Common Configuration Parameters.

Configuration parameters for querying specified tables (can specify supertables, subtables, or regular tables) are set in `specified_table_query`.

- **mixed_query** : "yes" for `Mixed Query`, "no" for `Normal Query`; default is "no"
  `Mixed Query`: all SQL statements in `sqls` are divided into groups by the number of threads, with each thread executing one group; each SQL statement in a thread performs `query_times` queries.
  `Normal Query`: each SQL in `sqls` starts `threads` threads, which exit after executing `query_times` queries; the next SQL runs only after all threads of the previous SQL have finished and exited.
  Regardless of whether it is `Normal Query` or `Mixed Query`, the total number of query executions is the same: total queries = `sqls` * `threads` * `query_times`. The difference is that `Normal Query` starts `threads` threads per SQL, while `Mixed Query` starts `threads` threads only once to complete all SQL statements, so the number of thread startups differs.

- **query_interval** : Query interval, in seconds, default is 0.

- **threads** : Number of threads executing the SQL query, default is 1.

@ -433,7 +479,8 @@ Configuration parameters for querying specified tables (can specify supertables,

#### Configuration Parameters for Querying Supertables

Configuration parameters for querying supertables are set in `super_table_query`.
The thread mode of the supertable query is the same as the `Normal Query` mode described above for specified queries, except that `sqls` is automatically populated with all subtables of the supertable.

- **stblname** : The name of the supertable to query, required.

@ -4,26 +4,27 @@ title: Integration with Superset
---

Apache Superset is a modern enterprise-class business intelligence (BI) web application, mainly used for data exploration and visualization. It is an open-source project backed by the Apache Software Foundation, with an active community and a rich ecosystem. Apache Superset provides an intuitive user interface that makes creating, sharing, and visualizing data simple, while supporting multiple data sources and rich visualization options.

Through TDengine's Python connector, Apache Superset can support TDengine data sources and provide data presentation, analysis, and other functions.

## Prerequisites

Prepare the following environment:

- A TDengine cluster is deployed and running normally (both the Enterprise and Community editions are fine)
- taosAdapter is running normally; for details, refer to the [taosAdapter manual](../../../reference/components/taosadapter)
- Apache Superset v2.1.0 or above is installed; to install Apache Superset, refer to the [official documentation](https://superset.apache.org/)

## Install the TDengine Python Connector

Starting with `v2.1.18`, the TDengine Python connector ships with a Superset connection driver; it is installed into the corresponding Superset directory and provides data source services to Superset.
Superset connects to TDengine over the WebSocket protocol, so the `taos-ws-py` component supporting this protocol must also be installed. The full installation script is as follows:

```bash
pip3 install taospy
pip3 install taos-ws-py
```

## Configure the TDengine Data Source

**Step 1**: Go to the new database connection page: "Superset" → "Setting" → "Database Connections" → "+DATABASE".
**Step 2**: Select the TDengine database connection: choose "TDengine" in the "SUPPORTED DATABASES" drop-down list.
@ -4,26 +4,17 @@ sidebar_label: taosdump

toc_max_heading_level: 4
---

taosdump is a TDengine data backup/recovery tool provided for open-source users. Backed-up data files adopt the standard [Apache AVRO](https://avro.apache.org/) format, convenient for exchanging data with the external ecosystem. taosdump provides multiple data backup and recovery options to meet different needs; all supported options can be viewed via `--help`.

## Installation

taosdump provides two installation methods:

- taosdump is a default component of the TDengine installation package and can be used after installing TDengine; refer to [TDengine Installation](../../../get-started/)
- Compile and install taos-tools separately; for details, refer to the [taos-tools](https://github.com/taosdata/taos-tools) repository.

## Common Use Cases

@ -31,9 +22,11 @@ taosdump provides two installation methods:

1. Back up all databases: specify the `-A` or `--all-databases` parameter;
2. Back up multiple specified databases: use the `-D db1,db2,...` parameter;
3. Back up certain supertables or basic tables in a specified database: use the `dbname stbname1 stbname2 tbname1 tbname2 ...` parameter; note that the first parameter in this input sequence is the database name, only one database is supported, and the second and subsequent parameters are names of supertables or basic tables in that database, separated by spaces;
4. Back up the system log database: a TDengine cluster usually contains a system database named `log`, which holds data for TDengine's own operation; taosdump does not back up the `log` database by default. If there is a specific need to back it up, use the `-a` or `--allow-sys` command line parameter.
5. "Tolerant" mode backup: versions after taosdump 1.4.1 provide the `-n` and `-L` parameters for backing up data without escape characters in "tolerant" mode, which can reduce backup time and space when table names, column names, and tag names do not use escape characters. If unsure whether the `-n` and `-L` conditions apply, use the default parameters for a "strict" mode backup. For an explanation of escape characters, refer to the [official documentation](../../taos-sql/escape).
6. If backup files already exist in the directory specified by the `-o` parameter, taosdump reports an error and exits to prevent data from being overwritten; use another empty directory or clear the old data before backing up.
7. Currently taosdump does not support resuming an interrupted backup; once a backup is interrupted, it must restart from scratch. If a backup takes a long time, it is recommended to use the `-S` and `-E` options to specify start/end times for segmented backups.

:::tip
|
:::tip
|
||||||
- taosdump 1.4.1 之后的版本提供 `-I` 参数,用于解析 avro 文件 schema 和数据,如果指定 `-s` 参数将只解析 schema。
|
- taosdump 1.4.1 之后的版本提供 `-I` 参数,用于解析 avro 文件 schema 和数据,如果指定 `-s` 参数将只解析 schema。
|
||||||
|
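Scenarios 6 and 7 can be sketched as below; the database name `db1`, the time windows, and the backup path are illustrative, and the taosdump commands are printed rather than executed so the sketch runs without a live cluster:

```shell
#!/bin/sh
# Scenario 6: taosdump errors out on a non-empty output directory,
# so start from a freshly created (hence empty) one.
BACKUP_DIR=$(mktemp -d)
[ -z "$(ls -A "$BACKUP_DIR")" ] && echo "ok: $BACKUP_DIR is empty"

# Scenario 7: there is no resume support, so split a long backup into
# time segments with -S/-E (illustrative windows).
echo "taosdump -D db1 -S '2024-01-01 00:00:00' -E '2024-01-02 00:00:00' -o $BACKUP_DIR/part1"
echo "taosdump -D db1 -S '2024-01-02 00:00:00' -E '2024-01-03 00:00:00' -o $BACKUP_DIR/part2"
```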
@@ -45,7 +38,9 @@ taosdump 有两种安装方式:

### Restoring Data with taosdump

- Restore data files from a specified path: use the `-i` parameter with the path to the data files. As mentioned above, do not back up different data sets into the same directory, and do not back up the same data set into the same path multiple times; otherwise the backup data will be overwritten or duplicated.
- taosdump supports restoring data into a new database name via the `-W` parameter; see the command-line parameter description for details.

:::tip
taosdump internally uses the TDengine stmt binding API to write restored data, and currently uses a batch size of 16384 rows per write to improve restore performance. If the backup data contains many columns, this may cause a "WAL size exceeds limit" error; in that case, try a smaller value via the `-B` parameter.
@@ -108,6 +103,13 @@ Usage: taosdump [OPTION...] dbname [tbname ...]
                             the table name.(Version 2.5.3)
  -T, --thread-num=THREAD_NUM Number of thread for dump in file. Default is
                             8.
  -W, --rename=RENAME-LIST   Rename database name with new name during
                             importing data. RENAME-LIST:
                             "db1=newDB1|db2=newDB2" means rename db1 to newDB1
                             and rename db2 to newDB2 (Version 2.5.4)
  -k, --retry-count=VALUE    Set the number of retry attempts for connection or
                             query failures
  -z, --retry-sleep-ms=VALUE retry interval sleep time, unit ms
  -C, --cloud=CLOUD_DSN      specify a DSN to access TDengine cloud service
  -R, --restful              Use RESTful interface to connect TDengine
  -t, --timeout=SECONDS      The timeout seconds for websocket to interact.
@@ -115,10 +117,6 @@ Usage: taosdump [OPTION...] dbname [tbname ...]
  -?, --help                 Give this help list
      --usage                Give a short usage message
  -V, --version              Print program version

Mandatory or optional arguments to long options are also mandatory or optional
for any corresponding short options.
@@ -4,59 +4,59 @@ sidebar_label: taosBenchmark
toc_max_heading_level: 4
---

taosBenchmark is the performance benchmarking tool for the TDengine product. It tests the write, query, and subscription performance of TDengine and reports performance metrics.
## Installation

taosBenchmark can be installed in two ways:

- taosBenchmark is a component installed by default with the TDengine installation package and is ready to use once TDengine is installed; see [Install TDengine](../../../get-started/).
- Compile and install taos-tools separately; see the [taos-tools](https://github.com/taosdata/taos-tools) repository.
## Running

### Run Modes

taosBenchmark supports three run modes:

- No-argument mode
- Command-line mode
- JSON configuration file mode

Command-line mode covers a subset of the features of JSON configuration file mode; when both are used, command-line mode takes precedence.

**Make sure the TDengine cluster is up and running correctly before running taosBenchmark.**
### Running Without Command-Line Arguments

Run the following command to quickly experience a write performance test against TDengine using the default configuration:

```bash
taosBenchmark
```

When run without arguments, taosBenchmark connects to the TDengine cluster specified in `/etc/taos/taos.cfg` by default.
After a successful connection, it creates the smart-meter sample database test, creates the supertable meters with 10,000 child tables, and writes 10,000 rows into each child table; if the test database already exists, it is dropped and recreated by default.
### Running with Command-Line Arguments

The command line supports the parameters most frequently used by the write feature; the query and subscribe features do not support command-line mode.

Example:

```bash
taosBenchmark -d db -t 100 -n 1000 -T 4 -I stmt -y
```

This command uses `taosBenchmark` to create a database named `db` with the default supertable `meters` and 100 child tables, and writes 1,000 rows into each child table using parameter binding (stmt).
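The data volume produced by this example can be sanity-checked from the parameters themselves:

```shell
# -t 100 child tables × -n 1000 rows per child table
echo $((100 * 1000))   # 100000 rows in total
```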
### Running with a Configuration File

Running with a configuration file provides the full feature set; all command-line parameters can also be configured in the configuration file.

```bash
taosBenchmark -f <json file>
```
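For orientation, a minimal write-scenario configuration might look like the sketch below. The field names follow the insert.json sample shipped with taosBenchmark, but treat the exact structure and values here as illustrative assumptions; the parameter reference sections below are authoritative.

```json
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "localhost",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "databases": [{
    "dbinfo": { "name": "test", "drop": "yes" },
    "super_tables": [{
      "name": "meters",
      "childtable_prefix": "d",
      "childtable_count": 100,
      "insert_rows": 1000,
      "columns": [{ "type": "FLOAT" }, { "type": "INT" }],
      "tags": [{ "type": "INT" }, { "type": "BINARY", "len": 16 }]
    }]
  }]
}
```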
**Below are sample configuration files for the three supported features (write, query, and subscribe):**

#### Write Scenario JSON Configuration File Example

<details>
<summary>insert.json</summary>
@@ -89,130 +89,102 @@ taosBenchmark -f <json file>

</details>

For more JSON configuration file examples, see [here](https://github.com/taosdata/taos-tools/tree/main/example).
## Command-Line Parameters

| Command-line parameter | Description |
| ---------------------------- | ----------------------------------------------- |
| -f/--file \<json file> | The JSON configuration file to use; all parameters are taken from this file. It cannot be combined with other command-line parameters. No default value |
| -c/--config-dir \<dir> | Directory containing the TDengine cluster configuration file; the default path is /etc/taos |
| -h/--host \<host> | FQDN of the TDengine server to connect to; default localhost |
| -P/--port \<port> | Port of the TDengine server to connect to; default 6030 |
| -I/--interface \<insertMode> | Insert mode; options are taosc, rest, stmt, sml, sml-rest, corresponding to native writes, RESTful interface writes, parameter-binding writes, schemaless writes, and RESTful schemaless writes (provided by taosAdapter). Default taosc |
| -u/--user \<user> | Username for connecting to the TDengine server; default root |
| -U/--supplement-insert | Write data without creating the database and tables first; off by default |
| -p/--password \<passwd> | Password for connecting to the TDengine server; default taosdata |
| -o/--output \<file> | Path of the result output file; default ./output.txt |
| -T/--thread \<threadNum> | Number of threads inserting data; default 8 |
| -B/--interlace-rows \<rowNum> | Enables interleaved insert mode and specifies the number of rows inserted into each child table per batch. Interleaved insert mode inserts the specified number of rows into each child table in turn and repeats this process until all data is inserted. Default 0, i.e., one child table is written completely before moving to the next |
| -i/--insert-interval \<timeInterval> | Insert interval for interleaved insert mode, in ms; default 0. Only takes effect when `-B/--interlace-rows` is greater than 0; the insert threads wait this interval after inserting interleaved rows into every child table before starting the next round |
| -r/--rec-per-req \<rowNum> | Number of rows written per request to TDengine; default 30000 |
| -t/--tables \<tableNum> | Number of child tables; default 10000 |
| -S/--timestampstep \<stepLength> | Timestamp step between rows inserted into each child table, in ms; default 1 |
| -n/--records \<recordNum> | Number of rows inserted per child table; default 10000 |
| -d/--database \<dbName> | Name of the database to use; default test |
| -b/--data-type \<colType> | Data column types of the supertable, comma-separated; default "FLOAT,INT,FLOAT", e.g. `taosBenchmark -b "FLOAT,BINARY(8),NCHAR(16)"` |
| -A/--tag-type \<tagType> | Tag column types of the supertable, comma-separated; default "INT,BINARY(24)", e.g. `taosBenchmark -A "INT,BINARY(8),NCHAR(8)"` |
| -l/--columns \<colNum> | Total number of data columns of the supertable. If both this and `-b/--data-type` are set, the final column count is the larger of the two. If this parameter exceeds the column count given by `-b/--data-type`, the unspecified columns default to INT, e.g. `-l 5 -b float,double` yields `FLOAT,DOUBLE,INT,INT,INT`. If it is less than or equal to the `-b/--data-type` column count, the result is exactly the `-b/--data-type` columns and types, e.g. `-l 3 -b float,double,float,bigint` yields `FLOAT,DOUBLE,FLOAT,BIGINT` |
| -L/--partial-col-num \<colNum> | Write data only to the specified columns; the other columns are NULL. By default all columns are written |
| -w/--binwidth \<length> | Default length of nchar and binary types; default 64 |
| -m/--table-prefix \<tablePrefix> | Prefix of child table names; default "d" |
| -E/--escape-character | Switch; use escape characters in supertable and child table names. Not used by default |
| -C/--chinese | Switch; use Unicode Chinese characters for nchar and binary data. Not used by default |
| -N/--normal-table | Switch; create only normal tables and no supertable. Default false. Only available in the taosc, stmt, and rest insert modes |
| -M/--random | Switch; insert randomly generated values. Default false. If set, the inserted data is generated randomly: numeric tag/data columns get random values within the type's range, and NCHAR and BINARY tag/data columns get random strings within the specified length |
| -x/--aggr-func | Switch; query aggregate functions after inserting. Default false |
| -y/--answer-yes | Switch; by default the user must confirm at a prompt before continuing, and this parameter skips that confirmation. Default false |
| -O/--disorder \<Percentage> | Percentage probability of out-of-order data, in the range [0,50]. Default 0, i.e., no out-of-order data |
| -R/--disorder-range \<timeRange> | Timestamp rollback range for out-of-order data: generated out-of-order timestamps equal the timestamp that would otherwise be used minus a random value within this range. Only effective when the percentage given by `-O/--disorder` is greater than 0 |
| -F/--prepare_rand \<Num> | Number of unique values in the generated random data; 1 means all data are identical. Default 10000 |
| -a/--replica \<replicaNum> | Number of replicas when creating the database; default 1 |
| -k/--keep-trying \<NUMBER> | Number of retries after failure; no retry by default. Requires v3.0.9 or above |
| -z/--trying-interval \<NUMBER> | Retry interval in milliseconds; only effective when retries are enabled with -k. Requires v3.0.9 or above |
| -v/--vgroups \<NUMBER> | Number of vgroups when creating the database; only effective for TDengine v3.0+ |
| -V/--version | Print version information and exit; cannot be combined with other parameters |
| -?/--help | Print help information and exit; cannot be combined with other parameters |
## Output Performance Metrics

#### Write Metrics

After writing completes, the last two lines of output give the overall performance metrics, in the following format:

``` bash
SUCC: Spent 8.527298 (real 8.117379) seconds to insert rows: 10000000 with 8 thread(s) into test 1172704.41 (real 1231924.74) records/second
SUCC: insert delay, min: 19.6780ms, avg: 64.9390ms, p90: 94.6900ms, p95: 105.1870ms, p99: 130.6660ms, max: 157.0830ms
```

Write-speed statistics on the first line:
- Spent: total write time in seconds, measured from the start of writing the first record to the end of the last one; here, 8.527298 seconds in total
- real: total write time spent in engine calls, excluding the time the test framework spends preparing data; 8.117379 seconds in this example, so 8.527298 - 8.117379 = 0.409919 seconds were spent by the test framework preparing data
- rows: total number of rows written, 10 million records here
- threads: number of write threads, here 8 threads writing concurrently
- records/second: write speed = `total rows written` / `total write time`; the `real` value in parentheses is, as above, the pure engine write speed

Per-write latency statistics on the second line:
- min: minimum write latency
- avg: average write latency
- p90: write latency at the 90th percentile
- p95: write latency at the 95th percentile
- p99: write latency at the 99th percentile
- max: maximum write latency

These metrics show the latency distribution of write requests.
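The records/second figure on the sample line can be re-derived from the Spent time and row count; a small shell check over the sample output (not part of taosBenchmark):

```shell
line='SUCC: Spent 8.527298 (real 8.117379) seconds to insert rows: 10000000 with 8 thread(s) into insert rows'
line='SUCC: Spent 8.527298 (real 8.117379) seconds to insert rows: 10000000 with 8 thread(s) into test 1172704.41 (real 1231924.74) records/second'

# records/second = total rows (field 10) / total seconds (field 3)
echo "$line" | awk '{ printf "%.0f\n", $10 / $3 }'   # prints 1172704, matching the reported 1172704.41
```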
#### Query Metrics

Query performance tests mainly report the query request rate (QPS), in the following format:

``` bash
complete query with 3 threads and 10000 query delay avg: 0.002686s min: 0.001182s max: 0.012189s p90: 0.002977s p95: 0.003493s p99: 0.004645s SQL command: select ...
INFO: Total specified queries: 30000
INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049
```

- The first line reports, for 3 threads each executing 10000 queries, the percentile distribution of query latency; `SQL command` is the query statement under test
- The second line shows that a total of 10000 * 3 = 30000 queries were completed
- The third line shows that the queries took 26.9530 seconds in total, for a query rate (QPS) of 1113.049 queries/second
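The QPS figure can likewise be re-derived from the sample output (a shell check over the log line, not part of taosBenchmark):

```shell
line='INFO: Spend 26.9530 second completed total queries: 30000, the QPS of all threads: 1113.049'

# total queries = 3 threads × 10000 queries each
echo $((3 * 10000))                                                     # 30000

# QPS = total queries (field 8, trailing comma stripped) / seconds (field 3)
echo "$line" | awk '{ gsub(",", "", $8); printf "%.0f\n", $8 / $3 }'    # 1113
```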
#### Subscription Metrics

Subscription performance tests mainly report consumer consumption rates, in the following format:

``` bash
INFO: consumer id 0 has poll total msgs: 376, period rate: 37.592 msgs/s, total rows: 3760000, period rate: 375924.815 rows/s
INFO: consumer id 1 has poll total msgs: 362, period rate: 36.131 msgs/s, total rows: 3620000, period rate: 361313.504 rows/s
INFO: consumer id 2 has poll total msgs: 364, period rate: 36.378 msgs/s, total rows: 3640000, period rate: 363781.731 rows/s
INFO: consumerId: 0, consume msgs: 1000, consume rows: 10000000
INFO: consumerId: 1, consume msgs: 1000, consume rows: 10000000
INFO: consumerId: 2, consume msgs: 1000, consume rows: 10000000
INFO: Consumed total msgs: 3000, total rows: 30000000
```

- Lines 1-3 report each consumer's current consumption rate in real time; `msgs/s` counts consumed messages (each message contains multiple rows), and `rows/s` is the consumption rate counted in rows
- Lines 4-6 are the per-consumer overall statistics after the test completes: total messages and total rows consumed
- Line 7 is the overall statistic across all consumers: `msgs` is the total number of messages consumed, `rows` the total number of rows consumed
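The totals on the last line follow directly from the per-consumer statistics above:

```shell
# 3 consumers × 1000 msgs each; each msg in this run carries 10000 rows
echo $((3 * 1000))           # 3000 msgs in total
echo $((3 * 1000 * 10000))   # 30000000 rows in total
```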
## Configuration File Parameters

@@ -220,7 +192,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
The parameters listed in this section apply to all function modes.

- **filetype**: feature category; possible values are `insert`, `query`, and `subscribe`, corresponding to the write, query, and subscribe features. Only one of them may be specified in each configuration file.
- **cfgdir**: directory containing the TDengine client configuration file; the default path is /etc/taos.

- **host**: FQDN of the TDengine server to connect to; the default value is localhost.

@@ -252,7 +224,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)

- **name**: the database name.

- **drop**: whether to drop and recreate the database when it already exists; possible values are "yes" and "no", with the default "yes".

#### Stream Computing Configuration Parameters

@@ -331,21 +303,6 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **repeat_ts_max**: numeric; when the composite primary key is enabled, the maximum number of records generated with the same timestamp.
- **sqls**: array of strings; SQL statements to execute after the supertable has been created successfully. Table names in the SQL must be prefixed with the database name, otherwise an unspecified-database error is reported.

#### Tag and Data Column Configuration Parameters
@@ -415,7 +372,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)

In query scenarios, `filetype` must be set to `query`.
`query_times` specifies the number of times to run each query; numeric type.

Query scenarios can kill slow queries via the `kill_slow_query_threshold` and `kill_slow_query_interval` parameters: threshold means a query whose exec_usec exceeds the specified time is killed by taosBenchmark, in seconds;
interval is the sleep time between checks, to avoid consuming CPU by continuously polling for slow queries, in seconds.

For other general parameters, see [General Configuration Parameters](#通用配置参数).
@@ -423,6 +381,11 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)

Configuration parameters for querying specified tables (supertables, child tables, or normal tables) are set in `specified_table_query`.

- **mixed_query**: query mode; "yes" selects `mixed query`, "no" selects `normal query`; the default is "no".
  `Mixed query`: all SQL statements in `sqls` are divided among the `threads` threads, each thread executing one group, and every SQL statement in a thread is executed `query_times` times.
  `Normal query`: each SQL statement in `sqls` starts `threads` threads; each thread exits after executing it `query_times` times, and the next SQL statement starts only after all threads of the previous one have finished and exited.
  The total number of queries executed is the same in both modes: total queries = number of `sqls` × `threads` × `query_times`. The difference is that `normal query` starts `threads` threads for every SQL statement, while `mixed query` starts the `threads` threads only once to execute all SQL statements, so the number of thread launches differs.
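The total-query formula can be illustrated with hypothetical numbers (5 SQL statements, 3 threads, query_times 10):

```shell
sqls=5
threads=3
query_times=10
# same total for normal and mixed query modes
echo $((sqls * threads * query_times))   # 150
```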
- **query_interval**: query interval, in seconds; default 0.

- **threads**: number of threads executing the query SQL; default 1.

@@ -433,7 +396,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)

#### Parameters for Querying Supertables

Configuration parameters for querying supertables are set in `super_table_query`.
The threading model of supertable queries is the same as the `normal query` mode described above for specified query statements, except that here `sqls` is populated with all child tables.

- **stblname**: name of the supertable to query; required.
@@ -5791,7 +5791,6 @@ int32_t tsdbReaderSuspend2(STsdbReader* pReader) {

  // make sure only release once
  void* p = pReader->pReadSnap;
  if ((p == atomic_val_compare_exchange_ptr((void**)&pReader->pReadSnap, p, NULL)) && (p != NULL)) {
    tsdbUntakeReadSnap2(pReader, p, false);
    pReader->pReadSnap = NULL;
@@ -11,6 +11,6 @@ target_link_libraries(
  PRIVATE os util transport qcom nodes
)

if(${BUILD_TEST} AND NOT ${TD_WINDOWS})
  ADD_SUBDIRECTORY(test)
endif()
@@ -83,6 +83,8 @@ int32_t hInnerJoinDo(struct SOperatorInfo* pOperator) {
  return code;
}

#ifdef HASH_JOIN_FULL

int32_t hLeftJoinHandleSeqRowRemains(struct SOperatorInfo* pOperator, SHJoinOperatorInfo* pJoin, bool* loopCont) {
  bool allFetched = false;
  SHJoinCtx* pCtx = &pJoin->ctx;

@@ -346,4 +348,5 @@ int32_t hLeftJoinDo(struct SOperatorInfo* pOperator) {
  return TSDB_CODE_SUCCESS;
}

#endif
@@ -89,7 +89,7 @@ int32_t hJoinSetImplFp(SHJoinOperatorInfo* pJoin) {
    case JOIN_TYPE_RIGHT: {
      switch (pJoin->subType) {
        case JOIN_STYPE_OUTER:
          //pJoin->joinFp = hLeftJoinDo;  TOOPEN
          break;
        default:
          break;
@@ -44,3 +44,15 @@ TARGET_INCLUDE_DIRECTORIES(
  PUBLIC "${TD_SOURCE_DIR}/include/common"
  PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)

ADD_EXECUTABLE(execUtilTests execUtilTests.cpp)
TARGET_LINK_LIBRARIES(
  execUtilTests
  PRIVATE os util common executor gtest_main qcom function planner scalar nodes vnode
)

TARGET_INCLUDE_DIRECTORIES(
  execUtilTests
  PUBLIC "${TD_SOURCE_DIR}/include/common"
  PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
@@ -0,0 +1,35 @@
#include "gtest/gtest.h"

#include <vector>  // std::vector used below

#include "executil.h"

TEST(execUtilTest, resRowTest) {
  SDiskbasedBuf *pBuf = nullptr;
  int32_t pageSize = 32;
  int32_t numPages = 3;
  int32_t code = createDiskbasedBuf(&pBuf, pageSize, pageSize * numPages, "test_buf", "/");
  EXPECT_EQ(code, TSDB_CODE_SUCCESS);

  std::vector<void *> pages(numPages);
  std::vector<int32_t> pageIds(numPages);
  for (int32_t i = 0; i < numPages; ++i) {
    pages[i] = getNewBufPage(pBuf, &pageIds[i]);
    EXPECT_NE(pages[i], nullptr);
    EXPECT_EQ(pageIds[i], i);
  }

  EXPECT_EQ(getNewBufPage(pBuf, nullptr), nullptr);

  SResultRowPosition pos;
  pos.offset = 0;
  for (int32_t i = 0; i < numPages; ++i) {
    pos.pageId = pageIds[i];
    bool forUpdate = i & 0x1;
    SResultRow *row = getResultRowByPos(pBuf, &pos, forUpdate);
    EXPECT_EQ((void *)row, pages[i]);
  }

  pos.pageId = numPages + 1;
  EXPECT_EQ(getResultRowByPos(pBuf, &pos, true), nullptr);

  destroyDiskbasedBuf(pBuf);
}
@@ -137,6 +137,8 @@ static void processTaskQueue(SQueueInfo *pInfo, SSchedMsg *pSchedMsg) {
}

int32_t initTaskQueue() {
  memset(&taskQueue, 0, sizeof(taskQueue));

  taskQueue.wrokrerPool.name = "taskWorkPool";
  taskQueue.wrokrerPool.min = tsNumOfTaskQueueThreads;
  taskQueue.wrokrerPool.max = tsNumOfTaskQueueThreads;
@@ -2985,279 +2985,93 @@ static int32_t doScalarFunction2(SScalarParam *pInput, int32_t inputNum, SScalar
  bool hasNullType = (IS_NULL_TYPE(GET_PARAM_TYPE(&pInput[0])) || IS_NULL_TYPE(GET_PARAM_TYPE(&pInput[1])));

  int32_t numOfRows = TMAX(pInput[0].numOfRows, pInput[1].numOfRows);
  for (int32_t i = 0; i < numOfRows; ++i) {
    // a single-row operand acts as a constant: always read its row 0
    int32_t colIdx1 = (pInput[0].numOfRows == 1) ? 0 : i;
    int32_t colIdx2 = (pInput[1].numOfRows == 1) ? 0 : i;
    if (colDataIsNull_s(pInputData[0], colIdx1) || colDataIsNull_s(pInputData[1], colIdx2) || hasNullType) {
      colDataSetNULL(pOutputData, i);
      continue;
    }
    double in2;
    GET_TYPED_DATA(in2, double, GET_PARAM_TYPE(&pInput[1]), colDataGetData(pInputData[1], colIdx2));
    switch (GET_PARAM_TYPE(&pInput[0])) {
      case TSDB_DATA_TYPE_DOUBLE: {
        double *in = (double *)pInputData[0]->pData;
        double *out = (double *)pOutputData->pData;
        double result = d1(in[colIdx1], in2);
        if (isinf(result) || isnan(result)) {
          colDataSetNULL(pOutputData, i);
        } else {
          out[i] = result;
        }
        break;
      }
      case TSDB_DATA_TYPE_FLOAT: {
        float *in = (float *)pInputData[0]->pData;
        float *out = (float *)pOutputData->pData;
        float result = f1(in[colIdx1], (float)in2);
        if (isinf(result) || isnan(result)) {
          colDataSetNULL(pOutputData, i);
        } else {
          out[i] = result;
        }
        break;
      }
      case TSDB_DATA_TYPE_TINYINT: {
        int8_t *in = (int8_t *)pInputData[0]->pData;
        int8_t *out = (int8_t *)pOutputData->pData;
        int8_t result = (int8_t)d1((double)in[colIdx1], in2);
        out[i] = result;
        break;
      }
      case TSDB_DATA_TYPE_SMALLINT: {
        int16_t *in = (int16_t *)pInputData[0]->pData;
        int16_t *out = (int16_t *)pOutputData->pData;
        int16_t result = (int16_t)d1((double)in[colIdx1], in2);
        out[i] = result;
        break;
      }
      case TSDB_DATA_TYPE_INT: {
        int32_t *in = (int32_t *)pInputData[0]->pData;
        int32_t *out = (int32_t *)pOutputData->pData;
        int32_t result = (int32_t)d1((double)in[colIdx1], in2);
        out[i] = result;
        break;
      }
      case TSDB_DATA_TYPE_BIGINT: {
        int64_t *in = (int64_t *)pInputData[0]->pData;
        int64_t *out = (int64_t *)pOutputData->pData;
        int64_t result = (int64_t)d1((double)in[colIdx1], in2);
        out[i] = result;
        break;
      }
      case TSDB_DATA_TYPE_UTINYINT: {
        uint8_t *in = (uint8_t *)pInputData[0]->pData;
        uint8_t *out = (uint8_t *)pOutputData->pData;
        uint8_t result = (uint8_t)d1((double)in[colIdx1], in2);
        out[i] = result;
        break;
      }
      case TSDB_DATA_TYPE_USMALLINT: {
        uint16_t *in = (uint16_t *)pInputData[0]->pData;
        uint16_t *out = (uint16_t *)pOutputData->pData;
|
|
||||||
uint16_t result = (uint16_t)d1((double)in[0], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_UINT: {
|
|
||||||
uint32_t *in = (uint32_t *)pInputData[0]->pData;
|
|
||||||
uint32_t *out = (uint32_t *)pOutputData->pData;
|
|
||||||
uint32_t result = (uint32_t)d1((double)in[0], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_UBIGINT: {
|
|
||||||
uint64_t *in = (uint64_t *)pInputData[0]->pData;
|
|
||||||
uint64_t *out = (uint64_t *)pOutputData->pData;
|
|
||||||
uint64_t result = (uint64_t)d1((double)in[0], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
break;
|
||||||
}
|
}
|
||||||
}
|
case TSDB_DATA_TYPE_FLOAT: {
|
||||||
} else if (pInput[1].numOfRows == 1) {
|
float *in = (float *)pInputData[0]->pData;
|
||||||
if (colDataIsNull_s(pInputData[1], 0) || hasNullType) {
|
float *out = (float *)pOutputData->pData;
|
||||||
colDataSetNNULL(pOutputData, 0, pInput[0].numOfRows);
|
float result = f1(in[colIdx1], (float)in2);
|
||||||
} else {
|
if (isinf(result) || isnan(result)) {
|
||||||
for (int32_t i = 0; i < numOfRows; ++i) {
|
|
||||||
if (colDataIsNull_s(pInputData[0], i)) {
|
|
||||||
colDataSetNULL(pOutputData, i);
|
colDataSetNULL(pOutputData, i);
|
||||||
continue;
|
} else {
|
||||||
}
|
out[i] = result;
|
||||||
double in2;
|
|
||||||
GET_TYPED_DATA(in2, double, GET_PARAM_TYPE(&pInput[1]), colDataGetData(pInputData[1], 0));
|
|
||||||
switch (GET_PARAM_TYPE(&pInput[0])) {
|
|
||||||
case TSDB_DATA_TYPE_DOUBLE: {
|
|
||||||
double *in = (double *)pInputData[0]->pData;
|
|
||||||
double *out = (double *)pOutputData->pData;
|
|
||||||
double result = d1(in[i], in2);
|
|
||||||
if (isinf(result) || isnan(result)) {
|
|
||||||
colDataSetNULL(pOutputData, i);
|
|
||||||
} else {
|
|
||||||
out[i] = result;
|
|
||||||
}
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_FLOAT: {
|
|
||||||
float *in = (float *)pInputData[0]->pData;
|
|
||||||
float *out = (float *)pOutputData->pData;
|
|
||||||
float result = f1(in[i], in2);
|
|
||||||
if (isinf(result) || isnan(result)) {
|
|
||||||
colDataSetNULL(pOutputData, i);
|
|
||||||
} else {
|
|
||||||
out[i] = result;
|
|
||||||
}
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_TINYINT: {
|
|
||||||
int8_t *in = (int8_t *)pInputData[0]->pData;
|
|
||||||
int8_t *out = (int8_t *)pOutputData->pData;
|
|
||||||
int8_t result = (int8_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_SMALLINT: {
|
|
||||||
int16_t *in = (int16_t *)pInputData[0]->pData;
|
|
||||||
int16_t *out = (int16_t *)pOutputData->pData;
|
|
||||||
int16_t result = (int16_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_INT: {
|
|
||||||
int32_t *in = (int32_t *)pInputData[0]->pData;
|
|
||||||
int32_t *out = (int32_t *)pOutputData->pData;
|
|
||||||
int32_t result = (int32_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_BIGINT: {
|
|
||||||
int64_t *in = (int64_t *)pInputData[0]->pData;
|
|
||||||
int64_t *out = (int64_t *)pOutputData->pData;
|
|
||||||
int64_t result = (int64_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_UTINYINT: {
|
|
||||||
uint8_t *in = (uint8_t *)pInputData[0]->pData;
|
|
||||||
uint8_t *out = (uint8_t *)pOutputData->pData;
|
|
||||||
uint8_t result = (uint8_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_USMALLINT: {
|
|
||||||
uint16_t *in = (uint16_t *)pInputData[0]->pData;
|
|
||||||
uint16_t *out = (uint16_t *)pOutputData->pData;
|
|
||||||
uint16_t result = (uint16_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_UINT: {
|
|
||||||
uint32_t *in = (uint32_t *)pInputData[0]->pData;
|
|
||||||
uint32_t *out = (uint32_t *)pOutputData->pData;
|
|
||||||
uint32_t result = (uint32_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
case TSDB_DATA_TYPE_UBIGINT: {
|
|
||||||
uint64_t *in = (uint64_t *)pInputData[0]->pData;
|
|
||||||
uint64_t *out = (uint64_t *)pOutputData->pData;
|
|
||||||
uint64_t result = (uint64_t)d1((double)in[i], in2);
|
|
||||||
out[i] = result;
|
|
||||||
break;
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_TINYINT: {
|
||||||
|
int8_t *in = (int8_t *)pInputData[0]->pData;
|
||||||
|
int8_t *out = (int8_t *)pOutputData->pData;
|
||||||
|
int8_t result = (int8_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_SMALLINT: {
|
||||||
|
int16_t *in = (int16_t *)pInputData[0]->pData;
|
||||||
|
int16_t *out = (int16_t *)pOutputData->pData;
|
||||||
|
int16_t result = (int16_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_INT: {
|
||||||
|
int32_t *in = (int32_t *)pInputData[0]->pData;
|
||||||
|
int32_t *out = (int32_t *)pOutputData->pData;
|
||||||
|
int32_t result = (int32_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_BIGINT: {
|
||||||
|
int64_t *in = (int64_t *)pInputData[0]->pData;
|
||||||
|
int64_t *out = (int64_t *)pOutputData->pData;
|
||||||
|
int64_t result = (int64_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_UTINYINT: {
|
||||||
|
uint8_t *in = (uint8_t *)pInputData[0]->pData;
|
||||||
|
uint8_t *out = (uint8_t *)pOutputData->pData;
|
||||||
|
uint8_t result = (uint8_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_USMALLINT: {
|
||||||
|
uint16_t *in = (uint16_t *)pInputData[0]->pData;
|
||||||
|
uint16_t *out = (uint16_t *)pOutputData->pData;
|
||||||
|
uint16_t result = (uint16_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_UINT: {
|
||||||
|
uint32_t *in = (uint32_t *)pInputData[0]->pData;
|
||||||
|
uint32_t *out = (uint32_t *)pOutputData->pData;
|
||||||
|
uint32_t result = (uint32_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
|
}
|
||||||
|
case TSDB_DATA_TYPE_UBIGINT: {
|
||||||
|
uint64_t *in = (uint64_t *)pInputData[0]->pData;
|
||||||
|
uint64_t *out = (uint64_t *)pOutputData->pData;
|
||||||
|
uint64_t result = (uint64_t)d1((double)in[colIdx1], in2);
|
||||||
|
out[i] = result;
|
||||||
|
break;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
|
@@ -121,19 +121,6 @@ _return:
   SCL_RET(code);
 }
 
-int32_t convertBinaryToDouble(const void *inData, void *outData) {
-  char *tmp = taosMemoryCalloc(1, varDataTLen(inData));
-  if (tmp == NULL) {
-    *((double *)outData) = 0.;
-    SCL_ERR_RET(terrno);
-  }
-  (void)memcpy(tmp, varDataVal(inData), varDataLen(inData));
-  double ret = taosStr2Double(tmp, NULL);
-  taosMemoryFree(tmp);
-  *((double *)outData) = ret;
-  SCL_RET(TSDB_CODE_SUCCESS);
-}
-
 typedef int32_t (*_getBigintValue_fn_t)(void *src, int32_t index, int64_t *res);
 
 int32_t getVectorBigintValue_TINYINT(void *src, int32_t index, int64_t *res) {
@@ -1129,17 +1116,11 @@ int32_t vectorConvertCols(SScalarParam *pLeft, SScalarParam *pRight, SScalarPara
   }
 
   if (type != GET_PARAM_TYPE(param1)) {
-    code = vectorConvertSingleCol(param1, paramOut1, type, startIndex, numOfRows);
-    if (code) {
-      return code;
-    }
+    SCL_ERR_RET(vectorConvertSingleCol(param1, paramOut1, type, startIndex, numOfRows));
   }
 
   if (type != GET_PARAM_TYPE(param2)) {
-    code = vectorConvertSingleCol(param2, paramOut2, type, startIndex, numOfRows);
-    if (code) {
-      return code;
-    }
+    SCL_ERR_RET(vectorConvertSingleCol(param2, paramOut2, type, startIndex, numOfRows));
   }
 
   return TSDB_CODE_SUCCESS;
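The hunk above replaces a `code = …; if (code) return code;` pattern with a single `SCL_ERR_RET(…)` call. As a minimal sketch of how such an early-return macro works (`SCL_ERR_RET` here is a simplified stand-in for the TDengine macro, assuming `TSDB_CODE_SUCCESS` is 0 as in the codebase):

```c
#include <assert.h>
#include <stdint.h>

#define TSDB_CODE_SUCCESS 0

/* Evaluate an int32_t-returning expression once; on failure, return its
 * error code from the enclosing function immediately. */
#define SCL_ERR_RET(expr)          \
  do {                             \
    int32_t _code = (expr);        \
    if (_code != TSDB_CODE_SUCCESS) { \
      return _code;                \
    }                              \
  } while (0)

static int32_t alwaysFails(void) { return 42; }

static int32_t caller(void) {
  SCL_ERR_RET(alwaysFails());  /* bails out here with 42 */
  return TSDB_CODE_SUCCESS;    /* never reached */
}
```

The `do { … } while (0)` wrapper makes the macro behave like a single statement, so it composes safely with unbraced `if`/`else`.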
@@ -1208,22 +1189,16 @@ static int32_t vectorMathTsAddHelper(SColumnInfoData *pLeftCol, SColumnInfoData
 static int32_t vectorConvertVarToDouble(SScalarParam *pInput, int32_t *converted, SColumnInfoData **pOutputCol) {
   SScalarParam     output = {0};
   SColumnInfoData *pCol = pInput->columnData;
+  int32_t          code = TSDB_CODE_SUCCESS;
+  *pOutputCol = NULL;
   if (IS_VAR_DATA_TYPE(pCol->info.type) && pCol->info.type != TSDB_DATA_TYPE_JSON && pCol->info.type != TSDB_DATA_TYPE_VARBINARY) {
-    int32_t code = vectorConvertSingleCol(pInput, &output, TSDB_DATA_TYPE_DOUBLE, -1, -1);
-    if (code != TSDB_CODE_SUCCESS) {
-      *pOutputCol = NULL;
-      SCL_ERR_RET(code);
-    }
+    SCL_ERR_RET(vectorConvertSingleCol(pInput, &output, TSDB_DATA_TYPE_DOUBLE, -1, -1));
 
     *converted = VECTOR_DO_CONVERT;
 
     *pOutputCol = output.columnData;
     SCL_RET(code);
   }
 
   *converted = VECTOR_UN_CONVERT;
 
   *pOutputCol = pInput->columnData;
   SCL_RET(TSDB_CODE_SUCCESS);
 }
@@ -1616,68 +1591,25 @@ int32_t vectorMathRemainder(SScalarParam *pLeft, SScalarParam *pRight, SScalarPa
 
   double *output = (double *)pOutputCol->pData;
 
-  if (pLeft->numOfRows == pRight->numOfRows) {
-    for (; i < pRight->numOfRows && i >= 0; i += step, output += 1) {
-      if (IS_NULL) {
-        colDataSetNULL(pOutputCol, i);
-        continue;
-      }
-
-      double lx = 0;
-      double rx = 0;
-      SCL_ERR_JRET(getVectorDoubleValueFnLeft(LEFT_COL, i, &lx));
-      SCL_ERR_JRET(getVectorDoubleValueFnRight(RIGHT_COL, i, &rx));
-      if (isnan(lx) || isinf(lx) || isnan(rx) || isinf(rx) || FLT_EQUAL(rx, 0)) {
-        colDataSetNULL(pOutputCol, i);
-        continue;
-      }
-
-      *output = lx - ((int64_t)(lx / rx)) * rx;
-    }
-  } else if (pLeft->numOfRows == 1) {
-    double lx = 0;
-    SCL_ERR_JRET(getVectorDoubleValueFnLeft(LEFT_COL, 0, &lx));
-    if (IS_HELPER_NULL(pLeftCol, 0)) {  // Set pLeft->numOfRows NULL value
-      colDataSetNNULL(pOutputCol, 0, pRight->numOfRows);
-    } else {
-      for (; i >= 0 && i < pRight->numOfRows; i += step, output += 1) {
-        if (IS_HELPER_NULL(pRightCol, i)) {
-          colDataSetNULL(pOutputCol, i);
-          continue;
-        }
-
-        double rx = 0;
-        SCL_ERR_JRET(getVectorDoubleValueFnRight(RIGHT_COL, i, &rx));
-        if (isnan(rx) || isinf(rx) || FLT_EQUAL(rx, 0)) {
-          colDataSetNULL(pOutputCol, i);
-          continue;
-        }
-
-        *output = lx - ((int64_t)(lx / rx)) * rx;
-      }
-    }
-  } else if (pRight->numOfRows == 1) {
-    double rx = 0;
-    SCL_ERR_JRET(getVectorDoubleValueFnRight(RIGHT_COL, 0, &rx));
-    if (IS_HELPER_NULL(pRightCol, 0) || FLT_EQUAL(rx, 0)) {  // Set pLeft->numOfRows NULL value
-      colDataSetNNULL(pOutputCol, 0, pLeft->numOfRows);
-    } else {
-      for (; i >= 0 && i < pLeft->numOfRows; i += step, output += 1) {
-        if (IS_HELPER_NULL(pLeftCol, i)) {
-          colDataSetNULL(pOutputCol, i);
-          continue;
-        }
-
-        double lx = 0;
-        SCL_ERR_JRET(getVectorDoubleValueFnLeft(LEFT_COL, i, &lx));
-        if (isnan(lx) || isinf(lx)) {
-          colDataSetNULL(pOutputCol, i);
-          continue;
-        }
-
-        *output = lx - ((int64_t)(lx / rx)) * rx;
-      }
-    }
+  int32_t numOfRows = TMAX(pLeft->numOfRows, pRight->numOfRows);
+  for (; i < numOfRows && i >= 0; i += step, output += 1) {
+    int32_t leftidx = pLeft->numOfRows == 1 ? 0 : i;
+    int32_t rightidx = pRight->numOfRows == 1 ? 0 : i;
+    if (IS_HELPER_NULL(pLeftCol, leftidx) || IS_HELPER_NULL(pRightCol, rightidx)) {
+      colDataSetNULL(pOutputCol, i);
+      continue;
+    }
+
+    double lx = 0;
+    double rx = 0;
+    SCL_ERR_JRET(getVectorDoubleValueFnLeft(LEFT_COL, leftidx, &lx));
+    SCL_ERR_JRET(getVectorDoubleValueFnRight(RIGHT_COL, rightidx, &rx));
+    if (isnan(lx) || isinf(lx) || isnan(rx) || isinf(rx) || FLT_EQUAL(rx, 0)) {
+      colDataSetNULL(pOutputCol, i);
+      continue;
+    }
+
+    *output = lx - ((int64_t)(lx / rx)) * rx;
   }
 
 _return:
@@ -1739,33 +1671,6 @@ int32_t vectorAssign(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam *pO
   return TSDB_CODE_SUCCESS;
 }
 
-static int32_t vectorBitAndHelper(SColumnInfoData *pLeftCol, SColumnInfoData *pRightCol, SColumnInfoData *pOutputCol,
-                                  int32_t numOfRows, int32_t step, int32_t i) {
-  _getBigintValue_fn_t getVectorBigintValueFnLeft;
-  _getBigintValue_fn_t getVectorBigintValueFnRight;
-  SCL_ERR_RET(getVectorBigintValueFn(pLeftCol->info.type, &getVectorBigintValueFnLeft));
-  SCL_ERR_RET(getVectorBigintValueFn(pRightCol->info.type, &getVectorBigintValueFnRight));
-
-  int64_t *output = (int64_t *)pOutputCol->pData;
-
-  if (IS_HELPER_NULL(pRightCol, 0)) {  // Set pLeft->numOfRows NULL value
-    colDataSetNNULL(pOutputCol, 0, numOfRows);
-  } else {
-    for (; i >= 0 && i < numOfRows; i += step, output += 1) {
-      if (IS_HELPER_NULL(pLeftCol, i)) {
-        colDataSetNULL(pOutputCol, i);
-        continue;  // TODO set null or ignore
-      }
-      int64_t leftRes = 0;
-      int64_t rightRes = 0;
-      SCL_ERR_RET(getVectorBigintValueFnLeft(LEFT_COL, i, &leftRes));
-      SCL_ERR_RET(getVectorBigintValueFnRight(RIGHT_COL, 0, &rightRes));
-      *output = leftRes & rightRes;
-    }
-  }
-  SCL_RET(TSDB_CODE_SUCCESS);
-}
-
 int32_t vectorBitAnd(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam *pOut, int32_t _ord) {
   SColumnInfoData *pOutputCol = pOut->columnData;
   pOut->numOfRows = TMAX(pLeft->numOfRows, pRight->numOfRows);
@@ -1786,22 +1691,19 @@ int32_t vectorBitAnd(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam *pO
   SCL_ERR_JRET(getVectorBigintValueFn(pRightCol->info.type, &getVectorBigintValueFnRight));
 
   int64_t *output = (int64_t *)pOutputCol->pData;
-  if (pLeft->numOfRows == pRight->numOfRows) {
-    for (; i < pRight->numOfRows && i >= 0; i += step, output += 1) {
-      if (IS_NULL) {
-        colDataSetNULL(pOutputCol, i);
-        continue;  // TODO set null or ignore
-      }
-      int64_t leftRes = 0;
-      int64_t rightRes = 0;
-      SCL_ERR_JRET(getVectorBigintValueFnLeft(LEFT_COL, i, &leftRes));
-      SCL_ERR_JRET(getVectorBigintValueFnRight(RIGHT_COL, i, &rightRes));
-      *output = leftRes & rightRes;
-    }
-  } else if (pLeft->numOfRows == 1) {
-    SCL_ERR_JRET(vectorBitAndHelper(pRightCol, pLeftCol, pOutputCol, pRight->numOfRows, step, i));
-  } else if (pRight->numOfRows == 1) {
-    SCL_ERR_JRET(vectorBitAndHelper(pLeftCol, pRightCol, pOutputCol, pLeft->numOfRows, step, i));
+  int32_t numOfRows = TMAX(pLeft->numOfRows, pRight->numOfRows);
+  for (; i < numOfRows && i >= 0; i += step, output += 1) {
+    int32_t leftidx = pLeft->numOfRows == 1 ? 0 : i;
+    int32_t rightidx = pRight->numOfRows == 1 ? 0 : i;
+    if (IS_HELPER_NULL(pRightCol, rightidx) || IS_HELPER_NULL(pLeftCol, leftidx)) {
+      colDataSetNULL(pOutputCol, i);
+      continue;  // TODO set null or ignore
+    }
+    int64_t leftRes = 0;
+    int64_t rightRes = 0;
+    SCL_ERR_JRET(getVectorBigintValueFnLeft(LEFT_COL, leftidx, &leftRes));
+    SCL_ERR_JRET(getVectorBigintValueFnRight(RIGHT_COL, rightidx, &rightRes));
+    *output = leftRes & rightRes;
   }
 
 _return:
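The refactor above collapses three branches (equal row counts, constant left, constant right) into one loop by computing `leftidx`/`rightidx` per row: an operand with a single row is broadcast by always reading index 0. A minimal sketch of that indexing pattern on plain arrays (no NULL bitmap, hypothetical signature instead of `SColumnInfoData`):

```c
#include <assert.h>
#include <stdint.h>

/* Broadcast-aware bitwise AND over two int64_t columns: if an operand has
 * only one row, that row is reused for every output row, mirroring the
 * leftidx/rightidx computation in vectorBitAnd. */
static void bit_and_broadcast(const int64_t *l, int32_t lRows,
                              const int64_t *r, int32_t rRows, int64_t *out) {
  int32_t numOfRows = (lRows > rRows) ? lRows : rRows;  /* TMAX */
  for (int32_t i = 0; i < numOfRows; ++i) {
    int32_t leftidx = (lRows == 1) ? 0 : i;
    int32_t rightidx = (rRows == 1) ? 0 : i;
    out[i] = l[leftidx] & r[rightidx];
  }
}
```

The same index trick drives the refactored remainder and bit-or loops; only the per-row operation differs.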
@@ -1810,33 +1712,6 @@ _return:
   SCL_RET(code);
 }
 
-static int32_t vectorBitOrHelper(SColumnInfoData *pLeftCol, SColumnInfoData *pRightCol, SColumnInfoData *pOutputCol,
-                                 int32_t numOfRows, int32_t step, int32_t i) {
-  _getBigintValue_fn_t getVectorBigintValueFnLeft;
-  _getBigintValue_fn_t getVectorBigintValueFnRight;
-  SCL_ERR_RET(getVectorBigintValueFn(pLeftCol->info.type, &getVectorBigintValueFnLeft));
-  SCL_ERR_RET(getVectorBigintValueFn(pRightCol->info.type, &getVectorBigintValueFnRight));
-
-  int64_t *output = (int64_t *)pOutputCol->pData;
-
-  if (IS_HELPER_NULL(pRightCol, 0)) {  // Set pLeft->numOfRows NULL value
-    colDataSetNNULL(pOutputCol, 0, numOfRows);
-  } else {
-    int64_t rx = 0;
-    SCL_ERR_RET(getVectorBigintValueFnRight(RIGHT_COL, 0, &rx));
-    for (; i >= 0 && i < numOfRows; i += step, output += 1) {
-      if (IS_HELPER_NULL(pLeftCol, i)) {
-        colDataSetNULL(pOutputCol, i);
-        continue;  // TODO set null or ignore
-      }
-      int64_t lx = 0;
-      SCL_ERR_RET(getVectorBigintValueFnLeft(LEFT_COL, i, &lx));
-      *output = lx | rx;
-    }
-  }
-  SCL_RET(TSDB_CODE_SUCCESS);
-}
-
 int32_t vectorBitOr(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam *pOut, int32_t _ord) {
   SColumnInfoData *pOutputCol = pOut->columnData;
   pOut->numOfRows = TMAX(pLeft->numOfRows, pRight->numOfRows);
@@ -1857,22 +1732,20 @@ int32_t vectorBitOr(SScalarParam *pLeft, SScalarParam *pRight, SScalarParam *pOu
   SCL_ERR_JRET(getVectorBigintValueFn(pRightCol->info.type, &getVectorBigintValueFnRight));
 
   int64_t *output = (int64_t *)pOutputCol->pData;
-  if (pLeft->numOfRows == pRight->numOfRows) {
-    for (; i < pRight->numOfRows && i >= 0; i += step, output += 1) {
-      if (IS_NULL) {
-        colDataSetNULL(pOutputCol, i);
-        continue;  // TODO set null or ignore
-      }
-      int64_t leftRes = 0;
-      int64_t rightRes = 0;
-      SCL_ERR_JRET(getVectorBigintValueFnLeft(LEFT_COL, i, &leftRes));
-      SCL_ERR_JRET(getVectorBigintValueFnRight(RIGHT_COL, i, &rightRes));
-      *output = leftRes | rightRes;
-    }
-  } else if (pLeft->numOfRows == 1) {
-    SCL_ERR_JRET(vectorBitOrHelper(pRightCol, pLeftCol, pOutputCol, pRight->numOfRows, step, i));
-  } else if (pRight->numOfRows == 1) {
-    SCL_ERR_JRET(vectorBitOrHelper(pLeftCol, pRightCol, pOutputCol, pLeft->numOfRows, step, i));
+  int32_t numOfRows = TMAX(pLeft->numOfRows, pRight->numOfRows);
+  for (; i < numOfRows && i >= 0; i += step, output += 1) {
+    int32_t leftidx = pLeft->numOfRows == 1 ? 0 : i;
+    int32_t rightidx = pRight->numOfRows == 1 ? 0 : i;
+    if (IS_HELPER_NULL(pRightCol, rightidx) || IS_HELPER_NULL(pLeftCol, leftidx)) {
+      colDataSetNULL(pOutputCol, i);
+      continue;  // TODO set null or ignore
+    }
+    int64_t leftRes = 0;
+    int64_t rightRes = 0;
+    SCL_ERR_JRET(getVectorBigintValueFnLeft(LEFT_COL, leftidx, &leftRes));
+    SCL_ERR_JRET(getVectorBigintValueFnRight(RIGHT_COL, rightidx, &rightRes));
+    *output = leftRes | rightRes;
   }
 
 _return:
@@ -391,6 +391,26 @@ TEST(constantTest, bigint_add_bigint) {
   nodesDestroyNode(res);
 }
 
+TEST(constantTest, ubigint_add_ubigint) {
+  SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
+  int32_t code = TSDB_CODE_SUCCESS;
+  code = scltMakeValueNode(&pLeft, TSDB_DATA_TYPE_UBIGINT, &scltLeftV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeValueNode(&pRight, TSDB_DATA_TYPE_UBIGINT, &scltRightV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeOpNode(&opNode, OP_TYPE_ADD, TSDB_DATA_TYPE_UBIGINT, pLeft, pRight);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+
+  code = scalarCalculateConstants(opNode, &res);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  ASSERT_TRUE(res);
+  ASSERT_EQ(nodeType(res), QUERY_NODE_VALUE);
+  SValueNode *v = (SValueNode *)res;
+  ASSERT_EQ(v->node.resType.type, TSDB_DATA_TYPE_UBIGINT);
+  ASSERT_FLOAT_EQ(v->datum.d, (scltLeftV + scltRightV));
+  nodesDestroyNode(res);
+}
+
 TEST(constantTest, double_sub_bigint) {
   SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
   int32_t code = TSDB_CODE_SUCCESS;
@@ -431,6 +451,66 @@ TEST(constantTest, tinyint_and_smallint) {
   nodesDestroyNode(res);
 }
 
+TEST(constantTest, utinyint_and_usmallint) {
+  SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
+  int32_t code = TSDB_CODE_SUCCESS;
+  code = scltMakeValueNode(&pLeft, TSDB_DATA_TYPE_UTINYINT, &scltLeftV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeValueNode(&pRight, TSDB_DATA_TYPE_USMALLINT, &scltRightV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeOpNode(&opNode, OP_TYPE_BIT_AND, TSDB_DATA_TYPE_BIGINT, pLeft, pRight);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+
+  code = scalarCalculateConstants(opNode, &res);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  ASSERT_TRUE(res);
+  ASSERT_EQ(nodeType(res), QUERY_NODE_VALUE);
+  SValueNode *v = (SValueNode *)res;
+  ASSERT_EQ(v->node.resType.type, TSDB_DATA_TYPE_BIGINT);
+  ASSERT_EQ(v->datum.i, (int64_t)scltLeftV & (int64_t)scltRightV);
+  nodesDestroyNode(res);
+}
+
+TEST(constantTest, uint_and_usmallint) {
+  SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
+  int32_t code = TSDB_CODE_SUCCESS;
+  code = scltMakeValueNode(&pLeft, TSDB_DATA_TYPE_UINT, &scltLeftV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeValueNode(&pRight, TSDB_DATA_TYPE_USMALLINT, &scltRightV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeOpNode(&opNode, OP_TYPE_BIT_AND, TSDB_DATA_TYPE_BIGINT, pLeft, pRight);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+
+  code = scalarCalculateConstants(opNode, &res);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  ASSERT_TRUE(res);
+  ASSERT_EQ(nodeType(res), QUERY_NODE_VALUE);
+  SValueNode *v = (SValueNode *)res;
+  ASSERT_EQ(v->node.resType.type, TSDB_DATA_TYPE_BIGINT);
+  ASSERT_EQ(v->datum.i, (int64_t)scltLeftV & (int64_t)scltRightV);
+  nodesDestroyNode(res);
+}
+
+TEST(constantTest, ubigint_and_uint) {
+  SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
+  int32_t code = TSDB_CODE_SUCCESS;
+  code = scltMakeValueNode(&pLeft, TSDB_DATA_TYPE_UBIGINT, &scltLeftV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeValueNode(&pRight, TSDB_DATA_TYPE_UINT, &scltRightV);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeOpNode(&opNode, OP_TYPE_BIT_AND, TSDB_DATA_TYPE_BIGINT, pLeft, pRight);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+
+  code = scalarCalculateConstants(opNode, &res);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  ASSERT_TRUE(res);
+  ASSERT_EQ(nodeType(res), QUERY_NODE_VALUE);
+  SValueNode *v = (SValueNode *)res;
+  ASSERT_EQ(v->node.resType.type, TSDB_DATA_TYPE_BIGINT);
+  ASSERT_EQ(v->datum.i, (int64_t)scltLeftV & (int64_t)scltRightV);
+  nodesDestroyNode(res);
+}
+
 TEST(constantTest, bigint_or_double) {
   SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
   int32_t code = TSDB_CODE_SUCCESS;
@@ -494,6 +574,53 @@ TEST(constantTest, int_greater_double) {
   nodesDestroyNode(res);
 }
 
+TEST(constantTest, binary_greater_equal_varbinary) {
+  SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
+  char    binaryStr[64] = {0};
+  int32_t code = TSDB_CODE_SUCCESS;
+  (void)sprintf(&binaryStr[2], "%d", scltRightV);
+  varDataSetLen(binaryStr, strlen(&binaryStr[2]));
+  code = scltMakeValueNode(&pLeft, TSDB_DATA_TYPE_VARBINARY, binaryStr);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeValueNode(&pRight, TSDB_DATA_TYPE_BINARY, binaryStr);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeOpNode(&opNode, OP_TYPE_GREATER_THAN, TSDB_DATA_TYPE_BOOL, pLeft, pRight);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+
+  code = scalarCalculateConstants(opNode, &res);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  ASSERT_TRUE(res);
+  ASSERT_EQ(nodeType(res), QUERY_NODE_VALUE);
+  SValueNode *v = (SValueNode *)res;
+  ASSERT_EQ(v->node.resType.type, TSDB_DATA_TYPE_BOOL);
+  ASSERT_EQ(v->datum.b, scltLeftV < scltRightVd);
+  nodesDestroyNode(res);
+}
+
+TEST(constantTest, binary_equal_geo) {
+  SNode  *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
+  char    geoRawStr[64] = "POLYGON((30 10, 40 40, 20 40, 10 20, 30 10))";
+  char    geoStr[64] = {0};
+  int32_t code = TSDB_CODE_SUCCESS;
+  (void)sprintf(&geoStr[2], "%s", geoRawStr);
+  varDataSetLen(geoStr, strlen(&geoStr[2]));
+  code = scltMakeValueNode(&pLeft, TSDB_DATA_TYPE_GEOMETRY, geoStr);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeValueNode(&pRight, TSDB_DATA_TYPE_BINARY, geoStr);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  code = scltMakeOpNode(&opNode, OP_TYPE_EQUAL, TSDB_DATA_TYPE_BOOL, pLeft, pRight);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+
+  code = scalarCalculateConstants(opNode, &res);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+  ASSERT_TRUE(res);
+  ASSERT_EQ(nodeType(res), QUERY_NODE_VALUE);
+  SValueNode *v = (SValueNode *)res;
+  ASSERT_EQ(v->node.resType.type, TSDB_DATA_TYPE_BOOL);
+  ASSERT_EQ(v->datum.b, scltLeftV < scltRightVd);
+  nodesDestroyNode(res);
+}
+
 TEST(constantTest, int_greater_equal_binary) {
   SNode *pLeft = NULL, *pRight = NULL, *opNode = NULL, *res = NULL;
   char   binaryStr[64] = {0};
@@ -531,8 +531,8 @@ int32_t schHandleNotifyCallback(void *param, SDataBuf *pMsg, int32_t code) {
   qDebug("QID:0x%" PRIx64 ",SID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 " task notify rsp received, code:0x%x",
          pParam->queryId, pParam->seriousId, pParam->clientId, pParam->taskId, code);
   if (pMsg) {
-    taosMemoryFree(pMsg->pData);
-    taosMemoryFree(pMsg->pEpSet);
+    taosMemoryFreeClear(pMsg->pData);
+    taosMemoryFreeClear(pMsg->pEpSet);
   }
   return TSDB_CODE_SUCCESS;
 }
@ -545,8 +545,8 @@ int32_t schHandleLinkBrokenCallback(void *param, SDataBuf *pMsg, int32_t code) {
|
||||||
qDebug("handle %p is broken", pMsg->handle);
|
qDebug("handle %p is broken", pMsg->handle);
|
||||||
|
|
||||||
if (head->isHbParam) {
|
if (head->isHbParam) {
|
||||||
taosMemoryFree(pMsg->pData);
|
taosMemoryFreeClear(pMsg->pData);
|
||||||
taosMemoryFree(pMsg->pEpSet);
|
taosMemoryFreeClear(pMsg->pEpSet);
|
||||||
|
|
||||||
SSchHbCallbackParam *hbParam = (SSchHbCallbackParam *)param;
|
SSchHbCallbackParam *hbParam = (SSchHbCallbackParam *)param;
|
||||||
SSchTrans trans = {.pTrans = hbParam->pTrans, .pHandle = NULL, .pHandleId = 0};
|
SSchTrans trans = {.pTrans = hbParam->pTrans, .pHandle = NULL, .pHandleId = 0};
|
||||||
|
@ -1293,6 +1293,7 @@ int32_t schBuildAndSendMsg(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr,
|
||||||
}
|
}
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
/*
|
||||||
case TDMT_SCH_QUERY_HEARTBEAT: {
|
case TDMT_SCH_QUERY_HEARTBEAT: {
|
||||||
SCH_ERR_RET(schMakeHbRpcCtx(pJob, pTask, &rpcCtx));
|
SCH_ERR_RET(schMakeHbRpcCtx(pJob, pTask, &rpcCtx));
|
||||||
|
|
||||||
|
@ -1320,6 +1321,7 @@ int32_t schBuildAndSendMsg(SSchJob *pJob, SSchTask *pTask, SQueryNodeAddr *addr,
|
||||||
persistHandle = true;
|
persistHandle = true;
|
||||||
break;
|
break;
|
||||||
}
|
}
|
||||||
|
*/
|
||||||
case TDMT_SCH_TASK_NOTIFY: {
|
case TDMT_SCH_TASK_NOTIFY: {
|
||||||
ETaskNotifyType* pType = param;
|
ETaskNotifyType* pType = param;
|
||||||
STaskNotifyReq qMsg;
|
STaskNotifyReq qMsg;
|
||||||
|
|
|
@ -189,7 +189,6 @@ int32_t schProcessOnTaskFailure(SSchJob *pJob, SSchTask *pTask, int32_t errCode)
|
||||||
}
|
}
|
||||||
|
|
||||||
pTask->failedExecId = pTask->execId;
|
pTask->failedExecId = pTask->execId;
|
||||||
pTask->failedSeriousId = pTask->seriousId;
|
|
||||||
|
|
||||||
int8_t jobStatus = 0;
|
int8_t jobStatus = 0;
|
||||||
if (schJobNeedToStop(pJob, &jobStatus)) {
|
if (schJobNeedToStop(pJob, &jobStatus)) {
|
||||||
|
@ -438,7 +437,7 @@ void schResetTaskForRetry(SSchJob *pJob, SSchTask *pTask) {
|
||||||
pTask->waitRetry = true;
|
pTask->waitRetry = true;
|
||||||
|
|
||||||
if (pTask->delayTimer) {
|
if (pTask->delayTimer) {
|
||||||
taosTmrStop(pTask->delayTimer);
|
UNUSED(taosTmrStop(pTask->delayTimer));
|
||||||
}
|
}
|
||||||
|
|
||||||
schDropTaskOnExecNode(pJob, pTask);
|
schDropTaskOnExecNode(pJob, pTask);
|
||||||
|
@ -452,6 +451,8 @@ void schResetTaskForRetry(SSchJob *pJob, SSchTask *pTask) {
|
||||||
TAOS_MEMSET(&pTask->succeedAddr, 0, sizeof(pTask->succeedAddr));
|
TAOS_MEMSET(&pTask->succeedAddr, 0, sizeof(pTask->succeedAddr));
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#if 0
|
||||||
|
|
||||||
int32_t schDoTaskRedirect(SSchJob *pJob, SSchTask *pTask, SDataBuf *pData, int32_t rspCode) {
|
int32_t schDoTaskRedirect(SSchJob *pJob, SSchTask *pTask, SDataBuf *pData, int32_t rspCode) {
|
||||||
int32_t code = 0;
|
int32_t code = 0;
|
||||||
|
|
||||||
|
@ -593,6 +594,7 @@ _return:
|
||||||
|
|
||||||
SCH_RET(schProcessOnTaskFailure(pJob, pTask, code));
|
SCH_RET(schProcessOnTaskFailure(pJob, pTask, code));
|
||||||
}
|
}
|
||||||
|
#endif
|
||||||
|
|
||||||
int32_t schPushTaskToExecList(SSchJob *pJob, SSchTask *pTask) {
|
int32_t schPushTaskToExecList(SSchJob *pJob, SSchTask *pTask) {
|
||||||
int32_t code = taosHashPut(pJob->execTasks, &pTask->taskId, sizeof(pTask->taskId), &pTask, POINTER_BYTES);
|
int32_t code = taosHashPut(pJob->execTasks, &pTask->taskId, sizeof(pTask->taskId), &pTask, POINTER_BYTES);
|
||||||
|
@ -759,7 +761,7 @@ int32_t schHandleTaskRetry(SSchJob *pJob, SSchTask *pTask) {
|
||||||
(void)atomic_sub_fetch_32(&pTask->level->taskLaunchedNum, 1);
|
(void)atomic_sub_fetch_32(&pTask->level->taskLaunchedNum, 1);
|
||||||
|
|
||||||
if (pTask->delayTimer) {
|
if (pTask->delayTimer) {
|
||||||
taosTmrStop(pTask->delayTimer);
|
UNUSED(taosTmrStop(pTask->delayTimer));
|
||||||
}
|
}
|
||||||
|
|
||||||
(void)schRemoveTaskFromExecList(pJob, pTask); // ignore error
|
(void)schRemoveTaskFromExecList(pJob, pTask); // ignore error
|
||||||
|
@ -869,6 +871,7 @@ int32_t schSetTaskCandidateAddrs(SSchJob *pJob, SSchTask *pTask) {
|
||||||
return TSDB_CODE_SUCCESS;
|
return TSDB_CODE_SUCCESS;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#if 0
|
||||||
int32_t schUpdateTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask, SEpSet *pEpSet) {
|
int32_t schUpdateTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask, SEpSet *pEpSet) {
|
||||||
int32_t code = TSDB_CODE_SUCCESS;
|
int32_t code = TSDB_CODE_SUCCESS;
|
||||||
if (NULL == pTask->candidateAddrs || 1 != taosArrayGetSize(pTask->candidateAddrs)) {
|
if (NULL == pTask->candidateAddrs || 1 != taosArrayGetSize(pTask->candidateAddrs)) {
|
||||||
|
@ -900,6 +903,7 @@ _return:
|
||||||
|
|
||||||
return code;
|
return code;
|
||||||
}
|
}
|
||||||
|
#endif
|
||||||
|
|
||||||
int32_t schSwitchTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask) {
|
int32_t schSwitchTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask) {
|
||||||
int32_t candidateNum = taosArrayGetSize(pTask->candidateAddrs);
|
int32_t candidateNum = taosArrayGetSize(pTask->candidateAddrs);
|
||||||
|
@ -1376,6 +1380,7 @@ int32_t schLaunchLevelTasks(SSchJob *pJob, SSchLevel *level) {
|
||||||
|
|
||||||
for (int32_t i = 0; i < level->taskNum; ++i) {
|
for (int32_t i = 0; i < level->taskNum; ++i) {
|
||||||
SSchTask *pTask = taosArrayGet(level->subTasks, i);
|
SSchTask *pTask = taosArrayGet(level->subTasks, i);
|
||||||
|
pTask->failedSeriousId = pJob->seriousId - 1;
|
||||||
pTask->seriousId = pJob->seriousId;
|
pTask->seriousId = pJob->seriousId;
|
||||||
|
|
||||||
SCH_TASK_DLOG("task seriousId set to 0x%" PRIx64, pTask->seriousId);
|
SCH_TASK_DLOG("task seriousId set to 0x%" PRIx64, pTask->seriousId);
|
||||||
|
|
|
@@ -57,6 +57,9 @@ namespace {
 extern "C" int32_t schHandleResponseMsg(SSchJob *pJob, SSchTask *pTask, uint64_t sId, int32_t execId, SDataBuf *pMsg,
                                         int32_t rspCode);
 extern "C" int32_t schHandleCallback(void *param, const SDataBuf *pMsg, int32_t rspCode);
+extern "C" int32_t schHandleNotifyCallback(void *param, SDataBuf *pMsg, int32_t code);
+extern "C" int32_t schHandleLinkBrokenCallback(void *param, SDataBuf *pMsg, int32_t code);
+extern "C" int32_t schRescheduleTask(SSchJob *pJob, SSchTask *pTask);
 
 int64_t insertJobRefId = 0;
 int64_t queryJobRefId = 0;
@@ -316,7 +319,7 @@ void schtBuildQueryFlowCtrlDag(SQueryPlan *dag) {
 
     scanPlan->execNode.nodeId = 1 + i;
    scanPlan->execNode.epSet.inUse = 0;
-    scanPlan->execNodeStat.tableNum = taosRand() % 30;
+    scanPlan->execNodeStat.tableNum = taosRand() % 100;
     addEpIntoEpSet(&scanPlan->execNode.epSet, "ep0", 6030);
     addEpIntoEpSet(&scanPlan->execNode.epSet, "ep1", 6030);
     addEpIntoEpSet(&scanPlan->execNode.epSet, "ep2", 6030);
@@ -982,8 +985,159 @@ TEST(queryTest, normalCase) {
   schedulerFreeJob(&job, 0);
 
   (void)taosThreadJoin(thread1, NULL);
 
+  schMgmt.jobRef = -1;
 }
 
+TEST(queryTest, rescheduleCase) {
+  void *mockPointer = (void *)0x1;
+  char *clusterId = "cluster1";
+  char *dbname = "1.db1";
+  char *tablename = "table1";
+  SVgroupInfo vgInfo = {0};
+  int64_t job = 0;
+  SQueryPlan *dag = NULL;
+  int32_t code = nodesMakeNode(QUERY_NODE_PHYSICAL_PLAN, (SNode**)&dag);
+  ASSERT_EQ(code, TSDB_CODE_SUCCESS);
+
+  SArray *qnodeList = taosArrayInit(1, sizeof(SQueryNodeLoad));
+
+  SQueryNodeLoad load = {0};
+  load.addr.epSet.numOfEps = 1;
+  TAOS_STRCPY(load.addr.epSet.eps[0].fqdn, "qnode0.ep");
+  load.addr.epSet.eps[0].port = 6031;
+  assert(taosArrayPush(qnodeList, &load) != NULL);
+
+  TAOS_STRCPY(load.addr.epSet.eps[0].fqdn, "qnode1.ep");
+  assert(taosArrayPush(qnodeList, &load) != NULL);
+
+  code = schedulerInit();
+  ASSERT_EQ(code, 0);
+
+  schtBuildQueryDag(dag);
+
+  schtSetPlanToString();
+  schtSetExecNode();
+  schtSetAsyncSendMsgToServer();
+
+  int32_t queryDone = 0;
+
+  SRequestConnInfo conn = {0};
+  conn.pTrans = mockPointer;
+  SSchedulerReq req = {0};
+  req.pConn = &conn;
+  req.pNodeList = qnodeList;
+  req.pDag = dag;
+  req.sql = "select * from tb";
+  req.execFp = schtQueryCb;
+  req.cbParam = &queryDone;
+
+  code = schedulerExecJob(&req, &job);
+  ASSERT_EQ(code, 0);
+
+  SSchJob *pJob = NULL;
+  code = schAcquireJob(job, &pJob);
+  ASSERT_EQ(code, 0);
+
+  schedulerEnableReSchedule(true);
+
+  void *pIter = taosHashIterate(pJob->execTasks, NULL);
+  while (pIter) {
+    SSchTask *task = *(SSchTask **)pIter;
+    task->timeoutUsec = -1;
+
+    code = schRescheduleTask(pJob, task);
+    ASSERT_EQ(code, 0);
+
+    task->timeoutUsec = SCH_DEFAULT_TASK_TIMEOUT_USEC;
+    pIter = taosHashIterate(pJob->execTasks, pIter);
+  }
+
+  pIter = taosHashIterate(pJob->execTasks, NULL);
+  while (pIter) {
+    SSchTask *task = *(SSchTask **)pIter;
+
+    SDataBuf msg = {0};
+    void *rmsg = NULL;
+    assert(0 == schtBuildQueryRspMsg(&msg.len, &rmsg));
+    msg.msgType = TDMT_SCH_QUERY_RSP;
+    msg.pData = rmsg;
+
+    code = schHandleResponseMsg(pJob, task, task->seriousId, task->execId, &msg, 0);
+
+    ASSERT_EQ(code, 0);
+    pIter = taosHashIterate(pJob->execTasks, pIter);
+  }
+
+
+  pIter = taosHashIterate(pJob->execTasks, NULL);
+  while (pIter) {
+    SSchTask *task = *(SSchTask **)pIter;
+    task->timeoutUsec = -1;
+
+    code = schRescheduleTask(pJob, task);
+    ASSERT_EQ(code, 0);
+
+    task->timeoutUsec = SCH_DEFAULT_TASK_TIMEOUT_USEC;
+    pIter = taosHashIterate(pJob->execTasks, pIter);
+  }
+
+  pIter = taosHashIterate(pJob->execTasks, NULL);
+  while (pIter) {
+    SSchTask *task = *(SSchTask **)pIter;
+    if (JOB_TASK_STATUS_EXEC == task->status) {
+      SDataBuf msg = {0};
+      void *rmsg = NULL;
+      assert(0 == schtBuildQueryRspMsg(&msg.len, &rmsg));
+      msg.msgType = TDMT_SCH_QUERY_RSP;
+      msg.pData = rmsg;
+
+      code = schHandleResponseMsg(pJob, task, task->seriousId, task->execId, &msg, 0);
+
+      ASSERT_EQ(code, 0);
+    }
+
+    pIter = taosHashIterate(pJob->execTasks, pIter);
+  }
+
+  while (true) {
+    if (queryDone) {
+      break;
+    }
+
+    taosUsleep(10000);
+  }
+
+  TdThreadAttr thattr;
+  assert(0 == taosThreadAttrInit(&thattr));
+
+  TdThread thread1;
+  assert(0 == taosThreadCreate(&(thread1), &thattr, schtCreateFetchRspThread, &job));
+
+  void *data = NULL;
+  req.syncReq = true;
+  req.pFetchRes = &data;
+
+  code = schedulerFetchRows(job, &req);
+  ASSERT_EQ(code, 0);
+
+  SRetrieveTableRsp *pRsp = (SRetrieveTableRsp *)data;
+  ASSERT_EQ(pRsp->completed, 1);
+  ASSERT_EQ(pRsp->numOfRows, 10);
+  taosMemoryFreeClear(data);
+
+  (void)schReleaseJob(job);
+
+  schedulerDestroy();
+
+  schedulerFreeJob(&job, 0);
+
+  (void)taosThreadJoin(thread1, NULL);
+
+  schMgmt.jobRef = -1;
+}
+
+
 TEST(queryTest, readyFirstCase) {
   void *mockPointer = (void *)0x1;
   char *clusterId = "cluster1";
@@ -1097,6 +1251,7 @@ TEST(queryTest, readyFirstCase) {
   schedulerFreeJob(&job, 0);
 
   (void)taosThreadJoin(thread1, NULL);
+  schMgmt.jobRef = -1;
 }
 
 TEST(queryTest, flowCtrlCase) {
@@ -1196,6 +1351,9 @@ TEST(queryTest, flowCtrlCase) {
   schedulerFreeJob(&job, 0);
 
   (void)taosThreadJoin(thread1, NULL);
+  schMgmt.jobRef = -1;
+
+  cleanupTaskQueue();
 }
 
 TEST(insertTest, normalCase) {
@@ -1260,6 +1418,7 @@ TEST(insertTest, normalCase) {
   schedulerDestroy();
 
   (void)taosThreadJoin(thread1, NULL);
+  schMgmt.jobRef = -1;
 }
 
 TEST(multiThread, forceFree) {
@@ -1282,9 +1441,11 @@ TEST(multiThread, forceFree) {
 
   schtTestStop = true;
   // taosSsleep(3);
+
+  schMgmt.jobRef = -1;
 }
 
-TEST(otherTest, otherCase) {
+TEST(otherTest, function) {
   // excpet test
   (void)schReleaseJob(0);
   schFreeRpcCtx(NULL);
@@ -1293,6 +1454,39 @@ TEST(otherTest, otherCase) {
   ASSERT_EQ(schDumpEpSet(NULL, &ep), TSDB_CODE_SUCCESS);
   ASSERT_EQ(strcmp(schGetOpStr(SCH_OP_NULL), "NULL"), 0);
   ASSERT_EQ(strcmp(schGetOpStr((SCH_OP_TYPE)100), "UNKNOWN"), 0);
+
+  SSchTaskCallbackParam param = {0};
+  SDataBuf dataBuf = {0};
+  dataBuf.pData = taosMemoryMalloc(1);
+  dataBuf.pEpSet = (SEpSet*)taosMemoryMalloc(sizeof(*dataBuf.pEpSet));
+  ASSERT_EQ(schHandleNotifyCallback(&param, &dataBuf, TSDB_CODE_SUCCESS), TSDB_CODE_SUCCESS);
+
+  SSchCallbackParamHeader param2 = {0};
+  dataBuf.pData = taosMemoryMalloc(1);
+  dataBuf.pEpSet = (SEpSet*)taosMemoryMalloc(sizeof(*dataBuf.pEpSet));
+  schHandleLinkBrokenCallback(&param2, &dataBuf, TSDB_CODE_SUCCESS);
+  param2.isHbParam = true;
+  dataBuf.pData = taosMemoryMalloc(1);
+  dataBuf.pEpSet = (SEpSet*)taosMemoryMalloc(sizeof(*dataBuf.pEpSet));
+  schHandleLinkBrokenCallback(&param2, &dataBuf, TSDB_CODE_SUCCESS);
+
+  schMgmt.jobRef = -1;
+}
+
+void schtReset() {
+  insertJobRefId = 0;
+  queryJobRefId = 0;
+
+  schtJobDone = false;
+  schtMergeTemplateId = 0x4;
+  schtFetchTaskId = 0;
+  schtQueryId = 1;
+
+  schtTestStop = false;
+  schtTestDeadLoop = false;
+  schtTestMTRunSec = 1;
+  schtTestPrintNum = 1000;
+  schtStartFetch = 0;
 }
 
 int main(int argc, char **argv) {
@@ -1302,7 +1496,17 @@ int main(int argc, char **argv) {
   }
   taosSeedRand(taosGetTimestampSec());
   testing::InitGoogleTest(&argc, argv);
-  return RUN_ALL_TESTS();
+
+  int code = 0;
+  for (int32_t i = 0; i < 10; ++i) {
+    schtReset();
+    code = RUN_ALL_TESTS();
+    if (code) {
+      break;
+    }
+  }
+
+  return code;
 }
 
 #pragma GCC diagnostic pop
@@ -22,7 +22,7 @@ char tsSIMDEnable = 0;
 #endif
 
 int32_t tsDecompressIntImpl_Hw(const char *const input, const int32_t nelements, char *const output, const char type) {
-#ifdef __AVX2__
+#ifdef __AVX512F__
   int32_t word_length = getWordLength(type);
 
   // Selector value:           0    1   2   3   4   5   6   7   8  9  10  11 12  13  14  15
@@ -53,183 +53,79 @@ int32_t tsDecompressIntImpl_Hw(const char *const input, const int32_t nelements,
       int32_t gRemainder = (nelements - _pos);
       int32_t num = (gRemainder > elems) ? elems : gRemainder;
 
-      int32_t batch = 0;
-      int32_t remain = 0;
-      if (tsSIMDEnable && tsAVX512Supported && tsAVX512Enable) {
-#ifdef __AVX512F__
-        batch = num >> 3;
-        remain = num & 0x07;
-#endif
-      } else if (tsSIMDEnable && tsAVX2Supported) {
-#ifdef __AVX2__
-        batch = num >> 2;
-        remain = num & 0x03;
-#endif
-      }
+      int32_t batch = num >> 3;
+      int32_t remain = num & 0x07;
 
       if (selector == 0 || selector == 1) {
-        if (tsSIMDEnable && tsAVX512Supported && tsAVX512Enable) {
-#ifdef __AVX512F__
-          for (int32_t i = 0; i < batch; ++i) {
-            __m512i prev = _mm512_set1_epi64(prevValue);
-            _mm512_storeu_si512((__m512i *)&p[_pos], prev);
-            _pos += 8;  // handle 64bit x 8 = 512bit
-          }
-          for (int32_t i = 0; i < remain; ++i) {
-            p[_pos++] = prevValue;
-          }
-#endif
-        } else if (tsSIMDEnable && tsAVX2Supported) {
-          for (int32_t i = 0; i < batch; ++i) {
-            __m256i prev = _mm256_set1_epi64x(prevValue);
-            _mm256_storeu_si256((__m256i *)&p[_pos], prev);
-            _pos += 4;
-          }
-
-          for (int32_t i = 0; i < remain; ++i) {
-            p[_pos++] = prevValue;
-          }
-
-        } else {  // alternative implementation without SIMD instructions.
-          for (int32_t i = 0; i < elems && count < nelements; i++, count++) {
-            p[_pos++] = prevValue;
-            v += bit;
-          }
-        }
+        for (int32_t i = 0; i < batch; ++i) {
+          __m512i prev = _mm512_set1_epi64(prevValue);
+          _mm512_storeu_si512((__m512i *)&p[_pos], prev);
+          _pos += 8;  // handle 64bit x 8 = 512bit
+        }
+        for (int32_t i = 0; i < remain; ++i) {
+          p[_pos++] = prevValue;
+        }
       } else {
-        if (tsSIMDEnable && tsAVX512Supported && tsAVX512Enable) {
-#ifdef __AVX512F__
-          __m512i sum_mask1 = _mm512_set_epi64(6, 6, 4, 4, 2, 2, 0, 0);
-          __m512i sum_mask2 = _mm512_set_epi64(5, 5, 5, 5, 1, 1, 1, 1);
-          __m512i sum_mask3 = _mm512_set_epi64(3, 3, 3, 3, 3, 3, 3, 3);
-          __m512i base = _mm512_set1_epi64(w);
-          __m512i maskVal = _mm512_set1_epi64(mask);
-          __m512i shiftBits = _mm512_set_epi64(bit * 7 + 4, bit * 6 + 4, bit * 5 + 4, bit * 4 + 4, bit * 3 + 4,
-                                               bit * 2 + 4, bit + 4, 4);
-          __m512i inc = _mm512_set1_epi64(bit << 3);
+        __m512i sum_mask1 = _mm512_set_epi64(6, 6, 4, 4, 2, 2, 0, 0);
+        __m512i sum_mask2 = _mm512_set_epi64(5, 5, 5, 5, 1, 1, 1, 1);
+        __m512i sum_mask3 = _mm512_set_epi64(3, 3, 3, 3, 3, 3, 3, 3);
+        __m512i base = _mm512_set1_epi64(w);
+        __m512i maskVal = _mm512_set1_epi64(mask);
+        __m512i shiftBits = _mm512_set_epi64(bit * 7 + 4, bit * 6 + 4, bit * 5 + 4, bit * 4 + 4, bit * 3 + 4,
+                                             bit * 2 + 4, bit + 4, 4);
+        __m512i inc = _mm512_set1_epi64(bit << 3);
 
         for (int32_t i = 0; i < batch; ++i) {
           __m512i after = _mm512_srlv_epi64(base, shiftBits);
           __m512i zigzagVal = _mm512_and_si512(after, maskVal);
 
           // ZIGZAG_DECODE(T, v) (((v) >> 1) ^ -((T)((v)&1)))
           __m512i signmask = _mm512_and_si512(_mm512_set1_epi64(1), zigzagVal);
           signmask = _mm512_sub_epi64(_mm512_setzero_si512(), signmask);
           __m512i delta = _mm512_xor_si512(_mm512_srli_epi64(zigzagVal, 1), signmask);
 
           // calculate the cumulative sum (prefix sum) for each number
           // decode[0] = prevValue + final[0]
           // decode[1] = decode[0] + final[1] -----> prevValue + final[0] + final[1]
           // decode[2] = decode[1] + final[2] -----> prevValue + final[0] + final[1] + final[2]
           // decode[3] = decode[2] + final[3] -----> prevValue + final[0] + final[1] + final[2] + final[3]
 
           // 7     6     5     4     3     2     1
           // 0     D7    D6    D5    D4    D3    D2    D1
           // D0    D6    0     D4    0     D2    0     D0
           // 0     D7+D6 D6    D5+D4 D4    D3+D2 D2
           // D1+D0 D0    13    6     9     4     5     2
           // 1     0
           __m512i prev = _mm512_set1_epi64(prevValue);
           __m512i cum_sum = _mm512_add_epi64(delta, _mm512_maskz_permutexvar_epi64(0xaa, sum_mask1, delta));
           cum_sum = _mm512_add_epi64(cum_sum, _mm512_maskz_permutexvar_epi64(0xcc, sum_mask2, cum_sum));
           cum_sum = _mm512_add_epi64(cum_sum, _mm512_maskz_permutexvar_epi64(0xf0, sum_mask3, cum_sum));
 
           // 13    6     9     4     5     2     1
           // 0     D7,D6 D6    D5,D4 D4    D3,D2 D2
           // D1,D0 D0    +D5,D4 D5,D4, 0   0     D1,D0 D1,D0
           // 0     0     D7~D4 D6~D4 D5~D4 D4    D3~D0 D2~D0
           // D1~D0 D0    22    15    9     4     6     3
           // 1     0
           //
           // D3~D0 D3~D0 D3~D0 D3~D0 0     0     0
           // 0     28    21    15    10    6     3     1
           // 0
 
           cum_sum = _mm512_add_epi64(cum_sum, prev);
           _mm512_storeu_si512((__m512i *)&p[_pos], cum_sum);
 
           shiftBits = _mm512_add_epi64(shiftBits, inc);
           prevValue = p[_pos + 7];
           _pos += 8;
         }
         // handle the remain value
         for (int32_t i = 0; i < remain; i++) {
           zigzag_value = ((w >> (v + (batch * bit * 8))) & mask);
           prevValue += ZIGZAG_DECODE(int64_t, zigzag_value);
 
           p[_pos++] = prevValue;
           v += bit;
-        }
-#endif
-      } else if (tsSIMDEnable && tsAVX2Supported) {
-        __m256i base = _mm256_set1_epi64x(w);
-        __m256i maskVal = _mm256_set1_epi64x(mask);
-
-        __m256i shiftBits = _mm256_set_epi64x(bit * 3 + 4, bit * 2 + 4, bit + 4, 4);
-        __m256i inc = _mm256_set1_epi64x(bit << 2);
-
-        for (int32_t i = 0; i < batch; ++i) {
-          __m256i after = _mm256_srlv_epi64(base, shiftBits);
-          __m256i zigzagVal = _mm256_and_si256(after, maskVal);
-
-          // ZIGZAG_DECODE(T, v) (((v) >> 1) ^ -((T)((v)&1)))
-          __m256i signmask = _mm256_and_si256(_mm256_set1_epi64x(1), zigzagVal);
-          signmask = _mm256_sub_epi64(_mm256_setzero_si256(), signmask);
-
-          // get four zigzag values here
-          __m256i delta = _mm256_xor_si256(_mm256_srli_epi64(zigzagVal, 1), signmask);
-
-          // calculate the cumulative sum (prefix sum) for each number
-          // decode[0] = prevValue + final[0]
-          // decode[1] = decode[0] + final[1] -----> prevValue + final[0] + final[1]
-          // decode[2] = decode[1] + final[2] -----> prevValue + final[0] + final[1] + final[2]
-          // decode[3] = decode[2] + final[3] -----> prevValue + final[0] + final[1] + final[2] + final[3]
-
-          // 1, 2, 3, 4
-          //+ 0, 1, 0, 3
-          // 1, 3, 3, 7
-          // shift and add for the first round
-          __m128i prev = _mm_set1_epi64x(prevValue);
-          __m256i x = _mm256_slli_si256(delta, 8);
-
-          delta = _mm256_add_epi64(delta, x);
-          _mm256_storeu_si256((__m256i *)&p[_pos], delta);
-
-          // 1, 3, 3, 7
-          //+ 0, 0, 3, 3
-          // 1, 3, 6, 10
-          // shift and add operation for the second round
-          __m128i firstPart = _mm_loadu_si128((__m128i *)&p[_pos]);
-          __m128i secondItem = _mm_set1_epi64x(p[_pos + 1]);
-          __m128i secPart = _mm_add_epi64(_mm_loadu_si128((__m128i *)&p[_pos + 2]), secondItem);
-          firstPart = _mm_add_epi64(firstPart, prev);
-          secPart = _mm_add_epi64(secPart, prev);
-
-          // save it in the memory
-          _mm_storeu_si128((__m128i *)&p[_pos], firstPart);
-          _mm_storeu_si128((__m128i *)&p[_pos + 2], secPart);
-
-          shiftBits = _mm256_add_epi64(shiftBits, inc);
-          prevValue = p[_pos + 3];
-          _pos += 4;
-        }
-
-        // handle the remain value
-        for (int32_t i = 0; i < remain; i++) {
-          zigzag_value = ((w >> (v + (batch * bit * 4))) & mask);
-          prevValue += ZIGZAG_DECODE(int64_t, zigzag_value);
-
-          p[_pos++] = prevValue;
-          v += bit;
-        }
-      } else {  // alternative implementation without SIMD instructions.
-        for (int32_t i = 0; i < elems && count < nelements; i++, count++) {
-          zigzag_value = ((w >> v) & mask);
-          prevValue += ZIGZAG_DECODE(int64_t, zigzag_value);
-
-          p[_pos++] = prevValue;
-          v += bit;
-        }
-      }
         }
       }
     } break;
@@ -292,7 +188,7 @@ int32_t tsDecompressIntImpl_Hw(const char *const input, const int32_t nelements,
 
   return nelements * word_length;
 #else
-  uError("unable run %s without avx2 instructions", __func__);
+  uError("unable run %s without avx512 instructions", __func__);
   return -1;
 #endif
 }
@@ -823,6 +823,8 @@ bool tQueryAutoQWorkerTryRecycleWorker(SQueryAutoQWorkerPool *pPool, SQueryAutoQ
 int32_t tQueryAutoQWorkerInit(SQueryAutoQWorkerPool *pool) {
   int32_t code;
 
+  pool->exit = false;
+
   (void)taosThreadMutexInit(&pool->poolLock, NULL);
   (void)taosThreadMutexInit(&pool->backupLock, NULL);
   (void)taosThreadMutexInit(&pool->waitingAfterBlockLock, NULL);
@@ -138,6 +138,10 @@ add_test(
   COMMAND logTest
 )
 
+IF(COMPILER_SUPPORT_AVX2)
+  MESSAGE(STATUS "AVX2 instructions is ACTIVATED")
+  set_source_files_properties(decompressTest.cpp PROPERTIES COMPILE_FLAGS -mavx2)
+ENDIF()
 add_executable(decompressTest "decompressTest.cpp")
 target_link_libraries(decompressTest os util common gtest_main)
 add_test(
@@ -145,6 +149,16 @@ add_test(
   COMMAND decompressTest
 )
 
+
+IF($TD_LINUX)
+  add_executable(utilTests "utilTests.cpp")
+  target_link_libraries(utilTests os util common gtest_main)
+  add_test(
+    NAME utilTests
+    COMMAND utilTests
+  )
+ENDIF()
+
 if(${TD_LINUX})
   # terrorTest
   add_executable(terrorTest "terrorTest.cpp")
@ -6,6 +6,7 @@
|
||||||
|
|
||||||
#include "tarray.h"
|
#include "tarray.h"
|
||||||
#include "tcompare.h"
|
#include "tcompare.h"
|
||||||
|
#include "tdatablock.h"
|
||||||
|
|
||||||
namespace {
|
namespace {
|
||||||
} // namespace
|
} // namespace
|
||||||
|
@ -474,3 +475,67 @@ TEST(tsma, reverse_unit) {
  ASSERT_FALSE(tsmaIntervalCheck(12, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
  ASSERT_TRUE(tsmaIntervalCheck(3, 'n', 1, 'y', TSDB_TIME_PRECISION_NANO));
}

template <int16_t type, typename ValType, typename F>
void dataBlockNullTest(const F& setValFunc) {
  int32_t totalRows = 16;
  SColumnInfoData columnInfoData = createColumnInfoData(type, tDataTypes[type].bytes, 0);
  SColumnDataAgg  columnDataAgg = {.numOfNull = 0};

  auto checkNull = [totalRows, &columnInfoData, &columnDataAgg](uint32_t row, bool expected) {
    EXPECT_EQ(colDataIsNull_s(&columnInfoData, row), expected);
    EXPECT_EQ(colDataIsNull_t(&columnInfoData, row, IS_VAR_DATA_TYPE(columnInfoData.info.type)), expected);
    EXPECT_EQ(colDataIsNull(&columnInfoData, totalRows, row, NULL), expected);
    columnDataAgg.numOfNull = totalRows;
    EXPECT_EQ(colDataIsNull(&columnInfoData, totalRows, row, &columnDataAgg), columnInfoData.hasNull);
    columnDataAgg.numOfNull = 0;
    EXPECT_EQ(colDataIsNull(&columnInfoData, totalRows, row, &columnDataAgg), false);
  };

  columnInfoData.hasNull = false;
  checkNull(0, false);
  checkNull(1, false);
  checkNull(2, false);
  checkNull(totalRows - 2, false);
  checkNull(totalRows - 1, false);

  if (IS_VAR_DATA_TYPE(type)) {
    columnInfoData.varmeta.offset = (int32_t*)taosMemoryCalloc(totalRows, sizeof(int32_t));
  } else {
    columnInfoData.pData = (char*)taosMemoryCalloc(totalRows, tDataTypes[type].bytes);
    columnInfoData.nullbitmap = (char*)taosMemoryCalloc(((totalRows - 1) >> NBIT) + 1, 1);
    ValType val = 1;
    setValFunc(&columnInfoData, 1, &val);
    val = 2;
    setValFunc(&columnInfoData, 2, &val);
  }

  colDataSetNULL(&columnInfoData, 0);
  colDataSetNNULL(&columnInfoData, 3, totalRows - 3);
  checkNull(0, true);
  checkNull(1, false);
  checkNull(2, false);
  checkNull(totalRows - 2, true);
  checkNull(totalRows - 1, true);

  if (IS_VAR_DATA_TYPE(type)) {
    taosMemoryFreeClear(columnInfoData.varmeta.offset);
  } else {
    taosMemoryFreeClear(columnInfoData.pData);
    taosMemoryFreeClear(columnInfoData.nullbitmap);
    checkNull(0, false);
    checkNull(1, false);
    checkNull(2, false);
    checkNull(totalRows - 2, false);
    checkNull(totalRows - 1, false);
  }
}

TEST(utilTest, tdatablockTestNull) {
  dataBlockNullTest<TSDB_DATA_TYPE_TINYINT, int8_t>(colDataSetInt8);
  dataBlockNullTest<TSDB_DATA_TYPE_SMALLINT, int16_t>(colDataSetInt16);
  dataBlockNullTest<TSDB_DATA_TYPE_INT, int32_t>(colDataSetInt32);
  dataBlockNullTest<TSDB_DATA_TYPE_BIGINT, int64_t>(colDataSetInt64);
  dataBlockNullTest<TSDB_DATA_TYPE_FLOAT, float>(colDataSetFloat);
  dataBlockNullTest<TSDB_DATA_TYPE_DOUBLE, double>(colDataSetDouble);
  dataBlockNullTest<TSDB_DATA_TYPE_VARCHAR, int64_t>(colDataSetInt64);
}
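The fixed-width branch of `dataBlockNullTest` above sizes its null bitmap with `((totalRows - 1) >> NBIT) + 1` bytes and then marks rows null via `colDataSetNULL`/`colDataSetNNULL`. A minimal sketch of that bookkeeping, assuming `NBIT` is 3 (one byte tracks 8 rows) and MSB-first bit order — both assumptions, not taken from this diff:

```python
NBIT = 3  # assumed: 2**3 = 8 rows per bitmap byte

def bitmap_bytes(total_rows: int) -> int:
    # Same sizing formula as ((totalRows - 1) >> NBIT) + 1 in the test.
    return ((total_rows - 1) >> NBIT) + 1

def set_null(bitmap: bytearray, row: int) -> None:
    # Assumed MSB-first layout within each byte.
    bitmap[row >> NBIT] |= 1 << (7 - (row & 7))

def is_null(bitmap: bytearray, row: int) -> bool:
    return bool(bitmap[row >> NBIT] & (1 << (7 - (row & 7))))

rows = 16
bm = bytearray(bitmap_bytes(rows))
set_null(bm, 0)                 # analogous to colDataSetNULL(&col, 0)
for r in range(3, rows):
    set_null(bm, r)             # analogous to colDataSetNNULL(&col, 3, rows - 3)
assert is_null(bm, 0) and not is_null(bm, 1) and is_null(bm, rows - 1)
```

This mirrors the expectations the test checks: rows 0 and 3..15 null, rows 1 and 2 not null.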
@ -0,0 +1,56 @@

taos> select leastsquares(1, 1, 1)
 leastsquares(1, 1, 1) |
=================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(1.1 as float), 1, 1)
 leastsquares(cast(1.1 as float), 1, 1) |
=========================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(1.1 as double), 1, 1)
 leastsquares(cast(1.1 as double), 1, 1) |
==========================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(1 as tinyint), 1, 1)
 leastsquares(cast(1 as tinyint), 1, 1) |
=========================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(100 as smallint), 1, 1)
 leastsquares(cast(100 as smallint), 1, 1) |
============================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(100000 as int), 1, 1)
 leastsquares(cast(100000 as int), 1, 1) |
==========================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(10000000000 as bigint), 1, 1)
 leastsquares(cast(10000000000 as bigint), 1, 1) |
==================================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(1 as tinyint unsigned), 1, 1)
 leastsquares(cast(1 as tinyint unsigned), 1, 1) |
==================================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(100 as smallint unsigned), 1, 1)
 leastsquares(cast(100 as smallint unsigned), 1, 1) |
=====================================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(100000 as int unsigned), 1, 1)
 leastsquares(cast(100000 as int unsigned), 1, 1) |
===================================================
 {slop:-nan, intercept:-nan} |

taos> select leastsquares(cast(10000000000 as bigint unsigned), 1, 1)
 leastsquares(cast(10000000000 as bigint unsigned), 1, 1) |
===========================================================
 {slop:-nan, intercept:-nan} |
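Every LEASTSQUARES call in the expected output above fits a line through a single sample, which is why each reports `{slop:-nan, intercept:-nan}`: with n = 1 the usual least-squares denominator n·Σx² − (Σx)² is zero, so the slope computation divides 0 by 0. A small sketch of that degenerate case (the start/step arguments and names here are illustrative, not TDengine's internals):

```python
# One sample: x starts at start_val = 1; step_val never applies.
n = 1
xs = [1.0]
ys = [1.0]

# Denominator of the least-squares slope: n*sum(x^2) - (sum(x))^2.
denom = n * sum(x * x for x in xs) - sum(xs) ** 2

# The numerator n*sum(x*y) - sum(x)*sum(y) is also zero, so slope = 0/0 -> nan.
numer = n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)
assert denom == 0.0 and numer == 0.0
```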
@ -603,3 +603,58 @@ taos> select location, max(current) from ts_4893.meters group by location order
============================================
 beijing | 11.9989996 |

taos> select max(1)
 max(1) |
========================
 1 |

taos> select max(cast(1 as tinyint))
 max(cast(1 as tinyint)) |
==========================
 1 |

taos> select max(cast(100 as smallint))
 max(cast(100 as smallint)) |
=============================
 100 |

taos> select max(cast(100000 as int))
 max(cast(100000 as int)) |
===========================
 100000 |

taos> select max(cast(10000000000 as bigint))
 max(cast(10000000000 as bigint)) |
===================================
 10000000000 |

taos> select max(cast(1 as tinyint unsigned))
 max(cast(1 as tinyint unsigned)) |
===================================
 1 |

taos> select max(cast(100 as smallint unsigned))
 max(cast(100 as smallint unsigned)) |
======================================
 100 |

taos> select max(cast(100000 as int unsigned))
 max(cast(100000 as int unsigned)) |
====================================
 100000 |

taos> select max(cast(10000000000 as bigint unsigned))
 max(cast(10000000000 as bigint unsigned)) |
============================================
 10000000000 |

taos> select max(cast(1.1 as float))
 max(cast(1.1 as float)) |
==========================
 1.1000000e+00 |

taos> select max(cast(1.1 as double))
 max(cast(1.1 as double)) |
============================
 1.100000000000000 |
@ -603,3 +603,58 @@ taos> select location, min(id) from ts_4893.meters group by location order by lo
===================================
 beijing | 0 |

taos> select min(1)
 min(1) |
========================
 1 |

taos> select min(cast(1 as tinyint))
 min(cast(1 as tinyint)) |
==========================
 1 |

taos> select min(cast(100 as smallint))
 min(cast(100 as smallint)) |
=============================
 100 |

taos> select min(cast(100000 as int))
 min(cast(100000 as int)) |
===========================
 100000 |

taos> select min(cast(10000000000 as bigint))
 min(cast(10000000000 as bigint)) |
===================================
 10000000000 |

taos> select min(cast(1 as tinyint unsigned))
 min(cast(1 as tinyint unsigned)) |
===================================
 1 |

taos> select min(cast(100 as smallint unsigned))
 min(cast(100 as smallint unsigned)) |
======================================
 100 |

taos> select min(cast(100000 as int unsigned))
 min(cast(100000 as int unsigned)) |
====================================
 100000 |

taos> select min(cast(10000000000 as bigint unsigned))
 min(cast(10000000000 as bigint unsigned)) |
============================================
 10000000000 |

taos> select min(cast(1.1 as float))
 min(cast(1.1 as float)) |
==========================
 1.1000000e+00 |

taos> select min(cast(1.1 as double))
 min(cast(1.1 as double)) |
============================
 1.100000000000000 |
@ -308,3 +308,53 @@ taos> select round(log(current), 2) from ts_4893.meters limit 1
============================
 2.370000000000000 |

taos> select round(cast(1.0e+400 as float), 0);
 round(cast(1.0e+400 as float), 0) |
====================================
 NULL |

taos> select round(cast(1.0e+400 as double), 0);
 round(cast(1.0e+400 as double), 0) |
=====================================
 NULL |

taos> select round(cast(5 as tinyint), 1);
 round(cast(5 as tinyint), 1) |
===============================
 5 |

taos> select round(cast(50 as smallint), 1);
 round(cast(50 as smallint), 1) |
=================================
 50 |

taos> select round(cast(500 as int), 1);
 round(cast(500 as int), 1) |
=============================
 500 |

taos> select round(cast(50000 as bigint), 1);
 round(cast(50000 as bigint), 1) |
==================================
 50000 |

taos> select round(cast(5 as TINYINT UNSIGNED), 1);
 round(cast(5 as tinyint unsigned), 1) |
========================================
 5 |

taos> select round(cast(50 as smallint unsigned), 1);
 round(cast(50 as smallint unsigned), 1) |
==========================================
 50 |

taos> select round(cast(500 as int unsigned), 1);
 round(cast(500 as int unsigned), 1) |
======================================
 500 |

taos> select round(cast(50000 as bigint unsigned), 1)
 round(cast(50000 as bigint unsigned), 1) |
===========================================
 50000 |
@ -121,6 +121,106 @@ taos> select SIGN(id) + id from ts_4893.meters order by ts limit 5
 4.000000000000000 |
 5.000000000000000 |

taos> select sign(cast(1 as tinyint))
 sign(cast(1 as tinyint)) |
===========================
 1 |

taos> select sign(cast(1 as smallint))
 sign(cast(1 as smallint)) |
============================
 1 |

taos> select sign(cast(1 as int))
 sign(cast(1 as int)) |
=======================
 1 |

taos> select sign(cast(1 as bigint))
 sign(cast(1 as bigint)) |
==========================
 1 |

taos> select sign(cast(1 as tinyint unsigned))
 sign(cast(1 as tinyint unsigned)) |
====================================
 1 |

taos> select sign(cast(1 as smallint unsigned))
 sign(cast(1 as smallint unsigned)) |
=====================================
 1 |

taos> select sign(cast(1 as int unsigned))
 sign(cast(1 as int unsigned)) |
================================
 1 |

taos> select sign(cast(1 as bigint unsigned))
 sign(cast(1 as bigint unsigned)) |
===================================
 1 |

taos> select sign(cast(1 as float))
 sign(cast(1 as float)) |
=========================
 1.0000000e+00 |

taos> select sign(cast(1 as double))
 sign(cast(1 as double)) |
============================
 1.000000000000000 |

taos> select sign(cast(NULL as tinyint))
 sign(cast(null as tinyint)) |
==============================
 NULL |

taos> select sign(cast(NULL as smallint))
 sign(cast(null as smallint)) |
===============================
 NULL |

taos> select sign(cast(NULL as int))
 sign(cast(null as int)) |
==========================
 NULL |

taos> select sign(cast(NULL as bigint))
 sign(cast(null as bigint)) |
=============================
 NULL |

taos> select sign(cast(NULL as tinyint unsigned))
 sign(cast(null as tinyint unsigned)) |
=======================================
 NULL |

taos> select sign(cast(NULL as smallint unsigned))
 sign(cast(null as smallint unsigned)) |
========================================
 NULL |

taos> select sign(cast(NULL as int unsigned))
 sign(cast(null as int unsigned)) |
===================================
 NULL |

taos> select sign(cast(NULL as bigint unsigned))
 sign(cast(null as bigint unsigned)) |
======================================
 NULL |

taos> select sign(cast(NULL as float))
 sign(cast(null as float)) |
============================
 NULL |

taos> select sign(cast(NULL as double))
 sign(cast(null as double)) |
=============================
 NULL |

taos> select SIGN(abs(10))
 sign(abs(10)) |
========================

@ -213,6 +313,34 @@ taos> select sign(current) from ts_4893.meters order by ts limit 10
 1.0000000 |
 1.0000000 |

taos> select sign(cast(current as float)) from ts_4893.d0 order by ts limit 10
 sign(cast(current as float)) |
===============================
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |

taos> select sign(cast(current as float)) from ts_4893.meters order by ts limit 10
 sign(cast(current as float)) |
===============================
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |
 1.0000000e+00 |

taos> select sign(null)
 sign(null) |
========================
@ -0,0 +1,56 @@

taos> select statecount(1, 'GT', 1)
 statecount(1, 'GT', 1) |
=========================
 -1 |

taos> select statecount(cast(1 as tinyint), 'GT', 1)
 statecount(cast(1 as tinyint), 'GT', 1) |
==========================================
 -1 |

taos> select statecount(cast(100 as smallint), 'GT', 1)
 statecount(cast(100 as smallint), 'GT', 1) |
=============================================
 1 |

taos> select statecount(cast(100000 as int), 'GT', 1)
 statecount(cast(100000 as int), 'GT', 1) |
===========================================
 1 |

taos> select statecount(cast(10000000000 as bigint), 'GT', 1)
 statecount(cast(10000000000 as bigint), 'GT', 1) |
===================================================
 1 |

taos> select statecount(cast(1 as tinyint unsigned), 'GT', 1)
 statecount(cast(1 as tinyint unsigned), 'GT', 1) |
===================================================
 -1 |

taos> select statecount(cast(100 as smallint unsigned), 'GT', 1)
 statecount(cast(100 as smallint unsigned), 'GT', 1) |
======================================================
 1 |

taos> select statecount(cast(100000 as int unsigned), 'GT', 1)
 statecount(cast(100000 as int unsigned), 'GT', 1) |
====================================================
 1 |

taos> select statecount(cast(10000000000 as bigint unsigned), 'GT', 1)
 statecount(cast(10000000000 as bigint unsigned), 'GT', 1) |
============================================================
 1 |

taos> select statecount(cast(1.1 as float), 'GT', 1)
 statecount(cast(1.1 as float), 'GT', 1) |
==========================================
 1 |

taos> select statecount(cast(1.1 as double), 'GT', 1)
 statecount(cast(1.1 as double), 'GT', 1) |
===========================================
 1 |
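The expected values above follow STATECOUNT's documented behavior: a row satisfying the condition extends a running count, a failing row yields -1 (so `statecount(1, 'GT', 1)` is -1 because 1 > 1 is false, while `statecount(100, 'GT', 1)` is 1). A rough sketch of the 'GT' case, as an illustration rather than TDengine's implementation:

```python
def statecount_gt(values, threshold):
    """Running count of consecutive rows with value > threshold; -1 otherwise."""
    out, run = [], 0
    for v in values:
        if v > threshold:
            run += 1
            out.append(run)
        else:
            run = 0
            out.append(-1)
    return out

assert statecount_gt([1], 1) == [-1]    # 1 > 1 is false
assert statecount_gt([100], 1) == [1]   # 100 > 1 starts a run
assert statecount_gt([2, 3, 0, 5], 1) == [1, 2, -1, 1]
```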
@ -0,0 +1,56 @@

taos> select sum(1)
 sum(1) |
========================
 1 |

taos> select sum(cast(1 as tinyint))
 sum(cast(1 as tinyint)) |
==========================
 1 |

taos> select sum(cast(100 as smallint))
 sum(cast(100 as smallint)) |
=============================
 100 |

taos> select sum(cast(100000 as int))
 sum(cast(100000 as int)) |
===========================
 100000 |

taos> select sum(cast(10000000000 as bigint))
 sum(cast(10000000000 as bigint)) |
===================================
 10000000000 |

taos> select sum(cast(1 as tinyint unsigned))
 sum(cast(1 as tinyint unsigned)) |
===================================
 1 |

taos> select sum(cast(100 as smallint unsigned))
 sum(cast(100 as smallint unsigned)) |
======================================
 100 |

taos> select sum(cast(100000 as int unsigned))
 sum(cast(100000 as int unsigned)) |
====================================
 100000 |

taos> select sum(cast(10000000000 as bigint unsigned))
 sum(cast(10000000000 as bigint unsigned)) |
============================================
 10000000000 |

taos> select sum(cast(1.1 as float))
 sum(cast(1.1 as float)) |
============================
 1.100000023841858 |

taos> select sum(cast(1.1 as double))
 sum(cast(1.1 as double)) |
============================
 1.100000000000000 |
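Note the float case above: `sum(cast(1.1 as float))` is expected to print 1.100000023841858, not 1.1, because 1.1 is first rounded to 32-bit float precision and then widened to double for the sum. This is ordinary IEEE-754 behavior, reproducible outside TDengine:

```python
import struct

# Round 1.1 to the nearest 32-bit float, then read it back as a double.
f32 = struct.unpack('f', struct.pack('f', 1.1))[0]
print(f'{f32:.15f}')  # 1.100000023841858 -- matches the expected sum output
```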
@ -179,6 +179,33 @@ taos> select trim(trailing '空格blank' from '空格blank空格中Tes空格blan
===================================================================
 空格blank空格中Tes空格blank空 |

taos> select trim(both from nch1) from ts_4893.meters order by ts limit 5
 trim(both from nch1) |
=================================
 novel |
 一二三四五六七八九十 |
 update |
 prision |
 novel |

taos> select trim(leading from nch1) from ts_4893.meters order by ts limit 5
 trim(leading from nch1) |
=================================
 novel |
 一二三四五六七八九十 |
 update |
 prision |
 novel |

taos> select trim(trailing from nch1) from ts_4893.meters order by ts limit 5
 trim(trailing from nch1) |
=================================
 novel |
 一二三四五六七八九十 |
 update |
 prision |
 novel |

taos> select trim(nch2 from nch1) from ts_4893.meters where position(nch2 in nch1) != 0 order by ts limit 5
 trim(nch2 from nch1) |
=================================
@ -0,0 +1,11 @@
select avg(1)
select avg(cast(1 as tinyint))
select avg(cast(100 as smallint))
select avg(cast(100000 as int))
select avg(cast(10000000000 as bigint))
select avg(cast(1 as tinyint unsigned))
select avg(cast(100 as smallint unsigned))
select avg(cast(100000 as int unsigned))
select avg(cast(10000000000 as bigint unsigned))
select avg(cast(1.1 as float))
select avg(cast(1.1 as double))
@ -0,0 +1,11 @@
select leastsquares(1, 1, 1)
select leastsquares(cast(1.1 as float), 1, 1)
select leastsquares(cast(1.1 as double), 1, 1)
select leastsquares(cast(1 as tinyint), 1, 1)
select leastsquares(cast(100 as smallint), 1, 1)
select leastsquares(cast(100000 as int), 1, 1)
select leastsquares(cast(10000000000 as bigint), 1, 1)
select leastsquares(cast(1 as tinyint unsigned), 1, 1)
select leastsquares(cast(100 as smallint unsigned), 1, 1)
select leastsquares(cast(100000 as int unsigned), 1, 1)
select leastsquares(cast(10000000000 as bigint unsigned), 1, 1)
@ -26,3 +26,14 @@ select log(max(voltage) + 1) from ts_4893.meters
select groupid, max(voltage) from ts_4893.meters group by groupid order by groupid
select location, max(id) from ts_4893.meters group by location order by location
select location, max(current) from ts_4893.meters group by location order by location
select max(1)
select max(cast(1 as tinyint))
select max(cast(100 as smallint))
select max(cast(100000 as int))
select max(cast(10000000000 as bigint))
select max(cast(1 as tinyint unsigned))
select max(cast(100 as smallint unsigned))
select max(cast(100000 as int unsigned))
select max(cast(10000000000 as bigint unsigned))
select max(cast(1.1 as float))
select max(cast(1.1 as double))
@ -26,3 +26,14 @@ select log(min(voltage) + 1) from ts_4893.meters
select groupid, min(voltage) from ts_4893.meters group by groupid order by groupid
select location, min(current) from ts_4893.meters group by location order by location
select location, min(id) from ts_4893.meters group by location order by location
select min(1)
select min(cast(1 as tinyint))
select min(cast(100 as smallint))
select min(cast(100000 as int))
select min(cast(10000000000 as bigint))
select min(cast(1 as tinyint unsigned))
select min(cast(100 as smallint unsigned))
select min(cast(100000 as int unsigned))
select min(cast(10000000000 as bigint unsigned))
select min(cast(1.1 as float))
select min(cast(1.1 as double))
@ -47,3 +47,13 @@ select round(abs(voltage), 2) from ts_4893.meters limit 1
select round(pi() * phase, 3) from ts_4893.meters limit 1
select round(sqrt(voltage), 2) from ts_4893.meters limit 1
select round(log(current), 2) from ts_4893.meters limit 1
select round(cast(1.0e+400 as float), 0);
select round(cast(1.0e+400 as double), 0);
select round(cast(5 as tinyint), 1);
select round(cast(50 as smallint), 1);
select round(cast(500 as int), 1);
select round(cast(50000 as bigint), 1);
select round(cast(5 as TINYINT UNSIGNED), 1);
select round(cast(50 as smallint unsigned), 1);
select round(cast(500 as int unsigned), 1);
select round(cast(50000 as bigint unsigned), 1);
@ -20,6 +20,26 @@ select SIGN(2) * SIGN(1) from ts_4893.meters limit 1
select SIGN(2) / SIGN(1) from ts_4893.meters limit 1
select SIGN(1) + id from ts_4893.meters order by ts limit 5
select SIGN(id) + id from ts_4893.meters order by ts limit 5
select sign(cast(1 as tinyint))
select sign(cast(1 as smallint))
select sign(cast(1 as int))
select sign(cast(1 as bigint))
select sign(cast(1 as tinyint unsigned))
select sign(cast(1 as smallint unsigned))
select sign(cast(1 as int unsigned))
select sign(cast(1 as bigint unsigned))
select sign(cast(1 as float))
select sign(cast(1 as double))
select sign(cast(NULL as tinyint))
select sign(cast(NULL as smallint))
select sign(cast(NULL as int))
select sign(cast(NULL as bigint))
select sign(cast(NULL as tinyint unsigned))
select sign(cast(NULL as smallint unsigned))
select sign(cast(NULL as int unsigned))
select sign(cast(NULL as bigint unsigned))
select sign(cast(NULL as float))
select sign(cast(NULL as double))
select SIGN(abs(10))
select SIGN(abs(-10))
select abs(SIGN(10))

@ -34,6 +54,8 @@ select sign(-1)
select sign(-10)
select sign(current) from ts_4893.d0 order by ts limit 10
select sign(current) from ts_4893.meters order by ts limit 10
select sign(cast(current as float)) from ts_4893.d0 order by ts limit 10
select sign(cast(current as float)) from ts_4893.meters order by ts limit 10
select sign(null)
select sign(25)
select sign(-10)
@ -0,0 +1,11 @@
select statecount(1, 'GT', 1)
select statecount(cast(1 as tinyint), 'GT', 1)
select statecount(cast(100 as smallint), 'GT', 1)
select statecount(cast(100000 as int), 'GT', 1)
select statecount(cast(10000000000 as bigint), 'GT', 1)
select statecount(cast(1 as tinyint unsigned), 'GT', 1)
select statecount(cast(100 as smallint unsigned), 'GT', 1)
select statecount(cast(100000 as int unsigned), 'GT', 1)
select statecount(cast(10000000000 as bigint unsigned), 'GT', 1)
select statecount(cast(1.1 as float), 'GT', 1)
select statecount(cast(1.1 as double), 'GT', 1)
@ -0,0 +1,11 @@
select sum(1)
select sum(cast(1 as tinyint))
select sum(cast(100 as smallint))
select sum(cast(100000 as int))
select sum(cast(10000000000 as bigint))
select sum(cast(1 as tinyint unsigned))
select sum(cast(100 as smallint unsigned))
select sum(cast(100000 as int unsigned))
select sum(cast(10000000000 as bigint unsigned))
select sum(cast(1.1 as float))
select sum(cast(1.1 as double))
@ -34,6 +34,9 @@ select trim('空格blank' from '空格blank空格中Tes空格blank空')
select trim(both '空格blank' from '空格blank空格中Tes空格blank空')
select trim(leading '空格blank' from '空格blank空格中Tes空格blank空')
select trim(trailing '空格blank' from '空格blank空格中Tes空格blank空')
select trim(both from nch1) from ts_4893.meters order by ts limit 5
select trim(leading from nch1) from ts_4893.meters order by ts limit 5
select trim(trailing from nch1) from ts_4893.meters order by ts limit 5
select trim(nch2 from nch1) from ts_4893.meters where position(nch2 in nch1) != 0 order by ts limit 5
select trim(both nch2 from nch1) from ts_4893.meters where position(nch2 in nch1) != 0 order by ts limit 5
select trim(leading nch2 from nch1) from ts_4893.meters where position(nch2 in nch1) != 0 order by ts limit 5
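The three single-argument TRIM forms added in these test queries (`both`, `leading`, `trailing` with no removal string) strip from both ends, the front, or the back respectively. As a rough analogy only (Python's strip family over whitespace, not TDengine's code), they behave like:

```python
s = '  novel  '
assert s.strip()  == 'novel'    # trim(both from ...)
assert s.lstrip() == 'novel  '  # trim(leading from ...)
assert s.rstrip() == '  novel'  # trim(trailing from ...)
```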
@ -294,6 +294,18 @@ class TDTestCase(TBase):

        tdSql.error("select min(nonexistent_column) from ts_4893.meters;")

    def test_sum(self):
        self.test_normal_query_new("sum")

    def test_statecount(self):
        self.test_normal_query_new("statecount")

    def test_avg(self):
        self.test_normal_query_new("avg")

    def test_leastsquares(self):
        self.test_normal_query_new("leastsquares")

    def test_error(self):
        tdSql.error("select * from (select to_iso8601(ts, timezone()), timezone() from ts_4893.meters \
            order by ts desc) limit 1000;", expectErrInfo="Invalid parameter data type : to_iso8601")  # TS-5340

@ -336,6 +348,10 @@ class TDTestCase(TBase):
        # agg function
        self.test_stddev_pop()
        self.test_varpop()
        self.test_avg()
        self.test_sum()
        self.test_leastsquares()
        self.test_statecount()

        # select function
        self.test_max()
@@ -384,7 +384,8 @@ Core dir: {core_dir}
         if text_result == "success":
             send_msg(notification_robot_url, get_msg(text))
         else:
             send_msg(alert_robot_url, get_msg(text))
+            send_msg(notification_robot_url, get_msg(text))
 
         #send_msg(get_msg(text))
     except Exception as e:
@@ -419,6 +419,7 @@ Core dir: {core_dir}
             send_msg(notification_robot_url, get_msg(text))
         else:
             send_msg(alert_robot_url, get_msg(text))
+            send_msg(notification_robot_url, get_msg(text))
 
         #send_msg(get_msg(text))
     except Exception as e:
@@ -406,7 +406,8 @@ Core dir: {core_dir}
         if text_result == "success":
             send_msg(notification_robot_url, get_msg(text))
         else:
             send_msg(alert_robot_url, get_msg(text))
+            send_msg(notification_robot_url, get_msg(text))
 
         #send_msg(get_msg(text))
     except Exception as e:
@@ -7,12 +7,120 @@ GREEN_DARK='\033[0;32m'
 GREEN_UNDERLINE='\033[4;32m'
 NC='\033[0m'
 
-TDENGINE_DIR=/root/TDinternal/community
+function print_color() {
+    local color="$1"
+    local message="$2"
+    echo -e "${color}${message}${NC}"
+}
+
+# Initialize parameters
+TDENGINE_DIR="/root/TDinternal/community"
+BRANCH=""
+SAVE_LOG="notsave"
+
+# Parse command line parameters
+while getopts "hd:b:t:s:" arg; do
+    case $arg in
+        d)
+            TDENGINE_DIR=$OPTARG
+            ;;
+        b)
+            BRANCH=$OPTARG
+            ;;
+        s)
+            SAVE_LOG=$OPTARG
+            ;;
+        h)
+            echo "Usage: $(basename $0) -d [TDengine_dir] -b [branch] -s [save ci case log]"
+            echo " -d [TDengine_dir] [default /root/TDinternal/community] "
+            echo " -b [branch] [default local branch] "
+            echo " -s [save/notsave] [default save ci case log in TDengine_dir/tests/ci_bak] "
+            exit 0
+            ;;
+        ?)
+            echo "Usage: ./$(basename $0) -h"
+            exit 1
+            ;;
+    esac
+done
+
+# Check if the command name is provided
+if [ -z "$TDENGINE_DIR" ]; then
+    echo "Error: TDengine dir is required."
+    echo "Usage: $(basename $0) -d [TDengine_dir] -b [branch] -s [save ci case log] "
+    echo " -d [TDengine_dir] [default /root/TDinternal/community] "
+    echo " -b [branch] [default local branch] "
+    echo " -s [save/notsave] [default save ci case log in TDengine_dir/tests/ci_bak] "
+    exit 1
+fi
 
-#echo "TDENGINE_DIR = $TDENGINE_DIR"
+echo "TDENGINE_DIR = $TDENGINE_DIR"
 today=`date +"%Y%m%d"`
-TDENGINE_ALLCI_REPORT=$TDENGINE_DIR/tests/all-ci-report-$today.log
+TDENGINE_ALLCI_REPORT="$TDENGINE_DIR/tests/all-ci-report-$today.log"
+BACKUP_DIR="$TDENGINE_DIR/tests/ci_bak"
+mkdir -p "$BACKUP_DIR"
+#cd $BACKUP_DIR && rm -rf *
+
+
+function buildTDengine() {
+    print_color "$GREEN" "TDengine build start"
+
+    # pull parent code
+    cd "$TDENGINE_DIR/../"
+    print_color "$GREEN" "git pull parent code..."
+    git remote prune origin > /dev/null
+    git remote update > /dev/null
+
+    # pull tdengine code
+    cd $TDENGINE_DIR
+    print_color "$GREEN" "git pull tdengine code..."
+    git remote prune origin > /dev/null
+    git remote update > /dev/null
+    REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
+    LOCAL_COMMIT=`git rev-parse --short @`
+    print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
+    print_color "$GREEN" "REMOTE: $REMOTE_COMMIT"
+
+    if [ "$LOCAL_COMMIT" == "$REMOTE_COMMIT" ]; then
+        print_color "$GREEN" "repo up-to-date"
+    else
+        print_color "$GREEN" "repo need to pull"
+    fi
+
+    git reset --hard
+    git checkout -- .
+    git checkout $branch
+    git checkout -- .
+    git clean -f
+    git pull
+
+    [ -d $TDENGINE_DIR/debug ] || mkdir $TDENGINE_DIR/debug
+    cd $TDENGINE_DIR/debug
+
+    print_color "$GREEN" "rebuild.."
+    LOCAL_COMMIT=`git rev-parse --short @`
+
+    rm -rf *
+    makecmd="cmake -DBUILD_TEST=false -DBUILD_HTTP=false -DBUILD_DEPENDENCY_TESTS=0 -DBUILD_TOOLS=true -DBUILD_GEOS=true -DBUILD_TEST=true -DBUILD_CONTRIB=false ../../"
+    print_color "$GREEN" "$makecmd"
+    $makecmd
+
+    make -j 8 install
+
+    print_color "$GREEN" "TDengine build end"
+}
+
+
+# Check and get the branch name
+if [ -n "$BRANCH" ]; then
+    branch="$BRANCH"
+    print_color "$GREEN" "Testing branch: $branch "
+    print_color "$GREEN" "Build is required for this test!"
+    buildTDengine
+else
+    print_color "$GREEN" "Build is not required for this test!"
+fi
 
 function runCasesOneByOne () {
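The option parsing added in this hunk is the standard bash `getopts` loop. A minimal standalone sketch of the same idiom, with the script's own defaults; the `parse_args` wrapper name and the sample arguments are ours, not part of the commit:

```shell
#!/usr/bin/env bash
# Sketch of the getopts pattern added above; defaults mirror the script's.
TDENGINE_DIR="/root/TDinternal/community"
BRANCH=""
SAVE_LOG="notsave"

parse_args() {
    local OPTIND=1 arg   # reset OPTIND so the function can be called repeatedly
    while getopts "hd:b:s:" arg "$@"; do
        case $arg in
            d) TDENGINE_DIR=$OPTARG ;;
            b) BRANCH=$OPTARG ;;
            s) SAVE_LOG=$OPTARG ;;
            h) echo "Usage: $(basename "$0") -d [dir] -b [branch] -s [save|notsave]"; return 0 ;;
            ?) echo "Usage: ./$(basename "$0") -h"; return 1 ;;
        esac
    done
}

parse_args -d /tmp/tdengine -b cover/3.0 -s save
echo "dir=$TDENGINE_DIR branch=$BRANCH save=$SAVE_LOG"
```

Wrapping the loop in a function keeps `OPTIND` handling explicit, which the top-level loop in the script does not need.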
@@ -20,23 +128,50 @@ function runCasesOneByOne () {
         if [[ "$line" != "#"* ]]; then
             cmd=`echo $line | cut -d',' -f 5`
             if [[ "$2" == "sim" ]] && [[ $line == *"script"* ]]; then
+                echo $cmd
                 case=`echo $cmd | cut -d' ' -f 3`
+                case_file=`echo $case | tr -d ' /' `
                 start_time=`date +%s`
-                date +%F\ %T | tee -a $TDENGINE_ALLCI_REPORT && timeout 20m $cmd > /dev/null 2>&1 && \
-                echo -e "${GREEN}$case success${NC}" | tee -a $TDENGINE_ALLCI_REPORT \
-                || echo -e "${RED}$case failed${NC}" | tee -a $TDENGINE_ALLCI_REPORT
+                date +%F\ %T | tee -a $TDENGINE_ALLCI_REPORT && timeout 20m $cmd > $TDENGINE_DIR/tests/$case_file.log 2>&1 && \
+                echo -e "${GREEN}$case success${NC}" | tee -a $TDENGINE_ALLCI_REPORT || \
+                echo -e "${RED}$case failed${NC}" | tee -a $TDENGINE_ALLCI_REPORT
+
+                # # Log and back up
+                # mkdir -p "$BACKUP_DIR/$case_file"
+                # tar --exclude='*.sock*' -czf "$BACKUP_DIR/$case_file/sim.tar.gz" -C "$TDENGINE_DIR/.." sim
+                # mv "$TDENGINE_DIR/tests/$case_file.log" "$BACKUP_DIR/$case_file"
+
+                if [ "$SAVE_LOG" == "save" ]; then
+                    mkdir -p "$BACKUP_DIR/$case_file"
+                    tar --exclude='*.sock*' -czf "$BACKUP_DIR/$case_file/sim.tar.gz" -C "$TDENGINE_DIR/.." sim
+                    mv "$TDENGINE_DIR/tests/$case_file.log" "$BACKUP_DIR/$case_file"
+                else
+                    echo "This case not save log!"
+                fi
+
                 end_time=`date +%s`
                 echo execution time of $case was `expr $end_time - $start_time`s. | tee -a $TDENGINE_ALLCI_REPORT
 
             elif [[ "$line" == *"$2"* ]]; then
+                echo $cmd
                 if [[ "$cmd" == *"pytest.sh"* ]]; then
                     cmd=`echo $cmd | cut -d' ' -f 2-20`
                 fi
                 case=`echo $cmd | cut -d' ' -f 4-20`
+                case_file=`echo $case | tr -d ' /' `
                 start_time=`date +%s`
-                date +%F\ %T | tee -a $TDENGINE_ALLCI_REPORT && timeout 20m $cmd > /dev/null 2>&1 && \
+                date +%F\ %T | tee -a $TDENGINE_ALLCI_REPORT && timeout 20m $cmd > $TDENGINE_DIR/tests/$case_file.log 2>&1 && \
                 echo -e "${GREEN}$case success${NC}" | tee -a $TDENGINE_ALLCI_REPORT || \
                 echo -e "${RED}$case failed${NC}" | tee -a $TDENGINE_ALLCI_REPORT
+
+                if [ "$SAVE_LOG" == "save" ]; then
+                    mkdir -p "$BACKUP_DIR/$case_file"
+                    tar --exclude='*.sock*' -czf "$BACKUP_DIR/$case_file/sim.tar.gz" -C "$TDENGINE_DIR/.." sim
+                    mv "$TDENGINE_DIR/tests/$case_file.log" "$BACKUP_DIR/$case_file"
+                else
+                    echo "This case not save log!"
+                fi
+
                 end_time=`date +%s`
                 echo execution time of $case was `expr $end_time - $start_time`s. | tee -a $TDENGINE_ALLCI_REPORT
             fi
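The key change in this hunk redirects each case's output from `/dev/null` to a per-case log whose name is the case path with spaces and slashes stripped via `tr -d ' /'`. A self-contained sketch of that pattern; the temp paths, the 20-second timeout, and the `run_case` name are illustrative, not from the commit:

```shell
#!/usr/bin/env bash
REPORT=$(mktemp)
LOG_DIR=$(mktemp -d)

run_case() {
    local case="$1"
    # flatten the case path into a single log-file name, as the script does
    local case_file=$(echo "$case" | tr -d ' /')
    timeout 20 bash -c "$case" > "$LOG_DIR/$case_file.log" 2>&1 \
        && echo "$case success" | tee -a "$REPORT" \
        || echo "$case failed" | tee -a "$REPORT"
}

run_case "echo tsim/stream/ok"
run_case "exit 1"
```

Keeping the log out of the `&& ... ||` chain means the pass/fail marker reflects only the case's exit status (or a timeout), while the full output survives for later backup.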
@@ -45,62 +180,62 @@ function runCasesOneByOne () {
 }
 
 function runUnitTest() {
-    echo "=== Run unit test case ==="
-    echo " $TDENGINE_DIR/debug"
-    cd $TDENGINE_DIR/debug
+    print_color "$GREEN" "=== Run unit test case ==="
+    print_color "$GREEN" " $TDENGINE_DIR/../debug"
+    cd $TDENGINE_DIR/../debug
     ctest -j12
-    echo "3.0 unit test done"
+    print_color "$GREEN" "3.0 unit test done"
 }
 
 function runSimCases() {
-    echo "=== Run sim cases ==="
+    print_color "$GREEN" "=== Run sim cases ==="
 
     cd $TDENGINE_DIR/tests/script
-    runCasesOneByOne $TDENGINE_DIR/tests/parallel_test/cases-test.task sim
+    runCasesOneByOne $TDENGINE_DIR/tests/parallel_test/cases.task sim
 
     totalSuccess=`grep 'sim success' $TDENGINE_ALLCI_REPORT | wc -l`
     if [ "$totalSuccess" -gt "0" ]; then
-        echo "### Total $totalSuccess SIM test case(s) succeed! ###" | tee -a $TDENGINE_ALLCI_REPORT
+        print_color "$GREEN" "### Total $totalSuccess SIM test case(s) succeed! ###" | tee -a $TDENGINE_ALLCI_REPORT
     fi
 
     totalFailed=`grep 'sim failed\|fault' $TDENGINE_ALLCI_REPORT | wc -l`
     if [ "$totalFailed" -ne "0" ]; then
-        echo "### Total $totalFailed SIM test case(s) failed! ###" | tee -a $TDENGINE_ALLCI_REPORT
+        print_color "$RED" "### Total $totalFailed SIM test case(s) failed! ###" | tee -a $TDENGINE_ALLCI_REPORT
     fi
 }
 
 function runPythonCases() {
-    echo "=== Run python cases ==="
+    print_color "$GREEN" "=== Run python cases ==="
 
     cd $TDENGINE_DIR/tests/parallel_test
-    sed -i '/compatibility.py/d' cases-test.task
+    sed -i '/compatibility.py/d' cases.task
 
     # army
     cd $TDENGINE_DIR/tests/army
-    runCasesOneByOne ../parallel_test/cases-test.task army
+    runCasesOneByOne ../parallel_test/cases.task army
 
     # system-test
     cd $TDENGINE_DIR/tests/system-test
-    runCasesOneByOne ../parallel_test/cases-test.task system-test
+    runCasesOneByOne ../parallel_test/cases.task system-test
 
     # develop-test
     cd $TDENGINE_DIR/tests/develop-test
-    runCasesOneByOne ../parallel_test/cases-test.task develop-test
+    runCasesOneByOne ../parallel_test/cases.task develop-test
 
     totalSuccess=`grep 'py success' $TDENGINE_ALLCI_REPORT | wc -l`
     if [ "$totalSuccess" -gt "0" ]; then
-        echo "### Total $totalSuccess python test case(s) succeed! ###" | tee -a $TDENGINE_ALLCI_REPORT
+        print_color "$GREEN" "### Total $totalSuccess python test case(s) succeed! ###" | tee -a $TDENGINE_ALLCI_REPORT
     fi
 
     totalFailed=`grep 'py failed\|fault' $TDENGINE_ALLCI_REPORT | wc -l`
     if [ "$totalFailed" -ne "0" ]; then
-        echo "### Total $totalFailed python test case(s) failed! ###" | tee -a $TDENGINE_ALLCI_REPORT
+        print_color "$RED" "### Total $totalFailed python test case(s) failed! ###" | tee -a $TDENGINE_ALLCI_REPORT
    fi
 }
 
 
 function runTest() {
-    echo "run Test"
+    print_color "$GREEN" "run Test"
 
     cd $TDENGINE_DIR
     [ -d sim ] && rm -rf sim
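The totals printed by `runSimCases` and `runPythonCases` come from grepping the report for marker strings and counting the matching lines. A minimal sketch of that counting idiom against a synthetic report (the report contents here are made up for illustration):

```shell
#!/usr/bin/env bash
REPORT=$(mktemp)
printf '%s\n' \
    "case_a sim success" \
    "case_b sim failed" \
    "case_c sim success" > "$REPORT"

# same counting idiom as the script: grep the marker, count matching lines
totalSuccess=`grep 'sim success' "$REPORT" | wc -l`
totalFailed=`grep 'sim failed\|fault' "$REPORT" | wc -l`
echo "success=$totalSuccess failed=$totalFailed"
```

The `\|` alternation in the failure pattern is GNU basic-regex syntax, so both "failed" cases and "fault" crash markers land in the same count.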
@@ -119,20 +254,20 @@ function runTest() {
 }
 
 function stopTaosd {
-    echo "Stop taosd start"
+    print_color "$GREEN" "Stop taosd start"
     systemctl stop taosd
     PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
     while [ -n "$PID" ]
     do
         pkill -TERM -x taosd
         sleep 1
         PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
     done
-    echo "Stop tasod end"
+    print_color "$GREEN" "Stop tasod end"
 }
 
 function stopTaosadapter {
-    echo "Stop taosadapter"
+    print_color "$GREEN" "Stop taosadapter"
     systemctl stop taosadapter.service
     PID=`ps -ef|grep -w taosadapter | grep -v grep | awk '{print $2}'`
     while [ -n "$PID" ]
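The `stopTaosd` loop above keeps signalling until the process disappears from the process table. The same shape in a runnable sketch, with a dummy `sleep` child standing in for `taosd` and a direct PID check replacing the `ps | grep | awk` pipeline (both substitutions are ours):

```shell
#!/usr/bin/env bash
sleep 300 &
TARGET_PID=$!

stop_process() {
    local pid="$1"
    # keep sending SIGTERM until the process is gone, as stopTaosd does
    while kill -0 "$pid" 2>/dev/null; do
        kill -TERM "$pid" 2>/dev/null
        sleep 1
    done
}

stop_process "$TARGET_PID"
echo "process $TARGET_PID stopped"
```

Re-checking after each signal (rather than signalling once) is what makes the stop robust against processes that take a moment to exit.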
@@ -141,18 +276,18 @@ function stopTaosadapter {
         sleep 1
         PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
     done
-    echo "Stop tasoadapter end"
+    print_color "$GREEN" "Stop tasoadapter end"
 
 }
 
 WORK_DIR=/root/
 
 date >> $WORK_DIR/date.log
-echo "Run ALL CI Test Cases" | tee -a $WORK_DIR/date.log
+print_color "$GREEN" "Run all ci test cases" | tee -a $WORK_DIR/date.log
 
 stopTaosd
 
 runTest
 
 date >> $WORK_DIR/date.log
-echo "End of CI Test Cases" | tee -a $WORK_DIR/date.log
+print_color "$GREEN" "End of ci test cases" | tee -a $WORK_DIR/date.log
@@ -17,9 +17,10 @@ function print_color() {
 TDENGINE_DIR="/root/TDinternal/community"
 BRANCH=""
 TDENGINE_GCDA_DIR="/root/TDinternal/community/debug/"
+LCOV_DIR="/usr/local/bin"
 
 # Parse command line parameters
-while getopts "hd:b:f:c:u:i:" arg; do
+while getopts "hd:b:f:c:u:i:l:" arg; do
     case $arg in
         d)
             TDENGINE_DIR=$OPTARG
@@ -39,14 +40,18 @@ while getopts "hd:b:f:c:u:i:" arg; do
         i)
             BRANCH_BUILD=$OPTARG
             ;;
+        l)
+            LCOV_DIR=$OPTARG
+            ;;
         h)
-            echo "Usage: $(basename $0) -d [TDengine dir] -b [Test branch] -i [Build test branch] -f [TDengine gcda dir] -c [Test single case/all cases] -u [Unit test case]"
+            echo "Usage: $(basename $0) -d [TDengine dir] -b [Test branch] -i [Build test branch] -f [TDengine gcda dir] -c [Test single case/all cases] -u [Unit test case] -l [Lcov dir]"
             echo " -d [TDengine dir] [default /root/TDinternal/community; eg: /home/TDinternal/community] "
             echo " -b [Test branch] [default local branch; eg:cover/3.0] "
             echo " -i [Build test branch] [default no:not build, but still install ;yes:will build and install ] "
             echo " -f [TDengine gcda dir] [default /root/TDinternal/community/debug; eg:/root/TDinternal/community/debug/community/source/dnode/vnode/CMakeFiles/vnode.dir/src/tq/] "
             echo " -c [Test single case/all cases] [default null; -c all : include parallel_test/longtimeruning_cases.task and all unit cases; -c task : include parallel_test/longtimeruning_cases.task; single case: eg: -c './test.sh -f tsim/stream/streamFwcIntervalFill.sim' ] "
             echo " -u [Unit test case] [default null; eg: './schedulerTest' ] "
+            echo " -l [Lcov bin dir] [default /usr/local/bin; eg: '/root/TDinternal/community/tests/lcov-1.14/bin' ] "
             exit 0
             ;;
         ?)
@@ -59,13 +64,14 @@ done
 # Check if the command name is provided
 if [ -z "$TDENGINE_DIR" ]; then
     echo "Error: TDengine dir is required."
-    echo "Usage: $(basename $0) -d [TDengine dir] -b [Test branch] -i [Build test branch] -f [TDengine gcda dir] -c [Test single case/all cases] -u [Unit test case] "
+    echo "Usage: $(basename $0) -d [TDengine dir] -b [Test branch] -i [Build test branch] -f [TDengine gcda dir] -c [Test single case/all cases] -u [Unit test case] -l [Lcov dir] "
     echo " -d [TDengine dir] [default /root/TDinternal/community; eg: /home/TDinternal/community] "
     echo " -b [Test branch] [default local branch; eg:cover/3.0] "
     echo " -i [Build test branch] [default no:not build, but still install ;yes:will build and install ] "
     echo " -f [TDengine gcda dir] [default /root/TDinternal/community/debug; eg:/root/TDinternal/community/debug/community/source/dnode/vnode/CMakeFiles/vnode.dir/src/tq/] "
     echo " -c [Test single case/all cases] [default null; -c all : include parallel_test/longtimeruning_cases.task and all unit cases; -c task : include parallel_test/longtimeruning_cases.task; single case: eg: -c './test.sh -f tsim/stream/streamFwcIntervalFill.sim' ] "
     echo " -u [Unit test case] [default null; eg: './schedulerTest' ] "
+    echo " -l [Lcov bin dir] [default /usr/local/bin; eg: '/root/TDinternal/community/tests/lcov-1.14/bin' ] "
     exit 1
 fi
@@ -299,11 +305,18 @@ function lcovFunc {
         print_color "$GREEN" "Test gcda file dir is default: /root/TDinternal/community/debug"
     fi
 
+    if [ -n "$LCOV_DIR" ]; then
+        LCOV_DIR="$LCOV_DIR"
+        print_color "$GREEN" "Lcov bin dir: $LCOV_DIR "
+    else
+        print_color "$GREEN" "Lcov bin dir is default"
+    fi
+
     # collect data
-    lcov -d "$TDENGINE_GCDA_DIR" -capture --rc lcov_branch_coverage=1 --rc genhtml_branch_coverage=1 --no-external -b $TDENGINE_DIR -o coverage.info
+    $LCOV_DIR/lcov -d "$TDENGINE_GCDA_DIR" -capture --rc lcov_branch_coverage=1 --rc genhtml_branch_coverage=1 --no-external -b $TDENGINE_DIR -o coverage.info
 
     # remove exclude paths
-    lcov --remove coverage.info \
+    $LCOV_DIR/lcov --remove coverage.info \
         '*/contrib/*' '*/test/*' '*/packaging/*' '*/taos-tools/*' '*/taosadapter/*' '*/TSZ/*' \
         '*/AccessBridgeCalls.c' '*/ttszip.c' '*/dataInserter.c' '*/tlinearhash.c' '*/tsimplehash.c' '*/tsdbDiskData.c' '/*/enterprise/*' '*/docs/*' '*/sim/*'\
         '*/texpr.c' '*/runUdf.c' '*/schDbg.c' '*/syncIO.c' '*/tdbOs.c' '*/pushServer.c' '*/osLz4.c'\
@@ -316,7 +329,7 @@ function lcovFunc {
 
     # generate result
     echo "generate result"
-    lcov -l --rc lcov_branch_coverage=1 coverage.info | tee -a $TDENGINE_COVERAGE_REPORT
+    $LCOV_DIR/lcov -l --rc lcov_branch_coverage=1 coverage.info | tee -a $TDENGINE_COVERAGE_REPORT
 
 }
@@ -373,8 +386,14 @@ if [ ! -f "$COVERAGE_INFO" ]; then
     exit 1
 fi
 
+if [ -n "$LCOV_DIR" ]; then
+    LCOV_DIR="$LCOV_DIR"
+    print_color "$GREEN" "Lcov bin dir: $LCOV_DIR "
+else
+    print_color "$GREEN" "Lcov bin dir is default"
+fi
+
 # Generate local HTML reports
-genhtml "$COVERAGE_INFO" --branch-coverage --function-coverage --output-directory "$OUTPUT_DIR"
+$LCOV_DIR/genhtml "$COVERAGE_INFO" --branch-coverage --function-coverage --output-directory "$OUTPUT_DIR"
 
 # Check whether the report was generated successfully
 if [ $? -eq 0 ]; then
@@ -39,6 +39,7 @@ sql explain analyze verbose true select a.ts from sta a join sta b on a.col1 = b
 sql explain analyze verbose true select a.ts from sta a join sta b where a.ts=b.ts;
 sql_error explain analyze verbose true select a.ts from sta a ,sta b on a.ts=b.ts;
 sql explain analyze verbose true select a.ts from sta a ,sta b where a.ts=b.ts;
+sql explain analyze verbose true select a.ts from sta a ,sta b where a.t1 = b.t1 and a.ts=b.ts;
 sql explain analyze verbose true select a.ts from sta a ,sta b where a.ts=b.ts and a.col1 + 1 = b.col1;
 sql explain analyze verbose true select b.col1 from sta a ,sta b where a.ts=b.ts and a.col1 + 1 = b.col1 order by a.ts;
 sql explain analyze verbose true select b.col1 from sta a join sta b join sta c where a.ts=b.ts and b.ts = c.ts order by a.ts;
@@ -0,0 +1,57 @@
+#!/usr/bin/env bash
+
+function usage() {
+    echo "Usage: $0 -v <version>"
+    echo "Example: $0 -v 1.14"
+}
+
+function download_lcov() {
+    local version=$1
+    local url="https://github.com/linux-test-project/lcov/releases/download/v${version}/lcov-${version}.tar.gz"
+    echo "Downloading lcov version ${version} from ${url}..."
+    curl -LO ${url}
+    tar -xzf lcov-${version}.tar.gz
+    echo "lcov version ${version} downloaded and extracted."
+}
+
+function install_lcov() {
+    echo -e "\nInstalling..."
+    local version=$1
+    cd lcov-${version}
+    sudo make uninstall && sudo make install
+    cd ..
+    echo "lcov version ${version} installed."
+}
+
+function verify_lcov() {
+    echo -e "\nVerify installation..."
+    lcov --version
+}
+
+function main() {
+    if [[ "$#" -ne 2 ]]; then
+        usage
+        exit 1
+    fi
+
+    while getopts "v:h" opt; do
+        case ${opt} in
+            v)
+                version=${OPTARG}
+                download_lcov ${version}
+                install_lcov ${version}
+                verify_lcov
+                ;;
+            h)
+                usage
+                exit 0
+                ;;
+            *)
+                usage
+                exit 1
+                ;;
+        esac
+    done
+}
+
+main "$@"
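The new install script splices the requested version into both the release tag and the tarball name. That URL construction, factored into a small function so it can be checked without downloading anything (the `lcov_url` name is ours, not the script's):

```shell
#!/usr/bin/env bash
# Build the lcov release URL the same way download_lcov does
lcov_url() {
    local version="$1"
    echo "https://github.com/linux-test-project/lcov/releases/download/v${version}/lcov-${version}.tar.gz"
}

lcov_url 1.14
```

With `-v 1.14` this yields the v1.14 tag and `lcov-1.14.tar.gz` tarball, matching the path the `-l` option of the coverage script later points at.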