Merge branch '3.0' of https://github.com/taosdata/TDengine into refact/tsdb_optimize

commit 1a265d8eca
@@ -2,7 +2,7 @@

 # libuv
 ExternalProject_Add(libuv
     GIT_REPOSITORY https://github.com/libuv/libuv.git
-    GIT_TAG v1.42.0
+    GIT_TAG v1.44.2
     SOURCE_DIR "${TD_CONTRIB_DIR}/libuv"
     BINARY_DIR "${TD_CONTRIB_DIR}/libuv"
     CONFIGURE_COMMAND ""

@@ -13,7 +13,7 @@ TDengine greatly improves the efficiency of data ingestion, querying and storage

 If you are a developer, please read the [“Developer Guide”](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.

-We live in the era of big data, and scale-up is unable to meet the growing needs of business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster please refer to ["cluster"](./cluster).
+We live in the era of big data, and scale-up is unable to meet the growing needs of business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. The TDengine team not only developed the cluster feature but also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster, please refer to ["cluster deployment"](../deployment).

 TDengine uses ubiquitous SQL as its query language, which greatly reduces learning and migration costs. In addition to standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll up, interpolation and time weighted average, among many others. The ["SQL Reference"](./taos-sql) chapter describes the SQL syntax in detail, and lists the various supported commands and functions.
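As a small taste of these extensions, a sketch of the time-weighted average function follows; the `test.d10` table and its timestamps come from the taosBenchmark demo data used later in these docs, so treat the names as assumptions:

```sql
-- TWA() computes the time-weighted average of a column over a time range;
-- test.d10 and the timestamps are from the taosBenchmark demo data set.
SELECT TWA(current) FROM test.d10
WHERE ts BETWEEN '2017-07-14 10:40:00.000' AND '2017-07-14 10:40:10.000';
```
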
@@ -3,7 +3,7 @@ title: Introduction

 toc_max_heading_level: 2
 ---

-TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the system complexity and cost of development and operation.
+TDengine is an open source, high-performance, cloud native time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/stream), [data subscription](../develop/tmq) and other functionalities to reduce system complexity and the cost of development and operation.

 This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high-level overview of TDengine.

@@ -16,9 +16,9 @@ The major features are listed below:

 3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
 4. Support for [user defined functions](/develop/udf).
 5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios (see the sketch after this list).
-6. Support for [continuous query](/develop/continuous-query).
-7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
-8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
+6. Support for [continuous query](../develop/stream).
+7. Support for [data subscription](../develop/tmq) with the capability to specify filter conditions.
+8. Support for [cluster](../deployment/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
 9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
 10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
 11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
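To illustrate the cache feature from item 5, a minimal sketch follows; it assumes the `test.meters` supertable created by the taosBenchmark demo used later in these docs:

```sql
-- LAST_ROW returns the most recent row. TDengine keeps the last data point
-- of each table in cache, so this query is served without a full scan --
-- the scenario where Redis would otherwise be used.
SELECT LAST_ROW(*) FROM test.meters;
```
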
@@ -43,7 +43,7 @@ By making full use of [characteristics of time series data](https://tdengine.com

 - **Easy Data Analytics**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.

-- **Open Source**: TDengine’s core modules, including cluster feature, are all available under open source licenses. It has gathered 18.8k stars on GitHub. There is an active developer community, and over 139k running instances worldwide.
+- **Open Source**: TDengine’s core modules, including the cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub. There is an active developer community, and over 140k running instances worldwide.

 With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced: 1. With its superior performance, computing and storage resources are reduced significantly. 2. With SQL support, it can be seamlessly integrated with many third-party tools, and learning and migration costs are reduced significantly. 3. With its simplified solution and nearly zero management, operation and maintenance costs are reduced significantly.

@@ -31,17 +31,6 @@ You can now access TDengine or run other Linux commands.

 Note: For information about installing Docker, see the [official documentation](https://docs.docker.com/get-docker/).

-## Open the TDengine CLI
-
-On the container, run the following command to open the TDengine CLI:
-
-```
-$ taos
-
-taos>
-```
-
 ## Insert Data into TDengine

 You can use the `taosBenchmark` tool included with TDengine to write test data into your deployment.
@@ -59,39 +48,51 @@ To do so, run the following command:

 You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](/reference/taosbenchmark).

+## Open the TDengine CLI
+
+On the container, run the following command to open the TDengine CLI:
+
+```
+$ taos
+
+taos>
+```
+
 ## Query Data in TDengine

 After using taosBenchmark to create your test deployment, you can run queries in the TDengine CLI to test its performance. For example:

-Query the number of rows in the `meters` supertable:
+From the TDengine CLI, query the number of rows in the `meters` supertable:

 ```sql
-taos> select count(*) from test.meters;
+select count(*) from test.meters;
 ```

 Query the average, maximum, and minimum values of all 100 million rows of data:

 ```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters;
+select avg(current), max(voltage), min(phase) from test.meters;
 ```

-Query the number of rows whose `location` tag is `California.SanFrancisco`:
+Query the number of rows whose `location` tag is `San Francisco`:

 ```sql
-taos> select count(*) from test.meters where location="San Francisco";
+select count(*) from test.meters where location="San Francisco";
 ```

 Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:

 ```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
+select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
 ```

-Query the average, maximum, and minimum values for table `d10` in 10 second intervals:
+Query the average, maximum, and minimum values for table `d10` in 1 second intervals:

 ```sql
-taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
+select first(ts), avg(current), max(voltage), min(phase) from test.d10 interval(1s);
 ```

+In the query above, you are selecting the first timestamp (ts) in the interval; another way of selecting it would be _wstart, which gives the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
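As a quick sketch of that alternative, here is the same query written with the `_wstart` pseudocolumn; `test.d10` is the table created by taosBenchmark above:

```sql
-- _wstart returns the start of each 1-second time window, in place of first(ts).
select _wstart, avg(current), max(voltage), min(phase) from test.d10 interval(1s);
```
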
 ## Additional Information

@@ -72,6 +72,9 @@ Users will be prompted to enter some configuration information when install.sh is executing

 1. Download the Windows installation package.
 <PkgListV3 type={3}/>
 2. Run the downloaded package to install TDengine.
+:::info
+TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the Windows platform.
+:::

 </TabItem>
 <TabItem value="apt-get" label="apt-get">

@@ -172,6 +175,20 @@ After the installation is complete, run `C:\TDengine\taosd.exe` to start TDengine

 </TabItem>
 </Tabs>

+## Test data insert performance
+
+After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:
+
+```bash
+taosBenchmark
+```
+
+This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00.000 to 2017-07-14 10:40:09.999. Each table is randomly assigned a `groupId` tag from 1 to 10 and a `location` tag of either `Campbell`, `Cupertino`, `Los Angeles`, `Mountain View`, `Palo Alto`, `San Diego`, `San Francisco`, `San Jose`, `Santa Clara` or `Sunnyvale`.
+
+The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in less than a minute.
+
+You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).

 ## Command Line Interface

 You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, run the following command:
@@ -203,51 +220,38 @@ Query OK, 2 row(s) in set (0.003128s)

 ```

 You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either Linux or Windows machines. For more information, see [TDengine CLI](../../reference/taos-shell/).

-## Test data insert performance
-
-After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:
-
-```bash
-taosBenchmark
-```
-
-This command creates the `meters` supertable in the `test` database. In the `meters` supertable, it then creates 10,000 subtables named `d0` to `d9999`. Each table has 10,000 rows and each row has four columns: `ts`, `current`, `voltage`, and `phase`. The timestamps of the data in these columns range from 2017-07-14 10:40:00 000 to 2017-07-14 10:40:09 999. Each table is randomly assigned a `groupId` tag from 1 to ten and a `location` tag of either `California.SanFrancisco` or `California.LosAngeles`.
-
-The `taosBenchmark` command creates a deployment with 100 million data points that you can use for testing purposes. The time required to create the deployment depends on your hardware. On most modern servers, the deployment is created in less than a minute.
-
-You can customize the test deployment that taosBenchmark creates by specifying command-line parameters. For information about command-line parameters, run the `taosBenchmark --help` command. For more information about taosBenchmark, see [taosBenchmark](../../reference/taosbenchmark).
-
 ## Test data query performance

 After using taosBenchmark to create your test deployment, you can run queries in the TDengine CLI to test its performance:

-Query the number of rows in the `meters` supertable:
+From the TDengine CLI, query the number of rows in the `meters` supertable:

 ```sql
-taos> select count(*) from test.meters;
+select count(*) from test.meters;
 ```

 Query the average, maximum, and minimum values of all 100 million rows of data:

 ```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters;
+select avg(current), max(voltage), min(phase) from test.meters;
 ```

-Query the number of rows whose `location` tag is `California.SanFrancisco`:
+Query the number of rows whose `location` tag is `San Francisco`:

 ```sql
-taos> select count(*) from test.meters where location="California.SanFrancisco";
+select count(*) from test.meters where location="San Francisco";
 ```

 Query the average, maximum, and minimum values of all rows whose `groupId` tag is `10`:

 ```sql
-taos> select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
+select avg(current), max(voltage), min(phase) from test.meters where groupId=10;
 ```

-Query the average, maximum, and minimum values for table `d10` in 10 second intervals:
+Query the average, maximum, and minimum values for table `d10` in 1 second intervals:

 ```sql
-taos> select avg(current), max(voltage), min(phase) from test.d10 interval(10s);
+select first(ts), avg(current), max(voltage), min(phase) from test.d10 interval(1s);
 ```

+In the query above, you are selecting the first timestamp (ts) in the interval; another way of selecting it would be _wstart, which gives the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).

@@ -1,83 +0,0 @@

---
sidebar_label: Continuous Query
description: "Continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing."
title: "Continuous Query"
---

A continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time-driven stream computing. A continuous query can be performed on a table or STable in TDengine. The results of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of the time window and the forward sliding time need to be specified with the parameters `INTERVAL` and `SLIDING` respectively.

A continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With a continuous query, the result can be generated based on a time window to achieve downsampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.

There are some differences between a continuous query in TDengine and time window computation in stream computing:

- In stream computing, the computation is performed and the result is returned in real time; in a continuous query, the computation is started only when a time window closes. For example, if the time window is 1 day, then the result is generated only at 23:59:59.
- If a historical data row is written into a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
- In a continuous query, if the result is pushed to a client, the client status is not cached on the server side and exactly-once delivery is not guaranteed by the server. If the client program crashes, a new time window will be generated from the time when the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.

## Syntax

```sql
[CREATE TABLE AS] SELECT select_expr [, select_expr ...]
    FROM {tb_name_list}
    [WHERE where_condition]
    [INTERVAL(interval_val [, interval_offset]) [SLIDING sliding_val]]
```

INTERVAL: The time window over which the continuous query is performed

SLIDING: The time step by which the time window moves forward each time

## How to Use

In this section, the use case of meters is used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.

```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
create table D1001 using meters tags ("California.SanFrancisco", 2);
create table D1002 using meters tags ("California.LosAngeles", 2);
```

The SQL statement below retrieves the average voltage for a one-minute time window, with each time window moving forward by 30 seconds.

```sql
select avg(voltage) from meters interval(1m) sliding(30s);
```

Whenever the above SQL statement is executed, all the existing data will be computed again. If the computation needs to be performed automatically every 30 seconds on the data in the past one minute, the above SQL statement needs to be revised as below, in which `{startTime}` stands for the beginning timestamp of the latest time window.

```sql
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
```

An easier way to achieve this is to prepend `create table {tableName} as` before the `select`.

```sql
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
```

A table named `avg_vol` will be created automatically. Every 30 seconds, the `select` statement is then executed automatically on the data in the past 1 minute, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:

```sql
taos> select * from avg_vol;
            ts            |        avg_voltage_        |
===================================================
 2020-07-29 13:37:30.000 |            222.0000000 |
 2020-07-29 13:38:00.000 |            221.3500000 |
 2020-07-29 13:38:30.000 |            220.1700000 |
 2020-07-29 13:39:00.000 |            223.0800000 |
```

Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.

It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will run indefinitely; otherwise, it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below starts now and terminates one hour later.

```sql
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
```

`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To minimize the impact of delays in receiving data, the actual computation in a continuous query is started after a short delay. That means that once a time window closes, the computation is not started immediately. Normally, the results become available a short time, typically within one minute, after the time window closes.

## How to Manage

The `show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query.
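A minimal sketch of these two management commands follows; the stream ID is hypothetical and should be taken from the `show streams` output, where it appears as a connection-id:stream-no pair:

```sql
-- List all continuous queries currently defined in the system.
show streams;
-- Terminate one continuous query by the ID reported by `show streams`
-- (the 1:1 value here is a placeholder, not a real ID).
kill stream 1:1;
```
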
@@ -1,259 +0,0 @@

---
sidebar_label: Data Subscription
description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
title: Data Subscription
---

import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import Java from "./_sub_java.mdx";
import Python from "./_sub_python.mdx";
import Go from "./_sub_go.mdx";
import Rust from "./_sub_rust.mdx";
import Node from "./_sub_node.mdx";
import CSharp from "./_sub_cs.mdx";
import CDemo from "./_sub_c.mdx";

## Introduction

Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.

A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so, the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.

There are 3 major APIs related to subscription provided in the TDengine client driver.

```c
taos_subscribe
taos_consume
taos_unsubscribe
```

For more details about these APIs, please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of the STable and subtables from the previous section [Continuous Query](/develop/continuous-query) is used. Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).

Suppose we want to get a notification and take some action whenever the current of any meter exceeds a threshold, like 10A. There are two ways:

The first way is to query each subtable and record the last timestamp matching the criteria. Then after some time, query the data later than the recorded timestamp, and repeat this process. The SQL statements for this approach are as below.

```sql
select * from D1001 where ts > {last_timestamp1} and current > 10;
select * from D1002 where ts > {last_timestamp2} and current > 10;
...
```

The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both the client side and the server side will become unacceptable once the number of meters grows large.

A better way is to query on the STable; only one `select` is needed regardless of the number of meters, like below:

```sql
select * from meters where ts > {last_timestamp} and current > 10;
```

However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database; sometimes the difference between them can be very large. Second, the time when the data from different meters arrives at the database may differ too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.

All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.

The first step is to create a subscription using `taos_subscribe`.

```c
TAOS_SUB* tsub = NULL;
if (async) {
  // create an asynchronous subscription; the callback function will be called every 1s
  tsub = taos_subscribe(taos, restart, topic, sql, subscribe_callback, &blockFetch, 1000);
} else {
  // create a synchronous subscription; 'taos_consume' needs to be called manually
  tsub = taos_subscribe(taos, restart, topic, sql, NULL, NULL, 0);
}
```

The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input, then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data; async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and passes it to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time-consuming operations in the callback function.

The parameter `taos` is an established connection. Nothing special needs to be done for thread safety for synchronous subscription. For asynchronous subscription, the `taos_subscribe` function should be called exclusively by the current thread, to avoid unpredictable errors.

The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:

```sql
select * from meters where current > 10;
```

Please note that all the data will be processed because no start time is specified. If we only want to process data for the past day, a time-related condition can be added:

```sql
select * from meters where ts > now - 1d and current > 10;
```

The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.

If the subscription named `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, the parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken.

If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.

The last parameter of `taos_subscribe` is the polling interval in milliseconds. In sync mode, if the time difference between two consecutive invocations of `taos_consume` is smaller than the interval specified in `taos_subscribe`, `taos_consume` will block until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.

The second-to-last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is simply ignored in sync mode.

After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else branch of `if (async)`.

```c
if (async) {
  getchar();
} else while(1) {
  // poll for new rows matching the subscription's select statement
  TAOS_RES* res = taos_consume(tsub);
  if (res == NULL) {
    printf("failed to consume data.");
    break;
  } else {
    print_result(res, blockFetch);
    getchar();
  }
}
```

In the above sample code in the else branch, there is an infinite loop. Each time carriage return is entered, `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. It is similar to `taos_use_result`. Below is the implementation of `print_result`.

```c
void print_result(TAOS_RES* res, int blockFetch) {
  TAOS_ROW    row = NULL;
  int         num_fields = taos_num_fields(res);
  TAOS_FIELD* fields = taos_fetch_fields(res);
  int         nRows = 0;
  if (blockFetch) {
    // fetch the whole result set as one block of rows
    nRows = taos_fetch_block(res, &row);
    for (int i = 0; i < nRows; i++) {
      char temp[256];
      taos_print_row(temp, row + i, fields, num_fields);
      puts(temp);
    }
  } else {
    // fetch and print the rows one by one
    while ((row = taos_fetch_row(res))) {
      char temp[256];
      taos_print_row(temp, row, fields, num_fields);
      puts(temp);
      nRows++;
    }
  }
  printf("%d rows consumed.\n", nRows);
}
```

In the above code, `taos_print_row` is used to process the data consumed. All matching rows are printed.

In async mode, consuming data is simpler, as shown below.

```c
void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
  print_result(res, *(int*)param);
}
```

`taos_unsubscribe` can be invoked to terminate a subscription.

```c
taos_unsubscribe(tsub, keep);
```

The second parameter, `keep`, is used to specify whether to keep the subscription progress on the client side. If it is **false** (i.e. **0**), the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_, under which there is a file with the same name as `topic` for each subscription. (Note: the default value of `DataDir` in the `taos.cfg` file is **/var/lib/taos/**. However, **/var/lib/taos/** does not exist on Windows servers, so you need to change the `DataDir` value to an existing directory.) The subscription will be restarted from the beginning if the corresponding progress file is removed.

Now let's see the effect of the above sample code, assuming the prerequisites below have been met.

- The sample code has been downloaded to the local system
- TDengine has been installed and launched properly on the same system
- The database, STable, and subtables required in the sample code are ready

Launch the command below in the directory where the sample code resides to compile and start the program.

```bash
make
./subscribe -sql='select * from meters where current > 10;'
```

After the program is started, open another terminal and launch the TDengine CLI `taos`, then use the SQL commands below to insert a row whose current is 12A into table **D1001**.

```sql
use test;
insert into D1001 values(now, 12, 220, 1);
```

Then, this row of data will be shown by the example program on the first terminal because its current exceeds 10A. More data can be inserted for you to observe the output of the example program.

## Examples

The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.

### Prepare Data

```bash
# create database "power"
taos> create database power;
# use "power" as the database in following operations
taos> use power;
# create super table "meters"
taos> create table meters(ts timestamp, current float, voltage int, phase int) tags(location binary(64), groupId int);
# create tables using the schema defined by super table "meters"
taos> create table d1001 using meters tags ("California.SanFrancisco", 2);
taos> create table d1002 using meters tags ("California.LosAngeles", 2);
# insert some rows
taos> insert into d1001 values("2020-08-15 12:00:00.000", 12, 220, 1),("2020-08-15 12:10:00.000", 12.3, 220, 2),("2020-08-15 12:20:00.000", 12.2, 220, 1);
taos> insert into d1002 values("2020-08-15 12:00:00.000", 9.9, 220, 1),("2020-08-15 12:10:00.000", 10.3, 220, 1),("2020-08-15 12:20:00.000", 11.2, 220, 1);
# filter out the rows in which current is bigger than 10A
taos> select * from meters where current > 10;
           ts            |  current  | voltage | phase |        location         | groupid |
===========================================================================================================
 2020-08-15 12:10:00.000 | 10.30000 | 220 | 1 | California.LosAngeles | 2 |
 2020-08-15 12:20:00.000 | 11.20000 | 220 | 1 | California.LosAngeles | 2 |
 2020-08-15 12:00:00.000 | 12.00000 | 220 | 1 | California.SanFrancisco | 2 |
 2020-08-15 12:10:00.000 | 12.30000 | 220 | 2 | California.SanFrancisco | 2 |
 2020-08-15 12:20:00.000 | 12.20000 | 220 | 1 | California.SanFrancisco | 2 |
Query OK, 5 row(s) in set (0.004896s)
```

### Example Programs

<Tabs defaultValue="java" groupId="lang">
<TabItem label="Java" value="java">
<Java />
</TabItem>
<TabItem label="Python" value="Python">
<Python />
</TabItem>
{/* <TabItem label="Go" value="go">
<Go/>
</TabItem> */}
<TabItem label="Rust" value="rust">
<Rust />
</TabItem>
{/* <TabItem label="Node.js" value="nodejs">
<Node/>
</TabItem>
<TabItem label="C#" value="csharp">
<CSharp/>
</TabItem> */}
<TabItem label="C" value="c">
<CDemo />
</TabItem>
</Tabs>

### Run the Examples

The example programs first consume all historical data matching the criteria.

```bash
ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
ts: 1597464600000 current: 12.3 voltage: 220 phase: 2 location: California.SanFrancisco groupid : 2
ts: 1597465200000 current: 12.2 voltage: 220 phase: 1 location: California.SanFrancisco groupid : 2
ts: 1597464600000 current: 10.3 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
ts: 1597465200000 current: 11.2 voltage: 220 phase: 1 location: California.LosAngeles groupid : 2
```

Next, use the TDengine CLI to insert a new row.

```
# taos
taos> use power;
taos> insert into d1001 values(now, 12.4, 220, 1);
```

Because the current in the inserted row exceeds 10A, it will be consumed by the example program.

```
ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
```
@@ -1,126 +0,0 @@

---
title: Deployment
---

## Prerequisites

### Step 1

The FQDN of all hosts must be set up properly. All FQDNs need to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host; you can do this by using the `ping` command.

The command `hostname -f` can be executed to get the hostname of any host. The `ping <FQDN>` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised, to make any two hosts accessible to each other.

:::note

- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.

- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.

:::

### Step 2

If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling, please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/\*` assuming the `dataDir` is configured as `/var/lib/taos`.

:::note

As a best practice, before cleaning up any data files or directories, please ensure that your data has been backed up correctly, if required by your data integrity, backup, security, or other standard operating protocols (SOP).

:::

### Step 3

Now it's time to install TDengine on all hosts, but without starting `taosd`. Note that the versions on all hosts should be the same. If you are prompted to input the existing TDengine cluster, simply press carriage return to ignore the prompt. `install.sh -e no` can also be used to disable this prompt. For details, please refer to [Install and Uninstall](/operation/pkg-install).

### Step 4

Now each physical node (referred to, hereinafter, as `dnode`, an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host. Multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicting. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.

```c
// firstEp is the end point to connect to when any dnode starts
firstEp h1.taosdata.com:6030

// must be configured to the FQDN of the host where the dnode is launched
fqdn h1.taosdata.com

// the port used by the dnode, default is 6030
serverPort 6030

// only necessary when replica is configured to an even number
#arbitrator ha.taosdata.com:6042
```

`firstEp` and `fqdn` must be configured properly. In the `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must be configured to point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resource-related parameters are not conflicting.

For all the dnodes in a TDengine cluster, the parameters below must be configured exactly the same; any node whose configuration is different from dnodes already in the cluster can't join the cluster.

| **#** | **Parameter** | **Definition** |
| ----- | -------------- | ------------------------------------------------------------- |
| 1 | statusInterval | The time interval at which a dnode reports its status to the mnode |
| 2 | timezone | Time zone where the server is located |
| 3 | locale | Location code of the system |
| 4 | charset | Character set of the system |

## Start Cluster

In the following example, we assume that the first dnode has FQDN h1.taosdata.com and the second dnode has FQDN h2.taosdata.com.

### Start The First DNODE

Start the first dnode following the instructions in [Get Started](/get-started/). Then launch the TDengine CLI `taos` and execute the command `show dnodes`; the output is as follows, for example:

```
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.

Server is Enterprise trial Edition, ver:3.0.0.0 and will never expire.

taos> show dnodes;
 id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
 1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
Query OK, 1 rows affected (0.007984s)

taos>
```

From the above output, it is shown that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster.

### Start Other DNODEs

There are a few steps necessary to add other dnodes into the cluster.

Let's assume we are starting the second dnode with FQDN h2.taosdata.com. First, we make sure the configuration is correct.

```c
// firstEp is the end point to connect to when any dnode starts
firstEp h1.taosdata.com:6030

// must be configured to the FQDN of the host where the dnode is launched
fqdn h2.taosdata.com

// the port used by the dnode, default is 6030
serverPort 6030
```

Second, we can start `taosd` as instructed in [Get Started](/get-started/).

Then, on the first dnode, i.e. h1.taosdata.com in our example, use the TDengine CLI `taos` to execute the following command to add the end point of the dnode to the cluster. In the command, "fqdn:port" should be quoted using double quotes.

```sql
CREATE DNODE "h2.taosdata.com:6030";
```

Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos` to check whether the second dnode has been added to the cluster successfully.

```sql
SHOW DNODES;
```

If the status of the newly added dnode is offline, please check:

- Whether the `taosd` process is running properly or not
- The log file `taosdlog.0` to see whether the fqdn and port are correct

The above process can be repeated to add more dnodes to the cluster.
@@ -1,138 +0,0 @@

---
sidebar_label: Operation
title: Manage DNODEs
---

The previous section, [Deployment](/cluster/deploy), showed you how to deploy and start a cluster from scratch. Once a cluster is ready, the status of dnode(s) in the cluster can be shown at any time. Dnodes can be managed from the TDengine CLI. New dnode(s) can be added to scale out the cluster, an existing dnode can be removed, and you can even perform load balancing manually, if necessary.

:::note
All the commands introduced in this chapter must be run in the TDengine CLI - `taos`. Note that sometimes it is necessary to use root privilege.

:::

## Show DNODEs

The command below can be executed in the TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes and so on. We recommend executing this command after adding or removing a dnode.

```sql
SHOW DNODES;
```

Below is example output of this command.

```
taos> show dnodes;
 id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
Query OK, 1 row(s) in set (0.008298s)
```

## Show VGROUPs

To utilize system resources efficiently and provide scalability, data sharding is required. The data of each database is divided into multiple shards and stored in multiple vnodes. These vnodes may be located on different dnodes. One way of scaling out is to add more vnodes on dnodes. Each vnode can only be used for a single DB, but one DB can have multiple vnodes. The allocation of vnodes is scheduled automatically by the mnode based on the system resources of the dnodes.

Launch the TDengine CLI `taos` and execute the command below:

```sql
USE SOME_DATABASE;
SHOW VGROUPS;
```

The output is like below:

```
taos> use db;
Database changed.

taos> show vgroups;
 vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
==========================================================================================
 14 | 38000 | ready | 1 | 1 | leader | 0 |
 15 | 38000 | ready | 1 | 1 | leader | 0 |
 16 | 38000 | ready | 1 | 1 | leader | 0 |
 17 | 38000 | ready | 1 | 1 | leader | 0 |
 18 | 37001 | ready | 1 | 1 | leader | 0 |
 19 | 37000 | ready | 1 | 1 | leader | 0 |
 20 | 37000 | ready | 1 | 1 | leader | 0 |
 21 | 37000 | ready | 1 | 1 | leader | 0 |
Query OK, 8 row(s) in set (0.001154s)
```

## Add DNODE

Launch the TDengine CLI `taos` and execute the command below to add the end point of a new dnode into the EP (end point) list of the cluster. "fqdn:port" must be quoted using double quotes.

```sql
CREATE DNODE "fqdn:port";
```

The example output is as below:

```
taos> create dnode "localhost:7030";
Query OK, 0 of 0 row(s) in database (0.008203s)

taos> show dnodes;
 id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
Query OK, 2 row(s) in set (0.001017s)
```

It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, you can execute the command again and get the example output below. As can be seen, both dnodes are in "ready" status.

```
taos> show dnodes;
 id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
 1 | localhost:6030 | 3 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
 2 | localhost:7030 | 6 | 8 | ready | any | 2022-04-19 08:14:59.165 | |
Query OK, 2 row(s) in set (0.001316s)
```

## Drop DNODE

Launch the TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`.

```sql
DROP DNODE "fqdn:port";
```

or

```sql
DROP DNODE dnodeId;
```

The example output is below:

```
taos> show dnodes;
 id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
 2 | localhost:7030 | 0 | 0 | offline | any | 2022-04-19 08:11:42.158 | status not received |
Query OK, 2 row(s) in set (0.001017s)

taos> drop dnode 2;
Query OK, 0 of 0 row(s) in database (0.000518s)

taos> show dnodes;
 id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
 1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
Query OK, 1 row(s) in set (0.001137s)
```

In the above example, when `show dnodes` is executed the first time, two dnodes are shown. After `drop dnode 2` is executed, you can execute `show dnodes` again and it can be seen that only the dnode with ID 1 is still in the cluster.

:::note

- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Before dropping a dnode, the data belonging to the dnode MUST be migrated/backed up according to your data retention, data security or other SOPs.
- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept requests from the dropped dnode.
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.

:::
@@ -1 +0,0 @@

label: Cluster
@@ -1,15 +0,0 @@

---
title: Cluster
keywords: ["cluster", "high availability", "load balance", "scale out"]
---

TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.

This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing.

```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';

<DocCardList items={useCurrentSidebarCategory().items}/>
```
@@ -171,8 +171,8 @@ The \_QSTART and \_QEND pseudocolumns contain the beginning and end of the time

 The \_QSTART and \_QEND pseudocolumns cannot be used in a WHERE clause.

-**\_WSTART, \_WEND, and \_DURATION**
-\_WSTART, \_WEND, and \_WDURATION pseudocolumns
+**\_WSTART, \_WEND, and \_WDURATION**

 The \_WSTART, \_WEND, and \_WDURATION pseudocolumns indicate the beginning, end, and duration of a window.

 These pseudocolumns can be used only in time window-based aggregations and must occur after the aggregation clause.

@@ -1232,7 +1232,7 @@ SELECT SERVER_VERSION();

 ### SERVER_STATUS

 ```sql
-SELECT SERVER_VERSION();
+SELECT SERVER_STATUS();
 ```

 **Description**: The server status.

@@ -58,6 +58,15 @@ The following restrictions apply:

 - The window clause cannot be used with a GROUP BY clause.
+- The `WHERE` clause can be used to specify the starting and ending time and other filter conditions.
+
+### Window Pseudocolumns
+
+**\_WSTART, \_WEND, and \_WDURATION**
+
+The \_WSTART, \_WEND, and \_WDURATION pseudocolumns indicate the beginning, end, and duration of a window.
+
+These pseudocolumns occur after the aggregation clause.
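For illustration, here is a sketch of an interval query returning these pseudocolumns; the `meters` table and its `current` column are the demo schema used elsewhere in these docs, so treat the names as assumptions:

```sql
-- Each output row carries the window start, end, and duration
-- alongside the aggregate computed over that window.
SELECT _wstart, _wend, _wduration, AVG(current)
FROM meters
INTERVAL(10m);
```
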
 ### FILL Clause

 The `FILL` clause is used to specify how to fill when there is data missing in any window, including:

@ -10,131 +10,7 @@ TDengine community version provides deb and rpm packages for users to choose fro
|
|||
|
||||
## Install
|
||||
|
||||
<Tabs>
|
||||
<TabItem label="Install Deb" value="debinst">
|
||||
|
||||
1. Download deb package from official website, for example TDengine-server-3.0.0.0-Linux-x64.deb
|
||||
2. In the directory where the package is located, execute the command below
|
||||
|
||||
```bash
|
||||
$ sudo dpkg -i TDengine-server-3.0.0.0-Linux-x64.deb
|
||||
(Reading database ... 137504 files and directories currently installed.)
|
||||
Preparing to unpack TDengine-server-3.0.0.0-Linux-x64.deb ...
|
||||
TDengine is removed successfully!
|
||||
Unpacking tdengine (3.0.0.0) over (3.0.0.0) ...
|
||||
Setting up tdengine (3.0.0.0) ...
|
||||
Start to install TDengine...
|
||||
|
||||
System hostname is: ubuntu-1804
|
||||
|
||||
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
|
||||
OR leave it blank to build one:
|
||||
|
||||
Enter your email address for priority support or enter empty to skip:
|
||||
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
|
||||
|
||||
To configure TDengine : edit /etc/taos/taos.cfg
|
||||
To start TDengine : sudo systemctl start taosd
|
||||
To access TDengine : taos -h ubuntu-1804 to login into TDengine server
|
||||
|
||||
|
||||
TDengine is installed successfully!
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Install RPM" value="rpminst">
|
||||
|
||||
1. Download rpm package from official website, for example TDengine-server-3.0.0.0-Linux-x64.rpm;
|
||||
2. In the directory where the package is located, execute the command below
|
||||
|
||||
```
|
||||
$ sudo rpm -ivh TDengine-server-3.0.0.0-Linux-x64.rpm
|
||||
Preparing... ################################# [100%]
|
||||
Updating / installing...
|
||||
1:tdengine-3.0.0.0-3 ################################# [100%]
|
||||
Start to install TDengine...
|
||||
|
||||
System hostname is: centos7
|
||||
|
||||
Enter FQDN:port (like h1.taosdata.com:6030) of an existing TDengine cluster node to join
|
||||
OR leave it blank to build one:
|
||||
|
||||
Enter your email address for priority support or enter empty to skip:
|
||||
|
||||
Created symlink from /etc/systemd/system/multi-user.target.wants/taosd.service to /etc/systemd/system/taosd.service.
|
||||
|
||||
To configure TDengine : edit /etc/taos/taos.cfg
|
||||
To start TDengine : sudo systemctl start taosd
|
||||
To access TDengine : taos -h centos7 to login into TDengine server
|
||||
|
||||
|
||||
TDengine is installed successfully!
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem label="Install tar.gz" value="tarinst">
|
||||
|
||||
1. Download the tar.gz package, for example TDengine-server-3.0.0.0-Linux-x64.tar.gz;
|
||||
2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated in decompressing, i.e. "TDengine-enterprise-server-3.0.0.0/" in this example, and execute the `install.sh` script.
|
||||
|
||||
```bash
|
||||
$ tar xvzf TDengine-enterprise-server-3.0.0.0-Linux-x64.tar.gz
|
||||
TDengine-enterprise-server-3.0.0.0/
|
||||
TDengine-enterprise-server-3.0.0.0/driver/
|
||||
TDengine-enterprise-server-3.0.0.0/driver/vercomp.txt
|
||||
TDengine-enterprise-server-3.0.0.0/driver/libtaos.so.3.0.0.0
|
||||
TDengine-enterprise-server-3.0.0.0/install.sh
|
||||
TDengine-enterprise-server-3.0.0.0/examples/
|
||||
...
|
||||
|
||||
$ ll
|
||||
total 43816
|
||||
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ./
|
||||
drwxr-xr-x 20 ubuntu ubuntu 4096 Feb 22 09:30 ../
|
||||
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 TDengine-enterprise-server-3.0.0.0/
|
||||
-rw-rw-r-- 1 ubuntu ubuntu 44852544 Feb 22 09:31 TDengine-enterprise-server-3.0.0.0-Linux-x64.tar.gz
|
||||
|
||||
$ cd TDengine-enterprise-server-3.0.0.0/
|
||||
|
||||
$ ll
|
||||
total 40784
|
||||
drwxrwxr-x 4 ubuntu ubuntu 4096 Feb 22 09:30 ./
|
||||
drwxrwxr-x 3 ubuntu ubuntu 4096 Feb 22 09:31 ../
|
||||
drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 22 09:30 driver/
|
||||
drwxrwxr-x 10 ubuntu ubuntu 4096 Feb 22 09:30 examples/
|
||||
-rwxrwxr-x 1 ubuntu ubuntu 33294 Feb 22 09:30 install.sh*
|
||||
-rw-rw-r-- 1 ubuntu ubuntu 41704288 Feb 22 09:30 taos.tar.gz
|
||||
|
||||
$ sudo ./install.sh
|
||||
|
||||
Start to update TDengine...
|
||||
Created symlink /etc/systemd/system/multi-user.target.wants/taosd.service → /etc/systemd/system/taosd.service.
|
||||
Nginx for TDengine is updated successfully!
|
||||
|
||||
To configure TDengine : edit /etc/taos/taos.cfg
|
||||
To configure Taos Adapter (if has) : edit /etc/taos/taosadapter.toml
|
||||
To start TDengine : sudo systemctl start taosd
|
||||
To access TDengine : use taos -h ubuntu-1804 in shell OR from http://127.0.0.1:6060
|
||||
|
||||
TDengine is updated successfully!
|
||||
Install taoskeeper as a standalone service
|
||||
taoskeeper is installed, enable it by `systemctl enable taoskeeper`
|
||||
```
|
||||
|
||||
:::info
|
||||
Users will be prompted to enter some configuration information when install.sh is executing. The interactive mode can be disabled by executing `./install.sh -e no`. `./install.sh -h` can show all parameters with detailed explanation.
|
||||
|
||||
:::
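
For example, an unattended installation that skips the prompts might look like this (using the directory from the transcript above):

```bash
cd TDengine-enterprise-server-3.0.0.0/
sudo ./install.sh -e no
```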

</TabItem>
</Tabs>

:::note
When installing on the first node in the cluster, nothing needs to be provided at the "Enter FQDN:" prompt. When installing on subsequent nodes, you must enter the end point of the first dnode in the cluster at the "Enter FQDN:" prompt if it is already up. You can also just ignore it and configure it later, after installation is finished.
:::

For details about installing TDengine, please refer to the [Installation Guide](../../get-started/package/).

## Uninstall

@ -1,50 +0,0 @@
---
title: User Management
---

A system operator can use the TDengine CLI `taos` to create or remove users, or to change passwords. The SQL commands are documented below:

## Create User

```sql
CREATE USER <user_name> PASS <'password'>;
```

When creating a user and specifying the user name and password, the password needs to be quoted using single quotes.
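
A minimal sketch with a hypothetical user name and password (both illustrative):

```sql
CREATE USER test1 PASS 'Ab1!2345';
```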

## Drop User

```sql
DROP USER <user_name>;
```

Dropping a user can only be performed by root.

## Change Password

```sql
ALTER USER <user_name> PASS <'password'>;
```

To keep the case of the password when changing it, the password needs to be quoted using single quotes.

## Change Privilege

```sql
ALTER USER <user_name> PRIVILEGE <write|read>;
```

The privilege can be changed to either `read` or `write`, written without single quotes.
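
For example, to make the hypothetical user above read-only:

```sql
ALTER USER test1 PRIVILEGE read;
```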

Note: there is another privilege, `super`, which cannot be granted to any user.

## Show Users

```sql
SHOW USERS;
```

:::note
In SQL syntax, `< >` indicates a part that needs to be input by the user, excluding the `< >` itself.
:::

@ -1,54 +0,0 @@

---
sidebar_label: Connections & Tasks
title: Manage Connections and Query Tasks
---

A system operator can use the TDengine CLI to show connections, ongoing queries, and stream computing, and can close connections or stop ongoing query tasks or stream computing.

## Show Connections

```sql
SHOW CONNECTIONS;
```

One column of the output of the above SQL command is "ip:port", which is the end point of the client.

## Force Close Connections

```sql
KILL CONNECTION <connection-id>;
```

In the above SQL command, `connection-id` is from the first column of the output of `SHOW CONNECTIONS`.
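
For instance, assuming `SHOW CONNECTIONS` lists a stale connection whose ID is 1 (an illustrative value):

```sql
KILL CONNECTION 1;
```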

## Show Ongoing Queries

```sql
SHOW QUERIES;
```

The first column of the output is the query ID, which is composed of the corresponding connection ID and the sequence number of the current query task started on this connection. The format is "connection-id:query-no".

## Force Close Queries

```sql
KILL QUERY <query-id>;
```

In the above SQL command, `query-id` is from the first column of the output of `SHOW QUERIES`.
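
For example, assuming `SHOW QUERIES` lists a runaway query whose ID is 1:3 (an illustrative value in the "connection-id:query-no" format):

```sql
KILL QUERY 1:3;
```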

## Show Continuous Query

```sql
SHOW STREAMS;
```

The first column of the output is the stream ID, which is composed of the connection ID and the sequence number of the current stream started on this connection. The format is "connection-id:stream-no".

## Force Close Continuous Query

```sql
KILL STREAM <stream-id>;
```

In the above SQL command, `stream-id` is from the first column of the output of `SHOW STREAMS`.

@ -419,11 +419,11 @@ Note that once the installation is complete, do not immediately start the `taosd

To ensure that the system can obtain the necessary information for regular operation, please set the following vital parameters correctly on the server:

FQDN, firstEp, secondEP, dataDir, logDir, tmpDir, serverPort. For the specific meaning and setting requirements of each parameter, please refer to the document "[TDengine Cluster Installation and Management](/cluster/)"
FQDN, firstEp, secondEP, dataDir, logDir, tmpDir, serverPort. For the specific meaning and setting requirements of each parameter, please refer to the document "[TDengine Cluster Deployment](../../deployment)"
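
An illustrative excerpt of these entries in `/etc/taos/taos.cfg` (host names and paths below are placeholders, not values taken from this document):

```
firstEp    h1.taosdata.com:6030
secondEp   h2.taosdata.com:6030
fqdn       h1.taosdata.com
serverPort 6030
dataDir    /var/lib/taos
logDir     /var/log/taos
tmpDir     /tmp/taos
```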

Follow the same steps to set parameters on the other nodes, start the taosd service, and then add Dnodes to the cluster.

Finally, start `taos` and execute the `show dnodes` command. If you can see all the nodes that have joined the cluster, the cluster building process was successfully completed. For specific operation procedures and precautions, please refer to the document "[TDengine Cluster Installation and Management](/cluster/)".
Finally, start `taos` and execute the `show dnodes` command. If you can see all the nodes that have joined the cluster, the cluster building process was successfully completed. For specific operation procedures and precautions, please refer to the document "[TDengine Cluster Deployment](../../deployment)".

## Appendix 4: Super Table Names

@ -431,5 +431,5 @@ Since OpenTSDB's metric name has a dot (".") in it, for example, a metric with a

## Appendix 5: Reference Articles

1. [Using TDengine + collectd/StatsD + Grafana to quickly build an IT operation and maintenance monitoring system](/application/collectd/)
2. [Write collected data directly to TDengine through collectd](/third-party/collectd/)
1. [Using TDengine + collectd/StatsD + Grafana to quickly build an IT operation and maintenance monitoring system](../collectd/)
2. [Write collected data directly to TDengine through collectd](../collectd/)

@ -72,6 +72,9 @@ While executing, the install.sh script asks for configuration information through an interactive command-line interface

1. Download the exe installer from the list;
   <PkgListV3 type={3}/>
2. Run the executable to install TDengine.

:::info
Currently, TDengine on the Windows platform only supports Windows Server 2016/2019 and Windows 10/11.
:::

</TabItem>
<TabItem value="apt-get" label="apt-get">

@ -276,6 +276,7 @@ typedef struct SSelectStmt {
  bool hasLastRowFunc;
  bool hasTimeLineFunc;
  bool hasUdaf;
  bool hasStateKey;
  bool onlyHasKeepOrderFunc;
  bool groupSort;
} SSelectStmt;

@ -386,7 +386,7 @@ typedef enum ELogicConditionType {

#define TSDB_DEFAULT_EXPLAIN_VERBOSE false

#define TSDB_EXPLAIN_RESULT_ROW_SIZE    512
#define TSDB_EXPLAIN_RESULT_ROW_SIZE    (16*1024)
#define TSDB_EXPLAIN_RESULT_COLUMN_NAME "QUERY_PLAN"

#define TSDB_MAX_FIELD_LEN 16384

@ -422,6 +422,8 @@ typedef struct {
  STsdb     *pTsdb;         // [input]
  SBlockIdx *pBlockIdxExp;  // [input]
  STSchema  *pTSchema;      // [input]
  tb_uid_t   suid;
  tb_uid_t   uid;
  int32_t    nFileSet;
  int32_t    iFileSet;
  SArray    *aDFileSet;

@ -494,6 +496,8 @@ static int32_t getNextRowFromFSLast(void *iter, TSDBROW **ppRow) {

      if (!state->pBlockDataL) {
        state->pBlockDataL = &state->blockDataL;

        tBlockDataCreate(state->pBlockDataL);
      }
      code = tBlockDataInit(state->pBlockDataL, suid, suid ? 0 : uid, state->pTSchema);
      if (code) goto _err;

@ -593,6 +597,9 @@ typedef struct SFSNextRowIter {
  SFSNEXTROWSTATES state;          // [input]
  STsdb           *pTsdb;          // [input]
  SBlockIdx       *pBlockIdxExp;   // [input]
  STSchema        *pTSchema;       // [input]
  tb_uid_t         suid;
  tb_uid_t         uid;
  int32_t          nFileSet;
  int32_t          iFileSet;
  SArray          *aDFileSet;

@ -685,6 +692,10 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow) {

      tMapDataGetItemByIdx(&state->blockMap, state->iBlock, &block, tGetBlock);
      /* code = tsdbReadBlockData(state->pDataFReader, &state->blockIdx, &block, &state->blockData, NULL, NULL); */
      tBlockDataReset(state->pBlockData);
      code = tBlockDataInit(state->pBlockData, state->suid, state->uid, state->pTSchema);
      if (code) goto _err;

      code = tsdbReadDataBlock(state->pDataFReader, &block, state->pBlockData);
      if (code) goto _err;

@ -958,16 +969,21 @@ static int32_t nextRowIterOpen(CacheNextRowIter *pIter, tb_uid_t uid, STsdb *pTs

  pIter->idx = (SBlockIdx){.suid = suid, .uid = uid};

  pIter->fsLastState.state = (SFSLASTNEXTROWSTATES) SFSNEXTROW_FS;
  pIter->fsLastState.state = (SFSLASTNEXTROWSTATES)SFSNEXTROW_FS;
  pIter->fsLastState.pTsdb = pTsdb;
  pIter->fsLastState.aDFileSet = pIter->pReadSnap->fs.aDFileSet;
  pIter->fsLastState.pBlockIdxExp = &pIter->idx;
  pIter->fsLastState.pTSchema = pTSchema;
  pIter->fsLastState.suid = suid;
  pIter->fsLastState.uid = uid;

  pIter->fsState.state = SFSNEXTROW_FS;
  pIter->fsState.pTsdb = pTsdb;
  pIter->fsState.aDFileSet = pIter->pReadSnap->fs.aDFileSet;
  pIter->fsState.pBlockIdxExp = &pIter->idx;
  pIter->fsState.pTSchema = pTSchema;
  pIter->fsState.suid = suid;
  pIter->fsState.uid = uid;

  pIter->input[0] = (TsdbNextRowState){&pIter->memRow, true, false, &pIter->memState, getNextRowFromMem, NULL};
  pIter->input[1] = (TsdbNextRowState){&pIter->imemRow, true, false, &pIter->imemState, getNextRowFromMem, NULL};

@ -1402,7 +1402,7 @@ static int32_t doMergeBufAndFileRows_Rv(STsdbReader* pReader, STableBlockScanInf
  SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo;

  int64_t tsLast = INT64_MIN;
  if (pLastBlockReader->lastBlockData.nRow > 0) {
  if ((pLastBlockReader->lastBlockData.nRow > 0) && hasDataInLastBlock(pLastBlockReader)) {
    tsLast = getCurrentKeyInLastBlock(pLastBlockReader);
  }

@ -1595,7 +1595,10 @@ static int32_t doMergeMultiLevelRowsRv(STsdbReader* pReader, STableBlockScanInfo
  ASSERT(pRow != NULL && piRow != NULL);

  SBlockData* pLastBlockData = &pLastBlockReader->lastBlockData;
  int64_t tsLast = getCurrentKeyInLastBlock(pLastBlockReader);
  int64_t tsLast = INT64_MIN;
  if (hasDataInLastBlock(pLastBlockReader)) {
    tsLast = getCurrentKeyInLastBlock(pLastBlockReader);
  }

  int64_t key = pBlockData->aTSKEY[pDumpInfo->rowIndex];

@ -1617,7 +1620,7 @@ static int32_t doMergeMultiLevelRowsRv(STsdbReader* pReader, STableBlockScanInfo
      minKey = key;
    }

    if (minKey > tsLast && pLastBlockData->nRow > 0) {
    if (minKey > tsLast && hasDataInLastBlock(pLastBlockReader)) {
      minKey = tsLast;
    }
  } else {

@ -1634,7 +1637,7 @@ static int32_t doMergeMultiLevelRowsRv(STsdbReader* pReader, STableBlockScanInfo
      minKey = key;
    }

    if (minKey < tsLast && pLastBlockData->nRow > 0) {
    if (minKey < tsLast && hasDataInLastBlock(pLastBlockReader)) {
      minKey = tsLast;
    }
  }

@ -3043,7 +3046,12 @@ static int32_t checkForNeighborFileBlock(STsdbReader* pReader, STableBlockScanIn

  // 3. load the neighbor block, and set it to be the currently accessed file data block
  tBlockDataReset(&pStatus->fileBlockData);
  int32_t code = doLoadFileBlockData(pReader, pBlockIter, &pStatus->fileBlockData);
  int32_t code = tBlockDataInit(&pStatus->fileBlockData, pReader->suid, pFBlock->uid, pReader->pSchema);
  if (code != TSDB_CODE_SUCCESS) {
    return code;
  }

  code = doLoadFileBlockData(pReader, pBlockIter, &pStatus->fileBlockData);
  if (code != TSDB_CODE_SUCCESS) {
    return code;
  }

@ -100,7 +100,6 @@ extern "C" {
typedef struct SExplainGroup {
  int32_t   nodeNum;
  int32_t   physiPlanExecNum;
  int32_t   physiPlanNum;
  int32_t   physiPlanExecIdx;
  SRWLatch  lock;
  SSubplan *plan;

@ -296,8 +296,6 @@ int32_t qExplainGenerateResNode(SPhysiNode *pNode, SExplainGroup *group, SExplai

  QRY_ERR_JRET(qExplainGenerateResChildren(pNode, group, &resNode->pChildren));

  ++group->physiPlanNum;

  *pResNode = resNode;

  return TSDB_CODE_SUCCESS;

@ -1548,12 +1546,6 @@ int32_t qExplainAppendGroupResRows(void *pCtx, int32_t groupId, int32_t level) {

  QRY_ERR_RET(qExplainGenerateResNode(group->plan->pNode, group, &node));

  if ((EXPLAIN_MODE_ANALYZE == ctx->mode) && (group->physiPlanNum != group->physiPlanExecNum)) {
    qError("physiPlanNum %d mismatch with physiExecNum %d in group %d", group->physiPlanNum, group->physiPlanExecNum,
           groupId);
    QRY_ERR_JRET(TSDB_CODE_QRY_APP_ERROR);
  }

  QRY_ERR_JRET(qExplainResNodeToRows(node, ctx, level));

_return:

@ -408,6 +408,7 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray
  tags = taosHashInit(32, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
  code = metaGetTableTags(metaHandle, suid, uidList, tags);
  if (code != TSDB_CODE_SUCCESS) {
    qError("failed to get table tags from meta, reason:%s, suid:%" PRIu64, tstrerror(code), suid);
    terrno = code;
    goto end;
  }

@ -482,11 +483,13 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray
  SDataType type = {.type = TSDB_DATA_TYPE_BOOL, .bytes = sizeof(bool)};
  code = createResultData(&type, rows, &output);
  if (code != TSDB_CODE_SUCCESS) {
    qError("failed to create result, reason:%s", tstrerror(code));
    goto end;
  }

  code = scalarCalculate(pTagCond, pBlockList, &output);
  if (code != TSDB_CODE_SUCCESS) {
    qError("failed to calculate scalar, reason:%s", tstrerror(code));
    terrno = code;
  }
  // int64_t st2 = taosGetTimestampUs();

@ -3342,7 +3342,11 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
      pInfo->curGroupId = pInfo->pRes->info.groupId;  // the first data block
      pInfo->totalInputRows += pInfo->pRes->info.rows;

      taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, pBlock->info.window.ekey);
      if (order == pInfo->pFillInfo->order) {
        taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, pBlock->info.window.ekey);
      } else {
        taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, pBlock->info.window.skey);
      }
      taosFillSetInputDataBlock(pInfo->pFillInfo, pInfo->pRes);
    } else if (pInfo->curGroupId != pBlock->info.groupId) {  // the new group data block
      pInfo->existNewGroupBlock = pBlock;

@ -3711,13 +3715,20 @@ static int32_t initFillInfo(SFillOperatorInfo* pInfo, SExprInfo* pExpr, int32_t
                            const char* id, SInterval* pInterval, int32_t fillType, int32_t order) {
  SFillColInfo* pColInfo = createFillColInfo(pExpr, numOfCols, pNotFillExpr, numOfNotFillCols, pValNode);

  STimeWindow w = getAlignQueryTimeWindow(pInterval, pInterval->precision, win.skey);
  w = getFirstQualifiedTimeWindow(win.skey, &w, pInterval, TSDB_ORDER_ASC);
  int64_t startKey = (order == TSDB_ORDER_ASC) ? win.skey : win.ekey;
  STimeWindow w = getAlignQueryTimeWindow(pInterval, pInterval->precision, startKey);
  w = getFirstQualifiedTimeWindow(startKey, &w, pInterval, order);

  pInfo->pFillInfo = taosCreateFillInfo(w.skey, numOfCols, numOfNotFillCols, capacity, pInterval, fillType, pColInfo,
                                        pInfo->primaryTsCol, order, id);

  pInfo->win = win;
  if (order == TSDB_ORDER_ASC) {
    pInfo->win.skey = win.skey;
    pInfo->win.ekey = win.ekey;
  } else {
    pInfo->win.skey = win.ekey;
    pInfo->win.ekey = win.skey;
  }
  pInfo->p = taosMemoryCalloc(numOfCols, POINTER_BYTES);

  if (pInfo->pFillInfo == NULL || pInfo->p == NULL) {

@ -540,7 +540,7 @@ int64_t getNumOfResultsAfterFillGap(SFillInfo* pFillInfo, TSKEY ekey, int32_t ma

  int64_t numOfRes = -1;
  if (numOfRows > 0) {  // still fill gap within current data block, not generating data after the result set.
    TSKEY lastKey = (TSDB_ORDER_ASC == pFillInfo->order ? tsList[pFillInfo->numOfRows - 1] : tsList[0]);
    TSKEY lastKey = tsList[pFillInfo->numOfRows - 1];
    numOfRes = taosTimeCountInterval(lastKey, pFillInfo->currentKey, pFillInfo->interval.sliding,
                                     pFillInfo->interval.slidingUnit, pFillInfo->interval.precision);
    numOfRes += 1;

@ -468,7 +468,7 @@ int32_t functionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
  SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);

  SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
  pResInfo->isNullRes = (pResInfo->isNullRes == 1) ? 1 : (pResInfo->numOfRes == 0);
  pResInfo->isNullRes = (pResInfo->numOfRes == 0) ? 1 : 0;

  char* in = GET_ROWCELL_INTERBUF(pResInfo);
  colDataAppend(pCol, pBlock->info.rows, in, pResInfo->isNullRes);

@ -498,7 +498,7 @@ int32_t functionFinalizeWithResultBuf(SqlFunctionCtx* pCtx, SSDataBlock* pBlock,
  SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);

  SResultRowEntryInfo* pResInfo = GET_RES_INFO(pCtx);
  pResInfo->isNullRes = (pResInfo->isNullRes == 1) ? 1 : (pResInfo->numOfRes == 0);
  pResInfo->isNullRes = (pResInfo->numOfRes == 0) ? 1 : 0;

  char* in = finalResult;
  colDataAppend(pCol, pBlock->info.rows, in, pResInfo->isNullRes);

@ -663,8 +663,7 @@ int32_t sumFunction(SqlFunctionCtx* pCtx) {

  // check for overflow
  if (IS_FLOAT_TYPE(type) && (isinf(pSumRes->dsum) || isnan(pSumRes->dsum))) {
    GET_RES_INFO(pCtx)->isNullRes = 1;
    numOfElem = 1;
    numOfElem = 0;
  }

_sum_over:

@ -791,8 +790,7 @@ int32_t avgFunction(SqlFunctionCtx* pCtx) {
  int32_t numOfRows = pInput->numOfRows;

  if (IS_NULL_TYPE(type)) {
    GET_RES_INFO(pCtx)->isNullRes = 1;
    numOfElem = 1;
    numOfElem = 0;
    goto _avg_over;
  }

@ -1613,7 +1611,7 @@ int32_t minmaxFunctionFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
  int32_t currentRow = pBlock->info.rows;

  SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
  pEntryInfo->isNullRes = (pEntryInfo->isNullRes == 1) ? 1 : (pEntryInfo->numOfRes == 0);
  pEntryInfo->isNullRes = (pEntryInfo->numOfRes == 0) ? 1 : 0;

  if (pCol->info.type == TSDB_DATA_TYPE_FLOAT) {
    float v = *(double*)&pRes->v;

@ -1792,8 +1790,7 @@ int32_t stddevFunction(SqlFunctionCtx* pCtx) {
  int32_t numOfRows = pInput->numOfRows;

  if (IS_NULL_TYPE(type)) {
    GET_RES_INFO(pCtx)->isNullRes = 1;
    numOfElem = 1;
    numOfElem = 0;
    goto _stddev_over;
  }

@ -681,6 +681,11 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
      break;
    }

    char tmpTokenBuf[TSDB_COL_NAME_LEN + 2] = {0};  // used for stripping the escape character backtick (`)
    strncpy(tmpTokenBuf, sToken.z, sToken.n);
    sToken.z = tmpTokenBuf;
    sToken.n = strdequote(sToken.z);

    col_id_t t = lastColIdx + 1;
    col_id_t index = findCol(&sToken, t, nCols, pSchema);
    if (index < 0 && t > 0) {

@ -1881,6 +1881,12 @@ static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
      return rewriteExprToGroupKeyFunc(pCxt, pNode);
    }
  }
  if (NULL != pSelect->pWindow && QUERY_NODE_STATE_WINDOW == nodeType(pSelect->pWindow)) {
    if (nodesEqualNode(((SStateWindowNode*)pSelect->pWindow)->pExpr, *pNode)) {
      pSelect->hasStateKey = true;
      return rewriteExprToGroupKeyFunc(pCxt, pNode);
    }
  }
  if (isScanPseudoColumnFunc(*pNode) || QUERY_NODE_COLUMN == nodeType(*pNode)) {
    if (pSelect->selectFuncNum > 1 || pSelect->hasOtherVectorFunc || !pSelect->hasSelectFunc) {
      return generateDealNodeErrMsg(pCxt, getGroupByErrorCode(pCxt));

@ -1973,7 +1979,7 @@ static int32_t checkWindowFuncCoexist(STranslateContext* pCxt, SSelectStmt* pSel
  if (NULL == pSelect->pWindow) {
    return TSDB_CODE_SUCCESS;
  }
  if (NULL != pSelect->pWindow && !pSelect->hasAggFuncs) {
  if (NULL != pSelect->pWindow && !pSelect->hasAggFuncs && !pSelect->hasStateKey) {
    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NO_VALID_FUNC_IN_WIN);
  }
  return TSDB_CODE_SUCCESS;

@ -2825,7 +2831,7 @@ static int32_t createDefaultFillNode(STranslateContext* pCxt, SNode** pOutput) {
static int32_t checkEvery(STranslateContext* pCxt, SValueNode* pInterval) {
  int32_t len = strlen(pInterval->literal);

  char *unit = &pInterval->literal[len - 1];
  char* unit = &pInterval->literal[len - 1];
  if (*unit == 'n' || *unit == 'y') {
    return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_WRONG_VALUE_TYPE,
                                   "Unsupported time unit in EVERY clause");

@ -2837,7 +2843,7 @@ static int32_t checkEvery(STranslateContext* pCxt, SValueNode* pInterval) {
static int32_t translateInterpEvery(STranslateContext* pCxt, SNode** pEvery) {
  int32_t code = TSDB_CODE_SUCCESS;

  code = checkEvery(pCxt, (SValueNode *)(*pEvery));
  code = checkEvery(pCxt, (SValueNode*)(*pEvery));
  if (TSDB_CODE_SUCCESS == code) {
    code = translateExpr(pCxt, pEvery);
  }

@ -60,8 +60,7 @@ typedef enum {
#define SCH_DEFAULT_TASK_TIMEOUT_USEC 10000000
#define SCH_MAX_TASK_TIMEOUT_USEC     60000000
#define SCH_DEFAULT_MAX_RETRY_NUM     6

#define SCH_ASYNC_LAUNCH_TASK  0
#define SCH_MIN_AYSNC_EXEC_NUM 3

typedef struct SSchDebug {
  bool lockEnable;

@ -871,14 +871,14 @@ _return:

  taosMemoryFree(param);

#if SCH_ASYNC_LAUNCH_TASK
  if (code) {
    code = schProcessOnTaskFailure(pJob, pTask, code);
  if (pJob->taskNum >= SCH_MIN_AYSNC_EXEC_NUM) {
    if (code) {
      code = schProcessOnTaskFailure(pJob, pTask, code);
    }
    if (code) {
      code = schHandleJobFailure(pJob, code);
    }
  }
  if (code) {
    code = schHandleJobFailure(pJob, code);
  }
#endif

  SCH_RET(code);
}

@ -893,12 +893,12 @@ int32_t schAsyncLaunchTaskImpl(SSchJob *pJob, SSchTask *pTask) {
  param->pJob = pJob;
  param->pTask = pTask;

#if SCH_ASYNC_LAUNCH_TASK
  taosAsyncExec(schLaunchTaskImpl, param, NULL);
#else
  SCH_ERR_RET(schLaunchTaskImpl(param));
#endif

  if (pJob->taskNum >= SCH_MIN_AYSNC_EXEC_NUM) {
    taosAsyncExec(schLaunchTaskImpl, param, NULL);
  } else {
    SCH_ERR_RET(schLaunchTaskImpl(param));
  }

  return TSDB_CODE_SUCCESS;
}

@ -28,10 +28,10 @@ extern "C" {
#include "syncMessage.h"
#include "taosdef.h"

#define SYNC_SNAPSHOT_SEQ_INVALID     -1
#define SYNC_SNAPSHOT_SEQ_INVALID -1
#define SYNC_SNAPSHOT_SEQ_FORCE_CLOSE -2
#define SYNC_SNAPSHOT_SEQ_BEGIN       0
#define SYNC_SNAPSHOT_SEQ_END         0x7FFFFFFF
#define SYNC_SNAPSHOT_SEQ_BEGIN 0
#define SYNC_SNAPSHOT_SEQ_END   0x7FFFFFFF

#define SYNC_SNAPSHOT_RETRY_MS 5000

@ -40,14 +40,14 @@ typedef struct SSyncSnapshotSender {
  bool           start;
  int32_t        seq;
  int32_t        ack;
  void *         pReader;
  void *         pCurrentBlock;
  void          *pReader;
  void          *pCurrentBlock;
  int32_t        blockLen;
  SSnapshotParam snapshotParam;
  SSnapshot      snapshot;
  SSyncCfg       lastConfig;
  int64_t        sendingMS;
  SSyncNode *    pSyncNode;
  SSyncNode     *pSyncNode;
  int32_t        replicaIndex;
  SyncTerm       term;
  SyncTerm       privateTerm;

@ -64,20 +64,20 @@ int32_t snapshotSend(SSyncSnapshotSender *pSender);
int32_t snapshotReSend(SSyncSnapshotSender *pSender);

cJSON *snapshotSender2Json(SSyncSnapshotSender *pSender);
char * snapshotSender2Str(SSyncSnapshotSender *pSender);
char * snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event);
char  *snapshotSender2Str(SSyncSnapshotSender *pSender);
char  *snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event);

//---------------------------------------------------
typedef struct SSyncSnapshotReceiver {
  bool           start;
  int32_t        ack;
  void *         pWriter;
  void          *pWriter;
  SyncTerm       term;
  SyncTerm       privateTerm;
  SSnapshotParam snapshotParam;
  SSnapshot      snapshot;
  SRaftId        fromId;
  SSyncNode *    pSyncNode;
  SSyncNode     *pSyncNode;

} SSyncSnapshotReceiver;

@ -86,10 +86,11 @@ void snapshotReceiverDestroy(SSyncSnapshotReceiver *pReceiver)
int32_t snapshotReceiverStart(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pBeginMsg);
int32_t snapshotReceiverStop(SSyncSnapshotReceiver *pReceiver);
bool    snapshotReceiverIsStart(SSyncSnapshotReceiver *pReceiver);
void    snapshotReceiverForceStop(SSyncSnapshotReceiver *pReceiver);

cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver);
char * snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver);
char * snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event);
char  *snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver);
char  *snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event);

//---------------------------------------------------
// on message

@ -2181,6 +2181,11 @@ void syncNodeBecomeLeader(SSyncNode* pSyncNode, const char* debugStr) {
    (pMySender->privateTerm) += 100;
  }

  // close receiver
  if (snapshotReceiverIsStart(pSyncNode->pNewNodeReceiver)) {
    snapshotReceiverForceStop(pSyncNode->pNewNodeReceiver);
  }

  // stop elect timer
  syncNodeStopElectTimer(pSyncNode);

@ -24,7 +24,6 @@
//----------------------------------
static void    snapshotSenderUpdateProgress(SSyncSnapshotSender *pSender, SyncSnapshotRsp *pMsg);
static void    snapshotReceiverDoStart(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pBeginMsg);
static void    snapshotReceiverForceStop(SSyncSnapshotReceiver *pReceiver);
static void    snapshotReceiverGotData(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pMsg);
static int32_t snapshotReceiverFinish(SSyncSnapshotReceiver *pReceiver, SyncSnapshotSend *pMsg);

@ -374,14 +373,14 @@ cJSON *snapshotSender2Json(SSyncSnapshotSender *pSender) {

char *snapshotSender2Str(SSyncSnapshotSender *pSender) {
  cJSON *pJson = snapshotSender2Json(pSender);
  char * serialized = cJSON_Print(pJson);
  char  *serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}

char *snapshotSender2SimpleStr(SSyncSnapshotSender *pSender, char *event) {
  int32_t len = 256;
  char *  s = taosMemoryMalloc(len);
  char   *s = taosMemoryMalloc(len);

  SRaftId destId = pSender->pSyncNode->replicasId[pSender->replicaIndex];
  char    host[64];

@ -480,7 +479,7 @@ static void snapshotReceiverDoStart(SSyncSnapshotReceiver *pReceiver, SyncSnapsh
}

// force stop
static void snapshotReceiverForceStop(SSyncSnapshotReceiver *pReceiver) {
void snapshotReceiverForceStop(SSyncSnapshotReceiver *pReceiver) {
  // force close, abandon incomplete data
  if (pReceiver->pWriter != NULL) {
    int32_t ret = pReceiver->pSyncNode->pFsm->FpSnapshotStopWrite(pReceiver->pSyncNode->pFsm, pReceiver->pWriter, false,

@ -653,7 +652,7 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {
    cJSON_AddStringToObject(pFromId, "addr", u64buf);
    {
      uint64_t u64 = pReceiver->fromId.addr;
      cJSON *  pTmp = pFromId;
      cJSON   *pTmp = pFromId;
      char     host[128] = {0};
      uint16_t port;
      syncUtilU642Addr(u64, host, sizeof(host), &port);

@ -686,14 +685,14 @@ cJSON *snapshotReceiver2Json(SSyncSnapshotReceiver *pReceiver) {

char *snapshotReceiver2Str(SSyncSnapshotReceiver *pReceiver) {
  cJSON *pJson = snapshotReceiver2Json(pReceiver);
  char * serialized = cJSON_Print(pJson);
  char  *serialized = cJSON_Print(pJson);
  cJSON_Delete(pJson);
  return serialized;
}

char *snapshotReceiver2SimpleStr(SSyncSnapshotReceiver *pReceiver, char *event) {
  int32_t len = 256;
  char *  s = taosMemoryMalloc(len);
  char   *s = taosMemoryMalloc(len);

  SRaftId fromId = pReceiver->fromId;
  char    host[128];

@ -440,10 +440,10 @@ int64_t taosPReadFile(TdFilePtr pFile, void *buf, int64_t count, int64_t offset)
#endif
  assert(pFile->fd >= 0);  // Please check if you have closed the file.
#ifdef WINDOWS
  size_t pos = _lseek(pFile->fd, 0, SEEK_CUR);
  _lseek(pFile->fd, offset, SEEK_SET);
  size_t pos = _lseeki64(pFile->fd, 0, SEEK_CUR);
  _lseeki64(pFile->fd, offset, SEEK_SET);
  int64_t ret = _read(pFile->fd, buf, count);
  _lseek(pFile->fd, pos, SEEK_SET);
  _lseeki64(pFile->fd, pos, SEEK_SET);
#else
  int64_t ret = pread(pFile->fd, buf, count, offset);
#endif

@ -493,7 +493,7 @@ int64_t taosLSeekFile(TdFilePtr pFile, int64_t offset, int32_t whence) {
#endif
  assert(pFile->fd >= 0);  // Please check if you have closed the file.
#ifdef WINDOWS
  int64_t ret = _lseek(pFile->fd, offset, whence);
  int64_t ret = _lseeki64(pFile->fd, offset, whence);
#else
  int64_t ret = lseek(pFile->fd, offset, whence);
#endif

@ -637,7 +637,7 @@ int64_t taosFSendFile(TdFilePtr pFileOut, TdFilePtr pFileIn, int64_t *offset, in

#ifdef WINDOWS

  _lseek(pFileIn->fd, (int32_t)(*offset), 0);
  _lseeki64(pFileIn->fd, *offset, 0);
  int64_t writeLen = 0;
  uint8_t buffer[_SEND_FILE_STEP_] = {0};

@ -74,6 +74,7 @@ sql explain analyze verbose true select ts from tb1 where f1 > 0;
sql explain analyze verbose true select f1 from st1 where f1 > 0 and ts > '2020-10-31 00:00:00' and ts < '2021-10-31 00:00:00';
sql explain analyze verbose true select * from information_schema.ins_stables where db_name='db2';
sql explain analyze verbose true select * from (select min(f1),count(*) a from st1 where f1 > 0) where a < 0;
sql explain analyze verbose true select count(f1) from st1 group by tbname;

#not pass case
#sql explain verbose true select count(*),sum(f1) as aa from tb1 where (f1 > 0 or f1 < -1) and ts > '2020-10-31 00:00:00' and ts < '2021-10-31 00:00:00' order by aa;

@ -361,7 +361,7 @@ class TDTestCase:
        tdSql.error(
            f"insert into {dbname}.sub1_bound values ( now()+1s, 2147483648, 9223372036854775808, 32768, 128, 3.40E+38, 1.7e+308, True, 'binary_tb1', 'nchar_tb1', now() )"
        )
        self.check_avg(f"select avg(c1), avg(c2), avg(c3) , avg(c4), avg(c5) ,avg(c6) from {dbname}.sub1_bound " , f" select sum(c1)/count(c1), sum(c2)/count(c2) ,sum(c3)/count(c3), sum(c4)/count(c4), sum(c5)/count(c5) ,sum(c6)/count(c6) from {dbname}.sub1_bound ")
        #self.check_avg(f"select avg(c1), avg(c2), avg(c3) , avg(c4), avg(c5) ,avg(c6) from {dbname}.sub1_bound " , f" select sum(c1)/count(c1), sum(c2)/count(c2) ,sum(c3)/count(c3), sum(c4)/count(c4), sum(c5)/count(c5) ,sum(c6)/count(c6) from {dbname}.sub1_bound ")

        # check basic elem for table per row

@ -372,7 +372,7 @@ class TDTestCase:
        tdSql.checkData(0,2,14042.142857143)
        tdSql.checkData(0,3,53.571428571)
        tdSql.checkData(0,4,5.828571332045761e+37)
        # tdSql.checkData(0,5,None)
        tdSql.checkData(0,5,None)

        # check + - * / in functions

@ -382,7 +382,7 @@ class TDTestCase:
        tdSql.checkData(0,2,14042.142857143)
        tdSql.checkData(0,3,26.785714286)
        tdSql.checkData(0,4,2.9142856660228804e+37)
        # tdSql.checkData(0,5,None)
        tdSql.checkData(0,5,None)