Merge branch '3.0' into feature/TD-11274-3.0
This commit is contained in:
commit
5e2776ac2d
|
@ -6,9 +6,9 @@ title: Deployment
|
|||
|
||||
### Step 1
|
||||
|
||||
The FQDN of each host must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be reached from every other host; for example, you can verify this with the `ping` command.
|
||||
The FQDN of each host must be set up properly. All FQDNs need to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be reached from every other host; you can verify this with the `ping` command.
|
||||
|
||||
To get the hostname on any host, the command `hostname -f` can be executed. The `ping <FQDN>` command can be executed on each host to check whether every other host is reachable from it. If any host is not reachable, the network configuration, such as /etc/hosts or the DNS configuration, needs to be checked and revised so that any two hosts can reach each other.
|
||||
The command `hostname -f` can be executed to get the hostname of any host. The `ping <FQDN>` command can be executed on each host to check whether every other host is reachable from it. If any host is not reachable, the network configuration, such as /etc/hosts or the DNS configuration, needs to be checked and revised so that any two hosts can reach each other.
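Before bringing up the cluster, it can help to script this check. Below is a minimal sketch (not part of TDengine) that verifies each FQDN at least resolves; the host names are placeholders for your own dnodes, and actual reachability should still be confirmed with `ping`:

```python
import socket

def unresolvable(hosts):
    """Return the subset of FQDNs that cannot be resolved to an IP address."""
    bad = []
    for h in hosts:
        try:
            socket.gethostbyname(h)
        except socket.gaierror:
            bad.append(h)
    return bad

# Placeholder FQDNs; replace with the real hosts of your cluster.
cluster = ["h1.taosdata.com", "h2.taosdata.com"]
```

Run `unresolvable(cluster)` from every host; an empty list means name resolution is in place on that host.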
|
||||
|
||||
:::note
|
||||
|
||||
|
@ -20,7 +20,7 @@ To get the hostname on any host, the command `hostname -f` can be executed. `pin
|
|||
|
||||
### Step 2
|
||||
|
||||
If any previous version of TDengine has been installed and configured on any host, the installation must be removed and the data cleaned up. For details about uninstalling, please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, use `rm -rf /var/lib/taos/*`, assuming `dataDir` is configured as `/var/lib/taos`.
|
||||
If any previous version of TDengine has been installed and configured on any host, the installation must be removed and the data cleaned up. For details about uninstalling, please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, use `rm -rf /var/lib/taos/*`, assuming `dataDir` is configured as `/var/lib/taos`.
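As a convenience, the cleanup can be wrapped in a small shell function. This is only a sketch: `dataDir` may differ on your system, so the directory is passed explicitly rather than hard-coded.

```shell
# Remove all TDengine data files under the given dataDir.
# Requiring an explicit argument avoids wiping the wrong path.
clean_taos_data() {
  local datadir="${1:?usage: clean_taos_data <dataDir>}"
  rm -rf "${datadir:?}"/*
}

# e.g. clean_taos_data /var/lib/taos
```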
|
||||
|
||||
:::note
|
||||
|
||||
|
@ -54,22 +54,12 @@ serverPort 6030
|
|||
|
||||
For all the dnodes in a TDengine cluster, the parameters below must be configured identically; any node whose configuration differs from the dnodes already in the cluster can't join the cluster.
|
||||
|
||||
| **#** | **Parameter** | **Definition** |
|
||||
| ----- | ------------------ | --------------------------------------------------------------------------------- |
|
||||
| 1 | numOfMnodes | The number of management nodes in the cluster |
|
||||
| 2     | mnodeEqualVnodeNum | The ratio of the resources consumed by an mnode to those consumed by a vnode      |
|
||||
| 3     | offlineThreshold   | The dnode offline threshold; once it is reached, the dnode is considered down     |
|
||||
| 4 | statusInterval | The interval by which dnode reports its status to mnode |
|
||||
| 5 | arbitrator | End point of the arbitrator component in the cluster |
|
||||
| 6 | timezone | Timezone |
|
||||
| 7 | balance | Enable load balance automatically |
|
||||
| 8 | maxTablesPerVnode | Maximum number of tables that can be created in each vnode |
|
||||
| 9     | maxVgroupsPerDb    | Maximum number of vgroups that can be used by each DB                             |
|
||||
|
||||
:::note
|
||||
Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode.
|
||||
|
||||
:::
|
||||
| **#** | **Parameter** | **Definition** |
|
||||
| ----- | -------------- | ------------------------------------------------------------- |
|
||||
| 1     | statusInterval | The interval at which a dnode reports its status to the mnode |
|
||||
| 2 | timezone | Time Zone where the server is located |
|
||||
| 3 | locale | Location code of the system |
|
||||
| 4 | charset | Character set of the system |
|
||||
|
||||
## Start Cluster
|
||||
|
||||
|
@ -77,19 +67,19 @@ In the following example we assume that first dnode has FQDN h1.taosdata.com and
|
|||
|
||||
### Start The First DNODE
|
||||
|
||||
The first dnode can be started following the instructions in [Get Started](/get-started/). Then TDengine CLI `taos` can be launched to execute command `show dnodes`, the output is as following for example:
|
||||
Start the first dnode following the instructions in [Get Started](/get-started/). Then launch the TDengine CLI `taos` and execute the command `show dnodes`; the output is similar to the following:
|
||||
|
||||
```
|
||||
Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
|
||||
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
|
||||
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
|
||||
|
||||
|
||||
Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
|
||||
Server is Enterprise trial Edition, ver:3.0.0.0 and will never expire.
|
||||
|
||||
taos> show dnodes;
|
||||
id | end_point | vnodes | cores | status | role | create_time |
|
||||
=====================================================================================
|
||||
1 | h1.taosdata.com:6030 | 0 | 2 | ready | any | 2020-07-31 03:49:29.202 |
|
||||
Query OK, 1 row(s) in set (0.006385s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
|
||||
Query OK, 1 rows affected (0.007984s)
|
||||
|
||||
taos>
|
||||
```
|
||||
|
@ -100,7 +90,7 @@ From the above output, it is shown that the end point of the started dnode is "h
|
|||
|
||||
There are a few steps necessary to add other dnodes to the cluster.
|
||||
|
||||
Let's assume we are starting the second dnode with FQDN, h2.taosdata.com. First we make sure the configuration is correct.
|
||||
Let's assume we are starting the second dnode with FQDN h2.taosdata.com. First, we make sure the configuration is correct.
|
||||
|
||||
```c
|
||||
// firstEp is the end point to connect to when any dnode starts
|
||||
|
@ -114,7 +104,7 @@ serverPort 6030
|
|||
|
||||
```
|
||||
|
||||
Second, we can start `taosd` as instructed in [Get Started](/get-started/).
|
||||
Second, we can start `taosd` as instructed in [Get Started](/get-started/).
|
||||
|
||||
Then, on the first dnode, i.e. h1.taosdata.com in our example, use the TDengine CLI `taos` to execute the following command to add the end point of the new dnode to the cluster. In the command, "fqdn:port" must be enclosed in double quotes.
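For example, with the second dnode's FQDN being h2.taosdata.com and `serverPort` being 6030 as configured above, the command would look like:

```sql
CREATE DNODE "h2.taosdata.com:6030";
```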
|
||||
|
||||
|
|
|
@ -39,31 +39,25 @@ USE SOME_DATABASE;
|
|||
SHOW VGROUPS;
|
||||
```
|
||||
|
||||
The example output is below:
|
||||
|
||||
```
|
||||
taos> show dnodes;
|
||||
id | end_point | vnodes | cores | status | role | create_time | offline reason |
|
||||
======================================================================================================================================
|
||||
1 | localhost:6030 | 9 | 8 | ready | any | 2022-04-15 08:27:09.359 | |
|
||||
Query OK, 1 row(s) in set (0.008298s)
|
||||
The output is similar to the following:
|
||||
|
||||
taos> use db;
|
||||
Database changed.
|
||||
|
||||
taos> show vgroups;
|
||||
vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
|
||||
vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
|
||||
==========================================================================================
|
||||
14 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
15 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
16 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
17 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
18 | 37001 | ready | 1 | 1 | leader | 0 |
|
||||
19 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
20 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
21 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
14 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
15 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
16 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
17 | 38000 | ready | 1 | 1 | leader | 0 |
|
||||
18 | 37001 | ready | 1 | 1 | leader | 0 |
|
||||
19 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
20 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
21 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
Query OK, 8 row(s) in set (0.001154s)
|
||||
```
|
||||
|
||||
|
||||
|
||||
## Add DNODE
|
||||
|
||||
|
@ -71,7 +65,7 @@ Launch TDengine CLI `taos` and execute the command below to add the end point of
|
|||
|
||||
```sql
|
||||
CREATE DNODE "fqdn:port";
|
||||
```
|
||||
|
||||
|
||||
The example output is as below:
|
||||
|
||||
|
@ -142,72 +136,3 @@ In the above example, when `show dnodes` is executed the first time, two dnodes
|
|||
- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
|
||||
|
||||
:::
|
||||
|
||||
## Move VNODE
|
||||
|
||||
A vnode can be manually moved from one dnode to another.
|
||||
|
||||
Launch the TDengine CLI `taos` and execute the command below:
|
||||
|
||||
```sql
|
||||
ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
|
||||
```
|
||||
|
||||
In the above command, `source-dnodeId` is the dnodeId where the vnode currently resides, and `dest-dnodeId` specifies the target dnode. The vgId (vgroup ID) can be obtained with `SHOW VGROUPS`.
|
||||
|
||||
First `show vgroups` is executed to show the vgroup distribution.
|
||||
|
||||
```
|
||||
taos> show vgroups;
|
||||
vgId | tables | status | onlines | v1_dnode | v1_status | compacting |
|
||||
==========================================================================================
|
||||
14 | 38000 | ready | 1 | 3 | leader | 0 |
|
||||
15 | 38000 | ready | 1 | 3 | leader | 0 |
|
||||
16 | 38000 | ready | 1 | 3 | leader | 0 |
|
||||
17 | 38000 | ready | 1 | 3 | leader | 0 |
|
||||
18 | 37001 | ready | 1 | 3 | leader | 0 |
|
||||
19 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
20 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
21 | 37000 | ready | 1 | 1 | leader | 0 |
|
||||
Query OK, 8 row(s) in set (0.001314s)
|
||||
```
|
||||
|
||||
It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in dnode 1. Now we want to move vgId 18 from dnode 3 to dnode 1. Execute the below command in `taos`:
|
||||
|
||||
```
|
||||
taos> alter dnode 3 balance "vnode:18-dnode:1";
|
||||
|
||||
DB error: Balance already enabled (0.00755
|
||||
```
|
||||
|
||||
However, the operation fails with the error message shown above, which means automatic load balancing is enabled in the current database, so manual load balancing can't be performed.
|
||||
|
||||
Shut down the cluster, set the `balance` parameter to 0 in all the dnodes, restart the cluster, and then execute `alter dnode` and `show vgroups` as below.
|
||||
|
||||
```
|
||||
taos> alter dnode 3 balance "vnode:18-dnode:1";
|
||||
Query OK, 0 row(s) in set (0.000575s)
|
||||
|
||||
taos> show vgroups;
|
||||
vgId | tables | status | onlines | v1_dnode | v1_status | v2_dnode | v2_status | compacting |
|
||||
=================================================================================================================
|
||||
14 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
|
||||
15 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
|
||||
16 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
|
||||
17 | 38000 | ready | 1 | 3 | leader | 0 | NULL | 0 |
|
||||
18 | 37001 | ready | 2 | 1 | follower | 3 | leader | 0 |
|
||||
19 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
|
||||
20 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
|
||||
21 | 37000 | ready | 1 | 1 | leader | 0 | NULL | 0 |
|
||||
Query OK, 8 row(s) in set (0.001242s)
|
||||
```
|
||||
|
||||
It can be seen from above output that vgId 18 has been moved from dnode 3 to dnode 1.
|
||||
|
||||
:::note
|
||||
|
||||
- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
|
||||
- Only a vnode in normal state, i.e. leader or follower, can be moved. A vnode can't be moved when it's in offline, unsynced or syncing status.
|
||||
- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
|
||||
|
||||
:::
|
||||
|
|
|
@ -1,81 +0,0 @@
|
|||
---
|
||||
sidebar_label: HA & LB
|
||||
title: High Availability and Load Balancing
|
||||
---
|
||||
|
||||
## High Availability of Vnode
|
||||
|
||||
High availability of vnode and mnode can be achieved through replicas in TDengine.
|
||||
|
||||
A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
|
||||
|
||||
```sql
|
||||
CREATE DATABASE demo replica 3;
|
||||
```
|
||||
|
||||
The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
|
||||
|
||||
There may be data for multiple DBs in a dnode. When a dnode is down, multiple DBs may be affected. While in theory, the cluster will provide data access for reading or inserting data if over half the vnodes in vgroups are online, because of the possibly complex mapping between vnodes and dnodes, it is difficult to guarantee that the cluster will work properly if over half of the dnodes are online.
|
||||
|
||||
## High Availability of Mnode
|
||||
|
||||
Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`. The valid range for `numOfMnodes` is [1,3]. To ensure data consistency between mnodes, data replication between mnodes is performed synchronously.
|
||||
|
||||
There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which one or ones of the dnodes will be designated as mnodes is automatically determined by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
|
||||
|
||||
```sql
|
||||
SHOW MNODES;
|
||||
```
|
||||
|
||||
The end point and role/status (leader, follower, unsynced, or offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work. If `numOfMNodes` is configured to 2, another mnode will be started when the second dnode is launched.
|
||||
|
||||
For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
|
||||
|
||||
:::note
|
||||
If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
|
||||
|
||||
:::
|
||||
|
||||
## Load Balancing
|
||||
|
||||
Load balancing will be triggered in 3 cases without manual intervention.
|
||||
|
||||
- When a new dnode joins the cluster, automatic load balancing may be triggered. Some data from other dnodes may be transferred to the new dnode automatically.
|
||||
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
|
||||
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
|
||||
|
||||
:::tip
|
||||
Automatic load balancing is controlled by the parameter `balance`: 0 means disabled and 1 means enabled. This is set in the file [taos.cfg](https://docs.tdengine.com/reference/config/#balance).
|
||||
|
||||
:::
|
||||
|
||||
## Dnode Offline
|
||||
|
||||
When a dnode is offline, it can be detected by the TDengine cluster. There are two cases:
|
||||
|
||||
- The dnode comes online before the threshold configured in `offlineThreshold` is reached. The dnode is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync is finished.
|
||||
|
||||
- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically. The system administrator has to manually join the dnode to the cluster.
|
||||
|
||||
:::note
|
||||
If all the vnodes in a vgroup (or mnodes in an mnode group) are in offline or unsynced status, a leader node can only be elected after all the vnodes or mnodes in the group come back online and can exchange status information. After that, the vgroup (or mnode group) is able to provide service.
|
||||
|
||||
:::
|
||||
|
||||
## Arbitrator
|
||||
|
||||
The "arbitrator" component is used to address the special case in which the number of replicas is set to an even number like 2 or 4. If half of the vnodes in a vgroup don't work, it is impossible to vote and elect a leader node. The same applies to mnodes if the number of mnodes is set to an even number like 2 or 4.
|
||||
|
||||
To resolve this problem, a new arbitrator component named `tarbitrator`, an abbreviation of TDengine Arbitrator, was introduced. The `tarbitrator` simulates a vnode or mnode but is only responsible for network communication; it doesn't handle any actual data access. As long as more than half of the vnodes or mnodes in a group, including the arbitrator, are available, the vgroup or mnode group can provide data insertion and query services normally.
|
||||
|
||||
Normally, it's prudent to configure the replica number for each DB or system parameter `numOfMNodes` to be an odd number. However, if a user is very sensitive to storage space, a replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
|
||||
|
||||
The arbitrator component is installed with the server package. For details about installation, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides its service.
|
||||
|
||||
In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be set to the end point of the `tarbitrator` process. The arbitrator component is used automatically if the replica number is configured to an even number and is ignored if the replica number is odd.
|
||||
|
||||
The arbitrator can be shown by executing the command below in the TDengine CLI `taos`; its role is shown as "arb".
|
||||
|
||||
```sql
|
||||
SHOW DNODES;
|
||||
```
|
|
@ -0,0 +1,30 @@
|
|||
---
|
||||
sidebar_label: High Availability
|
||||
title: High Availability
|
||||
---
|
||||
|
||||
## High Availability of Vnode
|
||||
|
||||
High availability of vnode can be achieved through replicas in TDengine.
|
||||
|
||||
A TDengine cluster can have multiple databases. Each database has a number of vnodes associated with it. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas. The default value for `replica` is 1. Naturally, a single replica cannot guarantee high availability since if one node is down, the data service is unavailable. Note that the number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation will fail with error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
|
||||
|
||||
```sql
|
||||
CREATE DATABASE demo replica 3;
|
||||
```
|
||||
|
||||
The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
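The majority rule above can be stated as a one-line predicate. This is an illustrative sketch of the availability condition, not TDengine code:

```python
def vgroup_available(online_vnodes: int, replica: int) -> bool:
    """A vgroup can serve reads and writes only while more than half
    of its vnodes (replica copies) are online."""
    return 2 * online_vnodes > replica
```

With `replica 3`, losing one vnode keeps the vgroup available, while with a single replica any outage makes it unavailable.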
|
||||
|
||||
There may be data for multiple DBs in a dnode. When a dnode is down, multiple DBs may be affected. While in theory, the cluster will provide data access for reading or inserting data if over half the vnodes in vgroups are online, because of the possibly complex mapping between vnodes and dnodes, it is difficult to guarantee that the cluster will work properly if over half of the dnodes are online.
|
||||
|
||||
## High Availability of Mnode
|
||||
|
||||
Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured in the system. When a TDengine cluster is started from scratch, there is only one `mnode`; you can then use the `create mnode` command and start the corresponding dnode to add more mnodes.
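A hedged sketch of the SQL involved (the dnode id 2 is an assumption; check the exact syntax against your server version):

```sql
-- add an mnode on an already-joined dnode, here assumed to have id 2
CREATE MNODE ON DNODE 2;
```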
|
||||
|
||||
```sql
|
||||
SHOW MNODES;
|
||||
```
|
||||
|
||||
The end point and role/status (leader, follower, candidate, offline) of all mnodes can be shown by the above command. When the first dnode is started in a cluster, there must be one mnode in this dnode. Without at least one mnode, the cluster cannot work.
|
||||
|
||||
Since TDengine 3.0.0, the RAFT protocol is used to guarantee high availability, so the number of mnodes should be either 1 or 3.
|
|
@ -0,0 +1,20 @@
|
|||
---
|
||||
sidebar_label: Load Balance
|
||||
title: Load Balance
|
||||
---
|
||||
|
||||
Load balancing in TDengine is mainly about balancing the processing of time series data. TDengine uses a built-in consistent hashing algorithm to distribute all the tables and sub-tables of a database, and their data, across all the vgroups that belong to that database. Each table or sub-table can only be handled by a single vgroup, while each vgroup can process multiple tables or sub-tables.
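TDengine's internal hash function is not exposed; the following sketch only illustrates the idea of hashing table names into a fixed set of vgroups, with all names and sizes being made-up examples:

```python
import hashlib

def vgroup_of(table_name: str, num_vgroups: int) -> int:
    """Map a table (or sub-table) name to exactly one vgroup
    via a stable hash. Illustrative only; TDengine's hash differs."""
    digest = hashlib.md5(table_name.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "little") % num_vgroups

# 1000 sub-tables spread over 100 vgroups; each table lands in one vgroup,
# and each vgroup typically serves many tables.
assignment = {f"d{i}": vgroup_of(f"d{i}", 100) for i in range(1000)}
```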
|
||||
|
||||
The number of vgroups can be specified when creating a database, using the parameter `vgroups`.
|
||||
|
||||
```sql
|
||||
create database db0 vgroups 100;
|
||||
```
|
||||
|
||||
The proper value of `vgroups` depends on the available system resources. Assuming only one database is to be created in the system, the number of `vgroups` is determined by the resources available across all dnodes. In principle, the more CPU and memory are available, the more vgroups can be created. Disk I/O is another important factor: once disk I/O becomes the bottleneck, adding more vgroups may degrade system performance significantly. If multiple databases are to be created, the total number of vgroups across all databases depends on the available system resources, and vgroups must be distributed among the databases carefully, taking into account the number of tables, the write frequency, and the row size of each database. A recommended practice is to first choose a starting number for `vgroups`, for example double the number of CPU cores, then adjust and optimize the system configuration through testing to find the best setting, and finally distribute these vgroups among the databases.
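The sizing heuristic in the paragraph above can be written down directly; twice the core count is only the suggested starting point from the text, not a guaranteed optimum:

```python
import os

def starting_vgroups(cpu_cores=None):
    """Starting value for the `vgroups` parameter: double the CPU core count."""
    cores = cpu_cores if cpu_cores is not None else (os.cpu_count() or 1)
    return 2 * cores
```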
|
||||
|
||||
Furthermore, TDengine distributes the vgroups of each database evenly among all dnodes. With replica 3, the distribution is even more complex, and TDengine tries its best to prevent any dnode from becoming a bottleneck.
|
||||
|
||||
TDengine uses the above mechanisms to achieve load balancing in a cluster, which in turn yields higher overall throughput.
|
||||
|
||||
Once load balancing has been achieved, operations such as deleting tables or dropping databases may make the load across dnodes unbalanced again; an explicit rebalancing method will be provided in later versions. However, even without explicit rebalancing, TDengine will try its best to reach a new balance without manual intervention when a new database is created.
|
|
@ -72,19 +72,22 @@ serverPort 6030
|
|||
Following the steps in Get Started, start the first dnode, for example h1.taosdata.com, then run taos to launch the TDengine shell and execute the command "SHOW DNODES" in the shell, as shown below:
|
||||
|
||||
```
|
||||
Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
|
||||
Welcome to the TDengine shell from Linux, Client Version:3.0.0.0
|
||||
Copyright (c) 2022 by TAOS Data, Inc. All rights reserved.
|
||||
|
||||
|
||||
Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
|
||||
Server is Enterprise trial Edition, ver:3.0.0.0 and will never expire.
|
||||
|
||||
taos> show dnodes;
|
||||
id | end_point | vnodes | cores | status | role | create_time |
|
||||
=====================================================================================
|
||||
1 | h1.taos.com:6030 | 0 | 2 | ready | any | 2020-07-31 03:49:29.202 |
|
||||
Query OK, 1 row(s) in set (0.006385s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | h1.taosdata.com:6030 | 0 | 1024 | ready | 2022-07-16 10:50:42.673 | |
|
||||
Query OK, 1 rows affected (0.007984s)
|
||||
|
||||
taos>
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
From the above command output, you can see that the End Point of the dnode just started is h1.taos.com:6030, which is the firstEp of this new cluster.
|
||||
|
||||
|
@ -98,7 +101,7 @@ taos>
|
|||
|
||||
```sql
|
||||
CREATE DNODE "h2.taos.com:6030";
|
||||
```
|
||||
|
||||
|
||||
This adds the End Point of the new dnode (obtained in step 4 of the preparation) to the cluster's EP list. "fqdn:port" must be enclosed in double quotes, otherwise an error occurs. Note that "h2.taos.com:6030" in the example should be replaced with the End Point of this new dnode.
|
||||
|
||||
|
|
|
@ -72,10 +72,11 @@ CREATE DNODE "fqdn:port";
|
|||
taos> show dnodes;
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | trd01:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
|
||||
2 | trd04:6030 | 0 | 1024 | ready | 2022-07-15 16:56:13.670 | |
|
||||
1 | localhost:6030 | 100 | 1024 | ready | 2022-07-15 16:47:47.726 | |
|
||||
2 | localhost:7030 | 0 | 1024 | ready | 2022-07-15 16:56:13.670 | |
|
||||
Query OK, 2 rows affected (0.007031s)
|
||||
```
|
||||
|
||||
From the output you can see that both dnodes are in ready status.
|
||||
|
||||
## Drop DNODE
|
||||
|
@ -85,14 +86,15 @@ Query OK, 2 rows affected (0.007031s)
|
|||
```sql
|
||||
DROP DNODE "fqdn:port";
|
||||
```
|
||||
|
||||
Or
|
||||
|
||||
```sql
|
||||
DROP DNODE dnodeId;
|
||||
```
|
||||
|
||||
A specific dnode can be specified either by "fqdn:port" or by dnodeID. Here, fqdn is the FQDN of the dnode to be dropped and port is its public server port; the dnodeID can be obtained via SHOW DNODES.
|
||||
|
||||
|
||||
:::warning
|
||||
|
||||
Once a dnode is dropped, it cannot rejoin the cluster. To rejoin, the node must be redeployed with its data folder emptied. Before the `drop dnode` operation completes, the cluster migrates the data of that dnode away.
|
||||
|
@ -101,5 +103,3 @@ DROP DNODE dnodeId;
|
|||
The dnodeID is allocated automatically by the cluster and must not be specified manually. It is generated in ascending order and never duplicated.
|
||||
|
||||
:::
|
||||
|
||||
|
||||
|
|
|
@ -28,6 +28,6 @@ TDengine 集群是由 mnode(taosd 的一个模块,管理节点)负责管
|
|||
SHOW MNODES;
|
||||
```
|
||||
|
||||
to view the list of mnodes, which shows the End Point and role (leader, follower, candidate) of the dnode hosting each mnode. When the first dnode in the cluster starts, it must run an mnode instance; otherwise the dnode cannot work properly, because a system must have at least one mnode.
|
||||
to view the list of mnodes, which shows the End Point and role (leader, follower, candidate, offline) of the dnode hosting each mnode. When the first dnode in the cluster starts, it must run an mnode instance; otherwise the dnode cannot work properly, because a system must have at least one mnode.
|
||||
|
||||
In TDengine 3.0 and later, data replication uses the RAFT protocol, so the number of mnodes should be set to 1 or 3.
|
||||
|
|
|
@ -2,17 +2,17 @@
|
|||
title: Load Balance
|
||||
---
|
||||
|
||||
Load balancing in TDengine mainly refers to load balancing of time series data processing. TDengine uses a consistent hashing algorithm to distribute the data of all tables and sub-tables of a database evenly across all the vgroups belonging to that database. Each table or sub-table can only be handled by one vgroup, while one vgroup may handle multiple tables or sub-tables.
|
||||
Load balancing in TDengine mainly refers to load balancing of time series data processing. TDengine uses a consistent hashing algorithm to distribute the data of all tables and sub-tables of a database evenly across all the vgroups belonging to that database. Each table or sub-table can only be handled by one vgroup, while one vgroup may handle multiple tables or sub-tables.
|
||||
|
||||
The number of vgroups can be specified when creating a database:
|
||||
The number of vgroups can be specified when creating a database:
|
||||
|
||||
```sql
|
||||
create database db0 vgroups 100;
|
||||
```
|
||||
|
||||
The appropriate number of vgroups depends on system resources. Assuming only one database is planned for the system, the number of vgroups is determined by the resources usable by all dnodes in the cluster. In principle, the more CPU and memory are available, the more vgroups can be created. But disk performance must also be considered; once disk performance reaches its limit, too many vgroups will instead drag down the performance of the whole system. If multiple databases will be created, the total number of vgroups of all databases depends on the amount of available resources in the system, and vgroups must be allocated among the databases with factors such as the number of tables, write frequency, and data volume of each database taken into account. In practice, it is recommended to first choose an initial number of vgroups based on the system resource configuration, for example twice the total number of CPU cores, and then find the optimal vgroups setting through testing; this is the total number of vgroups in the system. If there are multiple databases, allocate the vgroups among them according to each database's table count and data volume.
|
||||
The appropriate number of vgroups depends on system resources. Assuming only one database is planned for the system, the number of vgroups is determined by the resources usable by all dnodes in the cluster. In principle, the more CPU and memory are available, the more vgroups can be created. But disk performance must also be considered; once disk performance reaches its limit, too many vgroups will instead drag down the performance of the whole system. If multiple databases will be created, the total number of vgroups of all databases depends on the amount of available resources in the system, and vgroups must be allocated among the databases with factors such as the number of tables, write frequency, and data volume of each database taken into account. In practice, it is recommended to first choose an initial number of vgroups based on the system resource configuration, for example twice the total number of CPU cores, and then find the optimal vgroups setting through testing; this is the total number of vgroups in the system. If there are multiple databases, allocate the vgroups among them according to each database's table count and data volume.
|
||||
|
||||
In addition, for the vgroups of any database, TDengine distributes them across multiple dnodes as evenly as possible. With multiple replicas (replica 3), this balanced distribution is especially complex, and TDengine's placement strategy tries to prevent any single dnode from becoming a write bottleneck.
|
||||
In addition, for the vgroups of any database, TDengine distributes them across multiple dnodes as evenly as possible. With multiple replicas (replica 3), this balanced distribution is especially complex, and TDengine's placement strategy tries to prevent any single dnode from becoming a write bottleneck.
|
||||
|
||||
Through the above measures, load balancing can be achieved across the whole TDengine cluster to the greatest extent, and load balancing in turn improves the overall data processing capacity of the system.
|
||||
|
||||
|
|
|
@ -79,8 +79,8 @@
|
|||
#define TK_NOT 61
|
||||
#define TK_EXISTS 62
|
||||
#define TK_BUFFER 63
|
||||
#define TK_CACHELAST 64
|
||||
#define TK_CACHELASTSIZE 65
|
||||
#define TK_CACHEMODEL 64
|
||||
#define TK_CACHESIZE 65
|
||||
#define TK_COMP 66
|
||||
#define TK_DURATION 67
|
||||
#define TK_NK_VARIABLE 68
|
||||
|
|
|
@ -190,14 +190,13 @@ bool fmIsUserDefinedFunc(int32_t funcId);
|
|||
bool fmIsDistExecFunc(int32_t funcId);
|
||||
bool fmIsForbidFillFunc(int32_t funcId);
|
||||
bool fmIsForbidStreamFunc(int32_t funcId);
|
||||
bool fmIsForbidWindowFunc(int32_t funcId);
|
||||
bool fmIsForbidGroupByFunc(int32_t funcId);
|
||||
bool fmIsIntervalInterpoFunc(int32_t funcId);
|
||||
bool fmIsInterpFunc(int32_t funcId);
|
||||
bool fmIsLastRowFunc(int32_t funcId);
|
||||
bool fmIsSystemInfoFunc(int32_t funcId);
|
||||
bool fmIsImplicitTsFunc(int32_t funcId);
|
||||
bool fmIsClientPseudoColumnFunc(int32_t funcId);
|
||||
bool fmIsMultiRowsFunc(int32_t funcId);
|
||||
|
||||
int32_t fmGetDistMethod(const SFunctionNode* pFunc, SFunctionNode** pPartialFunc, SFunctionNode** pMergeFunc);
|
||||
|
||||
|
|
|
@ -51,7 +51,8 @@ extern "C" {
|
|||
typedef struct SDatabaseOptions {
|
||||
ENodeType type;
|
||||
int32_t buffer;
|
||||
int8_t cacheLast;
|
||||
char cacheModelStr[TSDB_CACHE_MODEL_STR_LEN];
|
||||
int8_t cacheModel;
|
||||
int32_t cacheLastSize;
|
||||
int8_t compressionLevel;
|
||||
int32_t daysPerFile;
|
||||
|
@@ -66,6 +67,7 @@ typedef struct SDatabaseOptions {
char precisionStr[3];
int8_t precision;
int8_t replica;
char strictStr[TSDB_DB_STRICT_STR_LEN];
int8_t strict;
int8_t walLevel;
int32_t numOfVgroups;
@@ -78,6 +78,7 @@ typedef struct SScanLogicNode {
SNodeList* pGroupTags;
bool groupSort;
int8_t cacheLastMode;
bool hasNormalCols; // neither tag column nor primary key tag column
} SScanLogicNode;

typedef struct SJoinLogicNode {
@@ -258,6 +258,7 @@ typedef struct SSelectStmt {
bool hasAggFuncs;
bool hasRepeatScanFuncs;
bool hasIndefiniteRowsFunc;
bool hasMultiRowsFunc;
bool hasSelectFunc;
bool hasSelectValFunc;
bool hasOtherVectorFunc;
@@ -104,6 +104,13 @@ int32_t maxScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *
int32_t avgScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t stddevScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t leastSQRScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t percentileScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t apercentileScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t spreadScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t derivativeScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t irateScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t twaScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);
int32_t mavgScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput);

#ifdef __cplusplus
}
@@ -31,6 +31,11 @@ extern "C" {

typedef struct SStreamTask SStreamTask;

enum {
STREAM_STATUS__NORMAL = 0,
STREAM_STATUS__RECOVER,
};

enum {
TASK_STATUS__NORMAL = 0,
TASK_STATUS__DROPPING,
@@ -53,7 +53,7 @@ extern const int32_t TYPE_BYTES[16];
#define TSDB_DATA_BIGINT_NULL 0x8000000000000000LL
#define TSDB_DATA_TIMESTAMP_NULL TSDB_DATA_BIGINT_NULL

#define TSDB_DATA_FLOAT_NULL 0x7FF00000 // it is an NAN
#define TSDB_DATA_FLOAT_NULL 0x7FF00000 // it is an NAN
#define TSDB_DATA_DOUBLE_NULL 0x7FFFFF0000000000LL // an NAN
#define TSDB_DATA_NCHAR_NULL 0xFFFFFFFF
#define TSDB_DATA_BINARY_NULL 0xFF
@@ -107,9 +107,10 @@ extern const int32_t TYPE_BYTES[16];

#define TSDB_INS_USER_STABLES_DBNAME_COLID 2

#define TSDB_TICK_PER_SECOND(precision) \
((int64_t)((precision) == TSDB_TIME_PRECISION_MILLI ? 1000LL \
: ((precision) == TSDB_TIME_PRECISION_MICRO ? 1000000LL : 1000000000LL)))
#define TSDB_TICK_PER_SECOND(precision) \
((int64_t)((precision) == TSDB_TIME_PRECISION_MILLI \
? 1000LL \
: ((precision) == TSDB_TIME_PRECISION_MICRO ? 1000000LL : 1000000000LL)))

#define T_MEMBER_SIZE(type, member) sizeof(((type *)0)->member)
#define T_APPEND_MEMBER(dst, ptr, type, member) \
@@ -328,15 +329,25 @@ typedef enum ELogicConditionType {
#define TSDB_MIN_DB_REPLICA 1
#define TSDB_MAX_DB_REPLICA 3
#define TSDB_DEFAULT_DB_REPLICA 1
#define TSDB_DB_STRICT_STR_LEN sizeof(TSDB_DB_STRICT_OFF_STR)
#define TSDB_DB_STRICT_OFF_STR "off"
#define TSDB_DB_STRICT_ON_STR "on"
#define TSDB_DB_STRICT_OFF 0
#define TSDB_DB_STRICT_ON 1
#define TSDB_DEFAULT_DB_STRICT 0
#define TSDB_MIN_DB_CACHE_LAST 0
#define TSDB_MAX_DB_CACHE_LAST 3
#define TSDB_DEFAULT_CACHE_LAST 0
#define TSDB_MIN_DB_CACHE_LAST_SIZE 1 // MB
#define TSDB_MAX_DB_CACHE_LAST_SIZE 65536
#define TSDB_DEFAULT_CACHE_LAST_SIZE 1
#define TSDB_DEFAULT_DB_STRICT TSDB_DB_STRICT_OFF
#define TSDB_CACHE_MODEL_STR_LEN sizeof(TSDB_CACHE_MODEL_LAST_VALUE_STR)
#define TSDB_CACHE_MODEL_NONE_STR "none"
#define TSDB_CACHE_MODEL_LAST_ROW_STR "last_row"
#define TSDB_CACHE_MODEL_LAST_VALUE_STR "last_value"
#define TSDB_CACHE_MODEL_BOTH_STR "both"
#define TSDB_CACHE_MODEL_NONE 0
#define TSDB_CACHE_MODEL_LAST_ROW 1
#define TSDB_CACHE_MODEL_LAST_VALUE 2
#define TSDB_CACHE_MODEL_BOTH 3
#define TSDB_DEFAULT_CACHE_MODEL TSDB_CACHE_MODEL_NONE
#define TSDB_MIN_DB_CACHE_SIZE 1 // MB
#define TSDB_MAX_DB_CACHE_SIZE 65536
#define TSDB_DEFAULT_CACHE_SIZE 1
#define TSDB_DB_STREAM_MODE_OFF 0
#define TSDB_DB_STREAM_MODE_ON 1
#define TSDB_DEFAULT_DB_STREAM_MODE 0
@@ -161,7 +161,7 @@ JNIEXPORT jstring JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_tmqGetTableNam
* Signature: (JJLcom/taosdata/jdbc/TSDBResultSetBlockData;ILjava/util/List;)I
*/
JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_fetchRawBlockImp(JNIEnv *, jobject, jlong, jlong,
jobject, jint, jobject);
jobject, jobject);

#ifdef __cplusplus
}
@@ -278,7 +278,7 @@ JNIEXPORT jstring JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_tmqGetTableNam
}

JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_fetchRawBlockImp(JNIEnv *env, jobject jobj, jlong con,
jlong res, jobject rowobj, jint flag,
jlong res, jobject rowobj,
jobject arrayListObj) {
TAOS *tscon = (TAOS *)con;
int32_t code = check_for_params(jobj, con, res);
@@ -309,16 +309,14 @@ JNIEXPORT jint JNICALL Java_com_taosdata_jdbc_tmq_TMQConnector_fetchRawBlockImp(

TAOS_FIELD *fields = taos_fetch_fields(tres);
jniDebug("jobj:%p, conn:%p, resultset:%p, fields size is %d", jobj, tscon, tres, numOfFields);
if (flag) {
for (int i = 0; i < numOfFields; ++i) {
jobject metadataObj = (*env)->NewObject(env, g_metadataClass, g_metadataConstructFp);
(*env)->SetIntField(env, metadataObj, g_metadataColtypeField, fields[i].type);
(*env)->SetIntField(env, metadataObj, g_metadataColsizeField, fields[i].bytes);
(*env)->SetIntField(env, metadataObj, g_metadataColindexField, i);
jstring metadataObjColname = (*env)->NewStringUTF(env, fields[i].name);
(*env)->SetObjectField(env, metadataObj, g_metadataColnameField, metadataObjColname);
(*env)->CallBooleanMethod(env, arrayListObj, g_arrayListAddFp, metadataObj);
}
for (int i = 0; i < numOfFields; ++i) {
jobject metadataObj = (*env)->NewObject(env, g_metadataClass, g_metadataConstructFp);
(*env)->SetIntField(env, metadataObj, g_metadataColtypeField, fields[i].type);
(*env)->SetIntField(env, metadataObj, g_metadataColsizeField, fields[i].bytes);
(*env)->SetIntField(env, metadataObj, g_metadataColindexField, i);
jstring metadataObjColname = (*env)->NewStringUTF(env, fields[i].name);
(*env)->SetObjectField(env, metadataObj, g_metadataColnameField, metadataObjColname);
(*env)->CallBooleanMethod(env, arrayListObj, g_arrayListAddFp, metadataObj);
}

(*env)->CallVoidMethod(env, rowobj, g_blockdataSetNumOfRowsFp, (jint)numOfRows);
@@ -76,7 +76,7 @@ static const SSysDbTableSchema userDBSchema[] = {
{.name = "vgroups", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT},
{.name = "ntables", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT},
{.name = "replica", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT},
{.name = "strict", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "strict", .bytes = TSDB_DB_STRICT_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "duration", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "keep", .bytes = 32 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "buffer", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
@@ -87,9 +87,9 @@ static const SSysDbTableSchema userDBSchema[] = {
{.name = "wal", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT},
{.name = "fsync", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
{.name = "comp", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT},
{.name = "cache_model", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT},
{.name = "cacheModel", .bytes = TSDB_CACHE_MODEL_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "precision", .bytes = 2 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
{.name = "single_stable_model", .bytes = 1, .type = TSDB_DATA_TYPE_BOOL},
{.name = "single_stable", .bytes = 1, .type = TSDB_DATA_TYPE_BOOL},
{.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
// {.name = "schemaless", .bytes = 1, .type = TSDB_DATA_TYPE_BOOL},
{.name = "retention", .bytes = 60 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
@@ -1752,7 +1752,7 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf)
int32_t colNum = taosArrayGetSize(pDataBlock->pDataBlock);
int32_t rows = pDataBlock->info.rows;
int32_t len = 0;
len += snprintf(dumpBuf + len, size - len, "\n%s |block type %d |child id %d|group id:%" PRIu64 "| uid:%ld\n", flag,
len += snprintf(dumpBuf + len, size - len, "%s |block type %d |child id %d|group id:%" PRIu64 "| uid:%ld|======\n", "dumpBlockData",
(int32_t)pDataBlock->info.type, pDataBlock->info.childId, pDataBlock->info.groupId,
pDataBlock->info.uid);
if (len >= size - 1) return dumpBuf;
@@ -293,7 +293,7 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
if (pCfg->buffer < TSDB_MIN_BUFFER_PER_VNODE || pCfg->buffer > TSDB_MAX_BUFFER_PER_VNODE) return -1;
if (pCfg->pageSize < TSDB_MIN_PAGESIZE_PER_VNODE || pCfg->pageSize > TSDB_MAX_PAGESIZE_PER_VNODE) return -1;
if (pCfg->pages < TSDB_MIN_PAGES_PER_VNODE || pCfg->pages > TSDB_MAX_PAGES_PER_VNODE) return -1;
if (pCfg->cacheLastSize < TSDB_MIN_DB_CACHE_LAST_SIZE || pCfg->cacheLastSize > TSDB_MAX_DB_CACHE_LAST_SIZE) return -1;
if (pCfg->cacheLastSize < TSDB_MIN_DB_CACHE_SIZE || pCfg->cacheLastSize > TSDB_MAX_DB_CACHE_SIZE) return -1;
if (pCfg->daysPerFile < TSDB_MIN_DAYS_PER_FILE || pCfg->daysPerFile > TSDB_MAX_DAYS_PER_FILE) return -1;
if (pCfg->daysToKeep0 < TSDB_MIN_KEEP || pCfg->daysToKeep0 > TSDB_MAX_KEEP) return -1;
if (pCfg->daysToKeep1 < TSDB_MIN_KEEP || pCfg->daysToKeep1 > TSDB_MAX_KEEP) return -1;
@@ -312,7 +312,7 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
if (pCfg->replications != 1 && pCfg->replications != 3) return -1;
if (pCfg->strict < TSDB_DB_STRICT_OFF || pCfg->strict > TSDB_DB_STRICT_ON) return -1;
if (pCfg->schemaless < TSDB_DB_SCHEMALESS_OFF || pCfg->schemaless > TSDB_DB_SCHEMALESS_ON) return -1;
if (pCfg->cacheLast < TSDB_MIN_DB_CACHE_LAST || pCfg->cacheLast > TSDB_MAX_DB_CACHE_LAST) return -1;
if (pCfg->cacheLast < TSDB_CACHE_MODEL_NONE || pCfg->cacheLast > TSDB_CACHE_MODEL_BOTH) return -1;
if (pCfg->hashMethod != 1) return -1;
if (pCfg->replications > mndGetDnodeSize(pMnode)) {
terrno = TSDB_CODE_MND_NO_ENOUGH_DNODES;
@@ -341,8 +341,8 @@ static void mndSetDefaultDbCfg(SDbCfg *pCfg) {
if (pCfg->compression < 0) pCfg->compression = TSDB_DEFAULT_COMP_LEVEL;
if (pCfg->replications < 0) pCfg->replications = TSDB_DEFAULT_DB_REPLICA;
if (pCfg->strict < 0) pCfg->strict = TSDB_DEFAULT_DB_STRICT;
if (pCfg->cacheLast < 0) pCfg->cacheLast = TSDB_DEFAULT_CACHE_LAST;
if (pCfg->cacheLastSize <= 0) pCfg->cacheLastSize = TSDB_DEFAULT_CACHE_LAST_SIZE;
if (pCfg->cacheLast < 0) pCfg->cacheLast = TSDB_DEFAULT_CACHE_MODEL;
if (pCfg->cacheLastSize <= 0) pCfg->cacheLastSize = TSDB_DEFAULT_CACHE_SIZE;
if (pCfg->numOfRetensions < 0) pCfg->numOfRetensions = 0;
if (pCfg->schemaless < 0) pCfg->schemaless = TSDB_DB_SCHEMALESS_OFF;
}
@@ -1443,6 +1443,22 @@ char *buildRetension(SArray *pRetension) {
return p1;
}

static const char *getCacheModelStr(int8_t cacheModel) {
switch (cacheModel) {
case TSDB_CACHE_MODEL_NONE:
return TSDB_CACHE_MODEL_NONE_STR;
case TSDB_CACHE_MODEL_LAST_ROW:
return TSDB_CACHE_MODEL_LAST_ROW_STR;
case TSDB_CACHE_MODEL_LAST_VALUE:
return TSDB_CACHE_MODEL_LAST_VALUE_STR;
case TSDB_CACHE_MODEL_BOTH:
return TSDB_CACHE_MODEL_BOTH_STR;
default:
break;
}
return "unknown";
}

static void dumpDbInfoData(SSDataBlock *pBlock, SDbObj *pDb, SShowObj *pShow, int32_t rows, int64_t numOfTables,
bool sysDb, ESdbStatus objStatus, bool sysinfo) {
int32_t cols = 0;
@@ -1491,7 +1507,7 @@ static void dumpDbInfoData(SSDataBlock *pBlock, SDbObj *pDb, SShowObj *pShow, in
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, rows, (const char *)&pDb->cfg.replications, false);

const char *strictStr = pDb->cfg.strict ? "strict" : "no_strict";
const char *strictStr = pDb->cfg.strict ? "on" : "off";
char strictVstr[24] = {0};
STR_WITH_SIZE_TO_VARSTR(strictVstr, strictStr, strlen(strictStr));
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
@@ -1539,8 +1555,11 @@ static void dumpDbInfoData(SSDataBlock *pBlock, SDbObj *pDb, SShowObj *pShow, in
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, rows, (const char *)&pDb->cfg.compression, false);

const char *cacheModelStr = getCacheModelStr(pDb->cfg.cacheLast);
char cacheModelVstr[24] = {0};
STR_WITH_SIZE_TO_VARSTR(cacheModelVstr, cacheModelStr, strlen(cacheModelStr));
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataAppend(pColInfo, rows, (const char *)&pDb->cfg.cacheLast, false);
colDataAppend(pColInfo, rows, (const char *)cacheModelVstr, false);

const char *precStr = NULL;
switch (pDb->cfg.precision) {
@@ -177,7 +177,7 @@ static int32_t mndStreamActionUpdate(SSdb *pSdb, SStreamObj *pOldStream, SStream

taosWLockLatch(&pOldStream->lock);

// TODO handle update
pOldStream->status = pNewStream->status;

taosWUnLockLatch(&pOldStream->lock);
return 0;
@@ -395,6 +395,20 @@ int32_t mndPersistDropStreamLog(SMnode *pMnode, STrans *pTrans, SStreamObj *pStr
return 0;
}

static int32_t mndSetStreamRecover(SMnode *pMnode, STrans *pTrans, const SStreamObj *pStream) {
SStreamObj streamObj = {0};
memcpy(streamObj.name, pStream->name, TSDB_STREAM_FNAME_LEN);
streamObj.status = STREAM_STATUS__RECOVER;
SSdbRaw *pCommitRaw = mndStreamActionEncode(&streamObj);
if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) {
mError("stream trans:%d, failed to append commit log since %s", pTrans->id, terrstr());
mndTransDrop(pTrans);
return -1;
}
sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
return 0;
}

static int32_t mndCreateStbForStream(SMnode *pMnode, STrans *pTrans, const SStreamObj *pStream, const char *user) {
SStbObj *pStb = NULL;
SDbObj *pDb = NULL;
@@ -492,6 +506,76 @@ static int32_t mndPersistTaskDropReq(STrans *pTrans, SStreamTask *pTask) {
return 0;
}

static int32_t mndPersistTaskRecoverReq(STrans *pTrans, SStreamTask *pTask) {
SMStreamTaskRecoverReq *pReq = taosMemoryCalloc(1, sizeof(SMStreamTaskRecoverReq));
if (pReq == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return -1;
}
pReq->streamId = pTask->streamId;
pReq->taskId = pTask->taskId;
int32_t len;
int32_t code;
tEncodeSize(tEncodeSMStreamTaskRecoverReq, pReq, len, code);
if (code != 0) {
return -1;
}
void *buf = taosMemoryCalloc(1, sizeof(SMsgHead) + len);
if (buf == NULL) {
return -1;
}
void *abuf = POINTER_SHIFT(buf, sizeof(SMsgHead));
SEncoder encoder;
tEncoderInit(&encoder, abuf, len);
tEncodeSMStreamTaskRecoverReq(&encoder, pReq);
((SMsgHead *)buf)->vgId = pTask->nodeId;

STransAction action = {0};
memcpy(&action.epSet, &pTask->epSet, sizeof(SEpSet));
action.pCont = buf;
action.contLen = sizeof(SMsgHead) + len;
action.msgType = TDMT_STREAM_TASK_RECOVER;
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(buf);
return -1;
}
return 0;
}

int32_t mndRecoverStreamTasks(SMnode *pMnode, STrans *pTrans, SStreamObj *pStream) {
if (pStream->isDistributed) {
int32_t lv = taosArrayGetSize(pStream->tasks);
for (int32_t i = 0; i < lv; i++) {
SArray *pTasks = taosArrayGetP(pStream->tasks, i);
int32_t sz = taosArrayGetSize(pTasks);
SStreamTask *pTask = taosArrayGetP(pTasks, 0);
if (!pTask->isDataScan && pTask->execType != TASK_EXEC__NONE) {
ASSERT(sz == 1);
if (mndPersistTaskRecoverReq(pTrans, pTask) < 0) {
return -1;
}
} else {
continue;
}
}
} else {
int32_t lv = taosArrayGetSize(pStream->tasks);
for (int32_t i = 0; i < lv; i++) {
SArray *pTasks = taosArrayGetP(pStream->tasks, i);
int32_t sz = taosArrayGetSize(pTasks);
for (int32_t j = 0; j < sz; j++) {
SStreamTask *pTask = taosArrayGetP(pTasks, j);
if (!pTask->isDataScan) break;
ASSERT(pTask->execType != TASK_EXEC__NONE);
if (mndPersistTaskRecoverReq(pTrans, pTask) < 0) {
return -1;
}
}
}
}
return 0;
}

int32_t mndDropStreamTasks(SMnode *pMnode, STrans *pTrans, SStreamObj *pStream) {
int32_t lv = taosArrayGetSize(pStream->tasks);
for (int32_t i = 0; i < lv; i++) {
@@ -712,14 +796,14 @@ static int32_t mndProcessRecoverStreamReq(SRpcMsg *pReq) {
mDebug("trans:%d, used to drop stream:%s", pTrans->id, recoverReq.name);

// broadcast to recover all tasks
if (mndDropStreamTasks(pMnode, pTrans, pStream) < 0) {
if (mndRecoverStreamTasks(pMnode, pTrans, pStream) < 0) {
mError("stream:%s, failed to recover task since %s", recoverReq.name, terrstr());
sdbRelease(pMnode->pSdb, pStream);
return -1;
}

// update stream status
if (mndPersistDropStreamLog(pMnode, pTrans, pStream) < 0) {
if (mndSetStreamRecover(pMnode, pTrans, pStream) < 0) {
sdbRelease(pMnode->pSdb, pStream);
return -1;
}
@@ -30,15 +30,6 @@
extern "C" {
#endif

enum {
STREAM_STATUS__RUNNING = 1,
STREAM_STATUS__STOPPED,
STREAM_STATUS__CREATING,
STREAM_STATUS__STOPING,
STREAM_STATUS__RESTORING,
STREAM_STATUS__DELETING,
};

typedef struct {
SHashObj* pHash; // taskId -> SStreamTask
} SStreamMeta;
@@ -15,10 +15,10 @@

#include "command.h"
#include "catalog.h"
#include "tdatablock.h"
#include "tglobal.h"
#include "commandInt.h"
#include "scheduler.h"
#include "tdatablock.h"
#include "tglobal.h"

extern SConfig* tsCfg;
@@ -222,7 +222,7 @@ static void setCreateDBResultIntoDataBlock(SSDataBlock* pBlock, char* dbFName, S
char* retentions = buildRetension(pCfg->pRetensions);

len += sprintf(buf2 + VARSTR_HEADER_SIZE,
"CREATE DATABASE `%s` BUFFER %d CACHELAST %d COMP %d DURATION %dm "
"CREATE DATABASE `%s` BUFFER %d CACHEMODEL %d COMP %d DURATION %dm "
"FSYNC %d MAXROWS %d MINROWS %d KEEP %dm,%dm,%dm PAGES %d PAGESIZE %d PRECISION '%s' REPLICA %d "
"STRICT %d WAL %d VGROUPS %d SINGLE_STABLE %d",
dbFName, pCfg->buffer, pCfg->cacheLast, pCfg->compression, pCfg->daysPerFile, pCfg->fsyncPeriod,
@@ -483,7 +483,7 @@ static int32_t execShowCreateSTable(SShowCreateTableStmt* pStmt, SRetrieveTableR

static int32_t execAlterCmd(char* cmd, char* value, bool* processed) {
int32_t code = 0;

if (0 == strcasecmp(cmd, COMMAND_RESET_LOG)) {
taosResetLog();
cfgDumpCfg(tsCfg, 0, false);
@@ -502,13 +502,13 @@ _return:
if (code) {
terrno = code;
}

return code;

return code;
}

static int32_t execAlterLocal(SAlterLocalStmt* pStmt) {
bool processed = false;

if (execAlterCmd(pStmt->config, pStmt->value, &processed)) {
return terrno;
}
@@ -516,7 +516,7 @@ static int32_t execAlterLocal(SAlterLocalStmt* pStmt) {
if (processed) {
goto _return;
}

if (cfgSetItem(tsCfg, pStmt->config, pStmt->value, CFG_STYPE_ALTER_CMD)) {
return terrno;
}
@@ -371,6 +371,13 @@ typedef struct SessionWindowSupporter {
uint8_t parentType;
} SessionWindowSupporter;

typedef struct STimeWindowSupp {
int8_t calTrigger;
int64_t waterMark;
TSKEY maxTs;
SColumnInfoData timeWindowData; // query time window info for scalar function execution.
} STimeWindowAggSupp;

typedef struct SStreamScanInfo {
uint64_t tableUid; // queried super table uid
SExprInfo* pPseudoExpr;
@@ -407,6 +414,7 @@ typedef struct SStreamScanInfo {
SSDataBlock* pDeleteDataRes; // delete data SSDataBlock
int32_t deleteDataIndex;
STimeWindow updateWin;
STimeWindowAggSupp twAggSup;

// status for tmq
// SSchemaWrapper schema;
@@ -452,13 +460,6 @@ typedef struct SAggSupporter {
int32_t resultRowSize; // the result buffer size for each result row, with the meta data size for each row
} SAggSupporter;

typedef struct STimeWindowSupp {
int8_t calTrigger;
int64_t waterMark;
TSKEY maxTs;
SColumnInfoData timeWindowData; // query time window info for scalar function execution.
} STimeWindowAggSupp;

typedef struct SIntervalAggOperatorInfo {
// SOptrBasicInfo should be first, SAggSupporter should be second for stream encode
SOptrBasicInfo binfo; // basic info
@@ -952,6 +953,7 @@ bool isInTimeWindow(STimeWindow* pWin, TSKEY ts, int64_t gap);
int32_t updateSessionWindowInfo(SResultWindowInfo* pWinInfo, TSKEY* pStartTs,
TSKEY* pEndTs, int32_t rows, int32_t start, int64_t gap, SHashObj* pStDeleted);
bool functionNeedToExecute(SqlFunctionCtx* pCtx);
bool isCloseWindow(STimeWindow* pWin, STimeWindowAggSupp* pSup);

int32_t finalizeResultRowIntoResultDataBlock(SDiskbasedBuf* pBuf, SResultRowPosition* resultRowPosition,
SqlFunctionCtx* pCtx, SExprInfo* pExprInfo, int32_t numOfExprs, const int32_t* rowCellOffset,
@@ -802,7 +802,12 @@ static bool isStateWindow(SStreamScanInfo* pInfo) {

static bool isIntervalWindow(SStreamScanInfo* pInfo) {
return pInfo->sessionSup.parentType == QUERY_NODE_PHYSICAL_PLAN_STREAM_INTERVAL ||
pInfo->sessionSup.parentType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SEMI_INTERVAL;
pInfo->sessionSup.parentType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SEMI_INTERVAL ||
pInfo->sessionSup.parentType == QUERY_NODE_PHYSICAL_PLAN_STREAM_FINAL_INTERVAL;
}

static bool isSignleIntervalWindow(SStreamScanInfo* pInfo) {
return pInfo->sessionSup.parentType == QUERY_NODE_PHYSICAL_PLAN_STREAM_INTERVAL;
}

static uint64_t getGroupId(SOperatorInfo* pOperator, uint64_t uid) {
@@ -1130,9 +1135,14 @@ static void setUpdateData(SStreamScanInfo* pInfo, SSDataBlock* pBlock, SSDataBlo
static void checkUpdateData(SStreamScanInfo* pInfo, bool invertible, SSDataBlock* pBlock, bool out) {
SColumnInfoData* pColDataInfo = taosArrayGet(pBlock->pDataBlock, pInfo->primaryTsIndex);
ASSERT(pColDataInfo->info.type == TSDB_DATA_TYPE_TIMESTAMP);
TSKEY* ts = (TSKEY*)pColDataInfo->pData;
TSKEY* tsCol = (TSKEY*)pColDataInfo->pData;
for (int32_t rowId = 0; rowId < pBlock->info.rows; rowId++) {
if (updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.uid, ts[rowId]) && out) {
SResultRowInfo dumyInfo;
dumyInfo.cur.pageId = -1;
STimeWindow win = getActiveTimeWindow(NULL, &dumyInfo, tsCol[rowId], &pInfo->interval, TSDB_ORDER_ASC);
// must check update info first.
bool update = updateInfoIsUpdated(pInfo->pUpdateInfo, pBlock->info.uid, tsCol[rowId]);
if ( (update || (isSignleIntervalWindow(pInfo) && isCloseWindow(&win, &pInfo->twAggSup)) ) && out) {
taosArrayPush(pInfo->tsArray, &rowId);
}
}
@@ -1413,6 +1423,7 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
pInfo->tsArrayIndex = 0;
checkUpdateData(pInfo, true, pInfo->pRes, true);
setUpdateData(pInfo, pInfo->pRes, pInfo->pUpdateRes);
pInfo->twAggSup.maxTs = TMAX(pInfo->twAggSup.maxTs, pBlockInfo->window.ekey);
if (pInfo->pUpdateRes->info.rows > 0) {
if (pInfo->pUpdateRes->info.type == STREAM_CLEAR) {
pInfo->updateResIndex = 0;
@@ -1584,13 +1595,14 @@ SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhys
pInfo->pUpdateRes = createResDataBlock(pDescNode);
pInfo->pCondition = pScanPhyNode->node.pConditions;
pInfo->scanMode = STREAM_SCAN_FROM_READERHANDLE;
pInfo->sessionSup = (SessionWindowSupporter){.pStreamAggSup = NULL, .gap = -1};
pInfo->sessionSup = (SessionWindowSupporter){.pStreamAggSup = NULL, .gap = -1, .parentType = QUERY_NODE_PHYSICAL_PLAN};
pInfo->groupId = 0;
pInfo->pPullDataRes = createPullDataBlock();
pInfo->pStreamScanOp = pOperator;
pInfo->deleteDataIndex = 0;
pInfo->pDeleteDataRes = createPullDataBlock();
pInfo->updateWin = (STimeWindow){.skey = INT64_MAX, .ekey = INT64_MAX};
pInfo->twAggSup = *pTwSup;

pOperator->name = "StreamScanOperator";
pOperator->operatorType = QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN;
@@ -652,8 +652,8 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num
}

void printDataBlock(SSDataBlock* pBlock, const char* flag) {
if (pBlock == NULL) {
qDebug("======printDataBlock Block is Null");
if (!pBlock || pBlock->info.rows == 0) {
qDebug("======printDataBlock: Block is Null or Empty");
return;
}
char* pBuf = NULL;
@@ -1355,7 +1355,7 @@ static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup,
int32_t size = taosArrayGetSize(chAy);
qDebug("window %" PRId64 " wait child size:%d", win.skey, size);
for (int32_t i = 0; i < size; i++) {
qDebug("window %" PRId64 " wait chid id:%d", win.skey, *(int32_t*)taosArrayGet(chAy, i));
qDebug("window %" PRId64 " wait child id:%d", win.skey, *(int32_t*)taosArrayGet(chAy, i));
}
continue;
} else if (pPullDataMap) {
@@ -1626,6 +1626,12 @@ SSDataBlock* createDeleteBlock() {
return pBlock;
}

void initIntervalDownStream(SOperatorInfo* downstream, uint8_t type) {
ASSERT(downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN);
SStreamScanInfo* pScanInfo = downstream->info;
pScanInfo->sessionSup.parentType = type;
}

SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
STimeWindowAggSupp* pTwAggSupp, SIntervalPhysiNode* pPhyNode,
@@ -1701,6 +1707,10 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo*
pOperator->fpSet = createOperatorFpSet(doOpenIntervalAgg, doBuildIntervalResult, doStreamIntervalAgg, NULL,
destroyIntervalOperatorInfo, aggEncodeResultRow, aggDecodeResultRow, NULL);

if (nodeType(pPhyNode) == QUERY_NODE_PHYSICAL_PLAN_STREAM_INTERVAL) {
initIntervalDownStream(downstream, QUERY_NODE_PHYSICAL_PLAN_STREAM_INTERVAL);
}

code = appendDownstream(pOperator, &downstream, 1);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
@@ -2476,12 +2486,14 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc
} else {
int32_t index = -1;
SArray* chArray = NULL;
int32_t chId = 0;
if (chIds) {
chArray = *(void**)chIds;
int32_t chId = getChildIndex(pSDataBlock);
chId = getChildIndex(pSDataBlock);
index = taosArraySearchIdx(chArray, &chId, compareInt32Val, TD_EQ);
}
if (index != -1 && pSDataBlock->info.type == STREAM_PULL_DATA) {
qDebug("======delete child id %d", chId);
taosArrayRemove(chArray, index);
if (taosArrayGetSize(chArray) == 0) {
// pull data is over
@@ -3010,6 +3022,7 @@ void initDummyFunction(SqlFunctionCtx* pDummy, SqlFunctionCtx* pCtx, int32_t num
pDummy[i].functionId = pCtx[i].functionId;
}
}

void initDownStream(SOperatorInfo* downstream, SStreamAggSupporter* pAggSup, int64_t gap, int64_t waterMark,
uint8_t type) {
ASSERT(downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN);
@@ -621,7 +621,7 @@ static int32_t createInitialSources(SSortHandle* pHandle) {
pHandle->sortElapsed += el;

// All sorted data can fit in memory, external memory sort is not needed. Return to directly
if (size <= sortBufSize) {
if (size <= sortBufSize && pHandle->pBuf == NULL) {
pHandle->cmpParam.numOfSources = 1;
pHandle->inMemSort = true;
@@ -44,10 +44,9 @@ extern "C" {
#define FUNC_MGT_FORBID_FILL_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(15)
#define FUNC_MGT_INTERVAL_INTERPO_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(16)
#define FUNC_MGT_FORBID_STREAM_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(17)
#define FUNC_MGT_FORBID_WINDOW_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(18)
#define FUNC_MGT_FORBID_GROUP_BY_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(19)
#define FUNC_MGT_SYSTEM_INFO_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(20)
#define FUNC_MGT_CLIENT_PC_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(21)
#define FUNC_MGT_SYSTEM_INFO_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(18)
#define FUNC_MGT_CLIENT_PC_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(19)
#define FUNC_MGT_MULTI_ROWS_FUNC FUNC_MGT_FUNC_CLASSIFICATION_MASK(20)

#define FUNC_MGT_TEST_MASK(val, mask) (((val) & (mask)) != 0)
@@ -2034,6 +2034,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getPercentileFuncEnv,
.initFunc = percentileFunctionSetup,
.processFunc = percentileFunction,
.sprocessFunc = percentileScalarFunction,
.finalizeFunc = percentileFinalize,
.invertFunc = NULL,
.combineFunc = NULL,
@@ -2046,6 +2047,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
.getEnvFunc = getApercentileFuncEnv,
.initFunc = apercentileFunctionSetup,
.processFunc = apercentileFunction,
.sprocessFunc = apercentileScalarFunction,
.finalizeFunc = apercentileFinalize,
.invertFunc = NULL,
.combineFunc = apercentileCombine,
@ -2079,7 +2081,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
{
|
||||
.name = "top",
|
||||
.type = FUNCTION_TYPE_TOP,
|
||||
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
|
||||
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
|
||||
.translateFunc = translateTopBot,
|
||||
.getEnvFunc = getTopBotFuncEnv,
|
||||
.initFunc = topBotFunctionSetup,
|
||||
|
@ -2093,7 +2095,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
{
|
||||
.name = "bottom",
|
||||
.type = FUNCTION_TYPE_BOTTOM,
|
||||
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
|
||||
.classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
|
||||
.translateFunc = translateTopBot,
|
||||
.getEnvFunc = getTopBotFuncEnv,
|
||||
.initFunc = topBotFunctionSetup,
|
||||
|
@ -2113,6 +2115,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
.getEnvFunc = getSpreadFuncEnv,
|
||||
.initFunc = spreadFunctionSetup,
|
||||
.processFunc = spreadFunction,
|
||||
.sprocessFunc = spreadScalarFunction,
|
||||
.finalizeFunc = spreadFinalize,
|
||||
.invertFunc = NULL,
|
||||
.combineFunc = spreadCombine,
|
||||
|
@ -2204,6 +2207,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
.getEnvFunc = getDerivativeFuncEnv,
|
||||
.initFunc = derivativeFuncSetup,
|
||||
.processFunc = derivativeFunction,
|
||||
.sprocessFunc = derivativeScalarFunction,
|
||||
.finalizeFunc = functionFinalize
|
||||
},
|
||||
{
|
||||
|
@ -2214,6 +2218,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
|
|||
.getEnvFunc = getIrateFuncEnv,
|
||||
.initFunc = irateFuncSetup,
|
||||
.processFunc = irateFunction,
|
||||
.sprocessFunc = irateScalarFunction,
|
||||
.finalizeFunc = irateFinalize
|
||||
},
|
||||
{
|
||||
|
@@ -2315,12 +2320,13 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .getEnvFunc   = getTwaFuncEnv,
     .initFunc     = twaFunctionSetup,
     .processFunc  = twaFunction,
+    .sprocessFunc = twaScalarFunction,
     .finalizeFunc = twaFinalize
   },
   {
     .name = "histogram",
     .type = FUNCTION_TYPE_HISTOGRAM,
-    .classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_FORBID_FILL_FUNC,
+    .classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_FORBID_FILL_FUNC,
     .translateFunc = translateHistogram,
     .getEnvFunc    = getHistogramFuncEnv,
     .initFunc      = histogramFunctionSetup,

@@ -2396,7 +2402,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
   {
     .name = "diff",
    .type = FUNCTION_TYPE_DIFF,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_WINDOW_FUNC,
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
     .translateFunc = translateDiff,
     .getEnvFunc    = getDiffFuncEnv,
     .initFunc      = diffFunctionSetup,

@@ -2406,7 +2412,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
   {
     .name = "statecount",
     .type = FUNCTION_TYPE_STATE_COUNT,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_WINDOW_FUNC,
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
     .translateFunc = translateStateCount,
     .getEnvFunc    = getStateFuncEnv,
     .initFunc      = functionSetup,

@@ -2416,7 +2422,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
   {
     .name = "stateduration",
     .type = FUNCTION_TYPE_STATE_DURATION,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_WINDOW_FUNC,
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
     .translateFunc = translateStateDuration,
     .getEnvFunc    = getStateFuncEnv,
     .initFunc      = functionSetup,

@@ -2426,7 +2432,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
   {
     .name = "csum",
     .type = FUNCTION_TYPE_CSUM,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_WINDOW_FUNC,
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
     .translateFunc = translateCsum,
     .getEnvFunc    = getCsumFuncEnv,
     .initFunc      = functionSetup,
@@ -2436,17 +2442,18 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
   {
     .name = "mavg",
     .type = FUNCTION_TYPE_MAVG,
-    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_WINDOW_FUNC,
+    .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
     .translateFunc = translateMavg,
     .getEnvFunc    = getMavgFuncEnv,
     .initFunc      = mavgFunctionSetup,
     .processFunc   = mavgFunction,
+    .sprocessFunc  = mavgScalarFunction,
     .finalizeFunc  = NULL
   },
   {
     .name = "sample",
     .type = FUNCTION_TYPE_SAMPLE,
-    .classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
+    .classification = FUNC_MGT_AGG_FUNC | FUNC_MGT_SELECT_FUNC | FUNC_MGT_MULTI_ROWS_FUNC | FUNC_MGT_FORBID_STREAM_FUNC,
     .translateFunc = translateSample,
     .getEnvFunc    = getSampleFuncEnv,
     .initFunc      = sampleFunctionSetup,

@@ -2457,7 +2464,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .name = "tail",
     .type = FUNCTION_TYPE_TAIL,
     .classification = FUNC_MGT_SELECT_FUNC | FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC | FUNC_MGT_FORBID_STREAM_FUNC |
-                      FUNC_MGT_FORBID_WINDOW_FUNC | FUNC_MGT_FORBID_GROUP_BY_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
+                      FUNC_MGT_IMPLICIT_TS_FUNC,
     .translateFunc = translateTail,
     .getEnvFunc    = getTailFuncEnv,
     .initFunc      = tailFunctionSetup,

@@ -2468,7 +2475,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .name = "unique",
     .type = FUNCTION_TYPE_UNIQUE,
     .classification = FUNC_MGT_SELECT_FUNC | FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC |
-                      FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_FORBID_WINDOW_FUNC | FUNC_MGT_FORBID_GROUP_BY_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
+                      FUNC_MGT_FORBID_STREAM_FUNC | FUNC_MGT_IMPLICIT_TS_FUNC,
     .translateFunc = translateUnique,
     .getEnvFunc    = getUniqueFuncEnv,
     .initFunc      = uniqueFunctionSetup,
@@ -175,16 +175,14 @@ bool fmIsIntervalInterpoFunc(int32_t funcId) { return isSpecificClassifyFunc(fun

 bool fmIsForbidStreamFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_FORBID_STREAM_FUNC); }

-bool fmIsForbidWindowFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_FORBID_WINDOW_FUNC); }
-
-bool fmIsForbidGroupByFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_FORBID_GROUP_BY_FUNC); }
-
 bool fmIsSystemInfoFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_SYSTEM_INFO_FUNC); }

 bool fmIsImplicitTsFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_IMPLICIT_TS_FUNC); }

 bool fmIsClientPseudoColumnFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_CLIENT_PC_FUNC); }

+bool fmIsMultiRowsFunc(int32_t funcId) { return isSpecificClassifyFunc(funcId, FUNC_MGT_MULTI_ROWS_FUNC); }
+
 bool fmIsInterpFunc(int32_t funcId) {
   if (funcId < 0 || funcId >= funcMgtBuiltinsNum) {
     return false;
@@ -3663,7 +3663,7 @@ static int32_t jsonToDownstreamSourceNode(const SJson* pJson, void* pObj) {
 }

 static const char* jkDatabaseOptionsBuffer = "Buffer";
-static const char* jkDatabaseOptionsCachelast = "Cachelast";
+static const char* jkDatabaseOptionsCacheModel = "CacheModel";
 static const char* jkDatabaseOptionsCompressionLevel = "CompressionLevel";
 static const char* jkDatabaseOptionsDaysPerFileNode = "DaysPerFileNode";
 static const char* jkDatabaseOptionsDaysPerFile = "DaysPerFile";

@@ -3687,7 +3687,7 @@ static int32_t databaseOptionsToJson(const void* pObj, SJson* pJson) {

   int32_t code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsBuffer, pNode->buffer);
   if (TSDB_CODE_SUCCESS == code) {
-    code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsCachelast, pNode->cacheLast);
+    code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsCacheModel, pNode->cacheModel);
   }
   if (TSDB_CODE_SUCCESS == code) {
     code = tjsonAddIntegerToObject(pJson, jkDatabaseOptionsCompressionLevel, pNode->compressionLevel);

@@ -3749,7 +3749,7 @@ static int32_t jsonToDatabaseOptions(const SJson* pJson, void* pObj) {

   int32_t code = tjsonGetIntValue(pJson, jkDatabaseOptionsBuffer, &pNode->buffer);
   if (TSDB_CODE_SUCCESS == code) {
-    code = tjsonGetTinyIntValue(pJson, jkDatabaseOptionsCachelast, &pNode->cacheLast);
+    code = tjsonGetTinyIntValue(pJson, jkDatabaseOptionsCacheModel, &pNode->cacheModel);
   }
   if (TSDB_CODE_SUCCESS == code) {
     code = tjsonGetTinyIntValue(pJson, jkDatabaseOptionsCompressionLevel, &pNode->compressionLevel);
@@ -38,8 +38,8 @@ typedef struct SAstCreateContext {

 typedef enum EDatabaseOptionType {
   DB_OPTION_BUFFER = 1,
-  DB_OPTION_CACHELAST,
-  DB_OPTION_CACHELASTSIZE,
+  DB_OPTION_CACHEMODEL,
+  DB_OPTION_CACHESIZE,
   DB_OPTION_COMP,
   DB_OPTION_DAYS,
   DB_OPTION_FSYNC,
@@ -172,8 +172,8 @@ exists_opt(A) ::= .

 db_options(A) ::= .                                              { A = createDefaultDatabaseOptions(pCxt); }
 db_options(A) ::= db_options(B) BUFFER NK_INTEGER(C).            { A = setDatabaseOption(pCxt, B, DB_OPTION_BUFFER, &C); }
-db_options(A) ::= db_options(B) CACHELAST NK_INTEGER(C).         { A = setDatabaseOption(pCxt, B, DB_OPTION_CACHELAST, &C); }
-db_options(A) ::= db_options(B) CACHELASTSIZE NK_INTEGER(C).     { A = setDatabaseOption(pCxt, B, DB_OPTION_CACHELASTSIZE, &C); }
+db_options(A) ::= db_options(B) CACHEMODEL NK_STRING(C).         { A = setDatabaseOption(pCxt, B, DB_OPTION_CACHEMODEL, &C); }
+db_options(A) ::= db_options(B) CACHESIZE NK_INTEGER(C).         { A = setDatabaseOption(pCxt, B, DB_OPTION_CACHESIZE, &C); }
 db_options(A) ::= db_options(B) COMP NK_INTEGER(C).              { A = setDatabaseOption(pCxt, B, DB_OPTION_COMP, &C); }
 db_options(A) ::= db_options(B) DURATION NK_INTEGER(C).          { A = setDatabaseOption(pCxt, B, DB_OPTION_DAYS, &C); }
 db_options(A) ::= db_options(B) DURATION NK_VARIABLE(C).         { A = setDatabaseOption(pCxt, B, DB_OPTION_DAYS, &C); }

@@ -186,7 +186,7 @@ db_options(A) ::= db_options(B) PAGES NK_INTEGER(C).
 db_options(A) ::= db_options(B) PAGESIZE NK_INTEGER(C).          { A = setDatabaseOption(pCxt, B, DB_OPTION_PAGESIZE, &C); }
 db_options(A) ::= db_options(B) PRECISION NK_STRING(C).          { A = setDatabaseOption(pCxt, B, DB_OPTION_PRECISION, &C); }
 db_options(A) ::= db_options(B) REPLICA NK_INTEGER(C).           { A = setDatabaseOption(pCxt, B, DB_OPTION_REPLICA, &C); }
-db_options(A) ::= db_options(B) STRICT NK_INTEGER(C).            { A = setDatabaseOption(pCxt, B, DB_OPTION_STRICT, &C); }
+db_options(A) ::= db_options(B) STRICT NK_STRING(C).             { A = setDatabaseOption(pCxt, B, DB_OPTION_STRICT, &C); }
 db_options(A) ::= db_options(B) WAL NK_INTEGER(C).               { A = setDatabaseOption(pCxt, B, DB_OPTION_WAL, &C); }
 db_options(A) ::= db_options(B) VGROUPS NK_INTEGER(C).           { A = setDatabaseOption(pCxt, B, DB_OPTION_VGROUPS, &C); }
 db_options(A) ::= db_options(B) SINGLE_STABLE NK_INTEGER(C).     { A = setDatabaseOption(pCxt, B, DB_OPTION_SINGLE_STABLE, &C); }

@@ -199,8 +199,8 @@ alter_db_options(A) ::= alter_db_options(B) alter_db_option(C).
 %type alter_db_option { SAlterOption }
 %destructor alter_db_option { }
 alter_db_option(A) ::= BUFFER NK_INTEGER(B).                     { A.type = DB_OPTION_BUFFER; A.val = B; }
-alter_db_option(A) ::= CACHELAST NK_INTEGER(B).                  { A.type = DB_OPTION_CACHELAST; A.val = B; }
-alter_db_option(A) ::= CACHELASTSIZE NK_INTEGER(B).              { A.type = DB_OPTION_CACHELASTSIZE; A.val = B; }
+alter_db_option(A) ::= CACHEMODEL NK_STRING(B).                  { A.type = DB_OPTION_CACHEMODEL; A.val = B; }
+alter_db_option(A) ::= CACHESIZE NK_INTEGER(B).                  { A.type = DB_OPTION_CACHESIZE; A.val = B; }
 alter_db_option(A) ::= FSYNC NK_INTEGER(B).                      { A.type = DB_OPTION_FSYNC; A.val = B; }
 alter_db_option(A) ::= KEEP integer_list(B).                     { A.type = DB_OPTION_KEEP; A.pList = B; }
 alter_db_option(A) ::= KEEP variable_list(B).                    { A.type = DB_OPTION_KEEP; A.pList = B; }
@@ -760,8 +760,8 @@ SNode* createDefaultDatabaseOptions(SAstCreateContext* pCxt) {
   SDatabaseOptions* pOptions = (SDatabaseOptions*)nodesMakeNode(QUERY_NODE_DATABASE_OPTIONS);
   CHECK_OUT_OF_MEM(pOptions);
   pOptions->buffer = TSDB_DEFAULT_BUFFER_PER_VNODE;
-  pOptions->cacheLast = TSDB_DEFAULT_CACHE_LAST;
-  pOptions->cacheLastSize = TSDB_DEFAULT_CACHE_LAST_SIZE;
+  pOptions->cacheModel = TSDB_DEFAULT_CACHE_MODEL;
+  pOptions->cacheLastSize = TSDB_DEFAULT_CACHE_SIZE;
   pOptions->compressionLevel = TSDB_DEFAULT_COMP_LEVEL;
   pOptions->daysPerFile = TSDB_DEFAULT_DAYS_PER_FILE;
   pOptions->fsyncPeriod = TSDB_DEFAULT_FSYNC_PERIOD;

@@ -787,7 +787,7 @@ SNode* createAlterDatabaseOptions(SAstCreateContext* pCxt) {
   SDatabaseOptions* pOptions = (SDatabaseOptions*)nodesMakeNode(QUERY_NODE_DATABASE_OPTIONS);
   CHECK_OUT_OF_MEM(pOptions);
   pOptions->buffer = -1;
-  pOptions->cacheLast = -1;
+  pOptions->cacheModel = -1;
   pOptions->cacheLastSize = -1;
   pOptions->compressionLevel = -1;
   pOptions->daysPerFile = -1;

@@ -815,10 +815,10 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
     case DB_OPTION_BUFFER:
       ((SDatabaseOptions*)pOptions)->buffer = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
       break;
-    case DB_OPTION_CACHELAST:
-      ((SDatabaseOptions*)pOptions)->cacheLast = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);
+    case DB_OPTION_CACHEMODEL:
+      COPY_STRING_FORM_STR_TOKEN(((SDatabaseOptions*)pOptions)->cacheModelStr, (SToken*)pVal);
       break;
-    case DB_OPTION_CACHELASTSIZE:
+    case DB_OPTION_CACHESIZE:
       ((SDatabaseOptions*)pOptions)->cacheLastSize = taosStr2Int32(((SToken*)pVal)->z, NULL, 10);
       break;
     case DB_OPTION_COMP:

@@ -858,7 +858,7 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
       ((SDatabaseOptions*)pOptions)->replica = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);
       break;
     case DB_OPTION_STRICT:
-      ((SDatabaseOptions*)pOptions)->strict = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);
+      COPY_STRING_FORM_STR_TOKEN(((SDatabaseOptions*)pOptions)->strictStr, (SToken*)pVal);
       break;
     case DB_OPTION_WAL:
       ((SDatabaseOptions*)pOptions)->walLevel = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);

@@ -872,10 +872,6 @@ SNode* setDatabaseOption(SAstCreateContext* pCxt, SNode* pOptions, EDatabaseOpti
     case DB_OPTION_RETENTIONS:
       ((SDatabaseOptions*)pOptions)->pRetentions = pVal;
       break;
-    // case DB_OPTION_SCHEMALESS:
-    //   ((SDatabaseOptions*)pOptions)->schemaless = taosStr2Int8(((SToken*)pVal)->z, NULL, 10);
-    //   ((SDatabaseOptions*)pOptions)->schemaless = 0;
-    //   break;
     default:
       break;
   }
@@ -52,8 +52,8 @@ static SKeyword keywordTable[] = {
     {"BUFSIZE", TK_BUFSIZE},
     {"BY", TK_BY},
     {"CACHE", TK_CACHE},
-    {"CACHELAST", TK_CACHELAST},
-    {"CACHELASTSIZE", TK_CACHELASTSIZE},
+    {"CACHEMODEL", TK_CACHEMODEL},
+    {"CACHESIZE", TK_CACHESIZE},
     {"CAST", TK_CAST},
     {"CLIENT_VERSION", TK_CLIENT_VERSION},
     {"CLUSTER", TK_CLUSTER},
@@ -1058,7 +1058,9 @@ static int32_t translateAggFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
   if (hasInvalidFuncNesting(pFunc->pParameterList)) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AGG_FUNC_NESTING);
   }
-  if (isSelectStmt(pCxt->pCurrStmt) && ((SSelectStmt*)pCxt->pCurrStmt)->hasIndefiniteRowsFunc) {
+  // The auto-generated COUNT function in the DELETE statement is legal
+  if (isSelectStmt(pCxt->pCurrStmt) &&
+      (((SSelectStmt*)pCxt->pCurrStmt)->hasIndefiniteRowsFunc || ((SSelectStmt*)pCxt->pCurrStmt)->hasMultiRowsFunc)) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
   }
@@ -1093,7 +1095,27 @@ static int32_t translateIndefiniteRowsFunc(STranslateContext* pCxt, SFunctionNod
     return TSDB_CODE_SUCCESS;
   }
   if (!isSelectStmt(pCxt->pCurrStmt) || SQL_CLAUSE_SELECT != pCxt->currClause ||
-      ((SSelectStmt*)pCxt->pCurrStmt)->hasIndefiniteRowsFunc || ((SSelectStmt*)pCxt->pCurrStmt)->hasAggFuncs) {
+      ((SSelectStmt*)pCxt->pCurrStmt)->hasIndefiniteRowsFunc || ((SSelectStmt*)pCxt->pCurrStmt)->hasAggFuncs ||
+      ((SSelectStmt*)pCxt->pCurrStmt)->hasMultiRowsFunc) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
   }
+  if (NULL != ((SSelectStmt*)pCxt->pCurrStmt)->pWindow || NULL != ((SSelectStmt*)pCxt->pCurrStmt)->pGroupByList) {
+    return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
+                                   "%s function is not supported in window query or group query", pFunc->functionName);
+  }
+  if (hasInvalidFuncNesting(pFunc->pParameterList)) {
+    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_AGG_FUNC_NESTING);
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
+static int32_t translateMultiRowsFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
+  if (!fmIsMultiRowsFunc(pFunc->funcId)) {
+    return TSDB_CODE_SUCCESS;
+  }
+  if (!isSelectStmt(pCxt->pCurrStmt) || SQL_CLAUSE_SELECT != pCxt->currClause ||
+      ((SSelectStmt*)pCxt->pCurrStmt)->hasIndefiniteRowsFunc || ((SSelectStmt*)pCxt->pCurrStmt)->hasAggFuncs ||
+      ((SSelectStmt*)pCxt->pCurrStmt)->hasMultiRowsFunc) {
+    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
+  }
   if (hasInvalidFuncNesting(pFunc->pParameterList)) {
@@ -1131,16 +1153,6 @@ static int32_t translateWindowPseudoColumnFunc(STranslateContext* pCxt, SFunctio
   return TSDB_CODE_SUCCESS;
 }

-static int32_t translateForbidWindowFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
-  if (!fmIsForbidWindowFunc(pFunc->funcId)) {
-    return TSDB_CODE_SUCCESS;
-  }
-  if (isSelectStmt(pCxt->pCurrStmt) && NULL != ((SSelectStmt*)pCxt->pCurrStmt)->pWindow) {
-    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_WINDOW_NOT_ALLOWED_FUNC, pFunc->functionName);
-  }
-  return TSDB_CODE_SUCCESS;
-}
-
 static int32_t translateForbidStreamFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
   if (!fmIsForbidStreamFunc(pFunc->funcId)) {
     return TSDB_CODE_SUCCESS;
@@ -1151,21 +1163,15 @@ static int32_t translateForbidStreamFunc(STranslateContext* pCxt, SFunctionNode*
   return TSDB_CODE_SUCCESS;
 }

-static int32_t translateForbidGroupByFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
-  if (!fmIsForbidGroupByFunc(pFunc->funcId)) {
-    return TSDB_CODE_SUCCESS;
-  }
-  if (isSelectStmt(pCxt->pCurrStmt) && NULL != ((SSelectStmt*)pCxt->pCurrStmt)->pGroupByList) {
-    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_GROUP_BY_NOT_ALLOWED_FUNC, pFunc->functionName);
-  }
-  return TSDB_CODE_SUCCESS;
-}
-
 static int32_t translateRepeatScanFunc(STranslateContext* pCxt, SFunctionNode* pFunc) {
   if (!fmIsRepeatScanFunc(pFunc->funcId)) {
     return TSDB_CODE_SUCCESS;
   }
-  if (isSelectStmt(pCxt->pCurrStmt) && NULL != ((SSelectStmt*)pCxt->pCurrStmt)->pFromTable) {
+  if (isSelectStmt(pCxt->pCurrStmt)) {
+    // select percentile() without from clause is also valid
+    if (NULL == ((SSelectStmt*)pCxt->pCurrStmt)->pFromTable) {
+      return TSDB_CODE_SUCCESS;
+    }
     SNode* pTable = ((SSelectStmt*)pCxt->pCurrStmt)->pFromTable;
     if (QUERY_NODE_REAL_TABLE == nodeType(pTable) &&
         (TSDB_CHILD_TABLE == ((SRealTableNode*)pTable)->pMeta->tableType ||
@@ -1191,7 +1197,7 @@ static int32_t translateMultiResFunc(STranslateContext* pCxt, SFunctionNode* pFu
   if (!fmIsMultiResFunc(pFunc->funcId)) {
     return TSDB_CODE_SUCCESS;
   }
-  if (SQL_CLAUSE_SELECT != pCxt->currClause ) {
+  if (SQL_CLAUSE_SELECT != pCxt->currClause) {
     SNode* pPara = nodesListGetNode(pFunc->pParameterList, 0);
     if (isStar(pPara) || isTableStar(pPara)) {
       return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC,
@@ -1206,6 +1212,7 @@ static void setFuncClassification(SNode* pCurrStmt, SFunctionNode* pFunc) {
     pSelect->hasAggFuncs = pSelect->hasAggFuncs ? true : fmIsAggFunc(pFunc->funcId);
     pSelect->hasRepeatScanFuncs = pSelect->hasRepeatScanFuncs ? true : fmIsRepeatScanFunc(pFunc->funcId);
     pSelect->hasIndefiniteRowsFunc = pSelect->hasIndefiniteRowsFunc ? true : fmIsIndefiniteRowsFunc(pFunc->funcId);
+    pSelect->hasMultiRowsFunc = pSelect->hasMultiRowsFunc ? true : fmIsMultiRowsFunc(pFunc->funcId);
     if (fmIsSelectFunc(pFunc->funcId)) {
       pSelect->hasSelectFunc = true;
       ++(pSelect->selectFuncNum);
@@ -1322,21 +1329,18 @@ static int32_t translateNoramlFunction(STranslateContext* pCxt, SFunctionNode* p
   if (TSDB_CODE_SUCCESS == code) {
     code = translateWindowPseudoColumnFunc(pCxt, pFunc);
   }
-  if (TSDB_CODE_SUCCESS == code) {
-    code = translateForbidWindowFunc(pCxt, pFunc);
-  }
   if (TSDB_CODE_SUCCESS == code) {
     code = translateForbidStreamFunc(pCxt, pFunc);
   }
-  if (TSDB_CODE_SUCCESS == code) {
-    code = translateForbidGroupByFunc(pCxt, pFunc);
-  }
   if (TSDB_CODE_SUCCESS == code) {
     code = translateRepeatScanFunc(pCxt, pFunc);
   }
   if (TSDB_CODE_SUCCESS == code) {
     code = translateMultiResFunc(pCxt, pFunc);
   }
+  if (TSDB_CODE_SUCCESS == code) {
+    code = translateMultiRowsFunc(pCxt, pFunc);
+  }
   if (TSDB_CODE_SUCCESS == code) {
     setFuncClassification(pCxt->pCurrStmt, pFunc);
   }
@@ -2938,7 +2942,7 @@ static int32_t buildCreateDbReq(STranslateContext* pCxt, SCreateDatabaseStmt* pS
   pReq->compression = pStmt->pOptions->compressionLevel;
   pReq->replications = pStmt->pOptions->replica;
   pReq->strict = pStmt->pOptions->strict;
-  pReq->cacheLast = pStmt->pOptions->cacheLast;
+  pReq->cacheLast = pStmt->pOptions->cacheModel;
   pReq->cacheLastSize = pStmt->pOptions->cacheLastSize;
   pReq->schemaless = pStmt->pOptions->schemaless;
   pReq->ignoreExist = pStmt->ignoreExists;
@@ -3019,13 +3023,31 @@ static int32_t checkDbKeepOption(STranslateContext* pCxt, SDatabaseOptions* pOpt
   return TSDB_CODE_SUCCESS;
 }

+static int32_t checkDbCacheModelOption(STranslateContext* pCxt, SDatabaseOptions* pOptions) {
+  if ('\0' != pOptions->cacheModelStr[0]) {
+    if (0 == strcasecmp(pOptions->cacheModelStr, TSDB_CACHE_MODEL_NONE_STR)) {
+      pOptions->cacheModel = TSDB_CACHE_MODEL_NONE;
+    } else if (0 == strcasecmp(pOptions->cacheModelStr, TSDB_CACHE_MODEL_LAST_ROW_STR)) {
+      pOptions->cacheModel = TSDB_CACHE_MODEL_LAST_ROW;
+    } else if (0 == strcasecmp(pOptions->cacheModelStr, TSDB_CACHE_MODEL_LAST_VALUE_STR)) {
+      pOptions->cacheModel = TSDB_CACHE_MODEL_LAST_VALUE;
+    } else if (0 == strcasecmp(pOptions->cacheModelStr, TSDB_CACHE_MODEL_BOTH_STR)) {
+      pOptions->cacheModel = TSDB_CACHE_MODEL_BOTH;
+    } else {
+      return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STR_OPTION, "cacheModel",
+                                  pOptions->cacheModelStr);
+    }
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
 static int32_t checkDbPrecisionOption(STranslateContext* pCxt, SDatabaseOptions* pOptions) {
   if ('\0' != pOptions->precisionStr[0]) {
-    if (0 == strcmp(pOptions->precisionStr, TSDB_TIME_PRECISION_MILLI_STR)) {
+    if (0 == strcasecmp(pOptions->precisionStr, TSDB_TIME_PRECISION_MILLI_STR)) {
       pOptions->precision = TSDB_TIME_PRECISION_MILLI;
-    } else if (0 == strcmp(pOptions->precisionStr, TSDB_TIME_PRECISION_MICRO_STR)) {
+    } else if (0 == strcasecmp(pOptions->precisionStr, TSDB_TIME_PRECISION_MICRO_STR)) {
       pOptions->precision = TSDB_TIME_PRECISION_MICRO;
-    } else if (0 == strcmp(pOptions->precisionStr, TSDB_TIME_PRECISION_NANO_STR)) {
+    } else if (0 == strcasecmp(pOptions->precisionStr, TSDB_TIME_PRECISION_NANO_STR)) {
       pOptions->precision = TSDB_TIME_PRECISION_NANO;
     } else {
       return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STR_OPTION, "precision", pOptions->precisionStr);
@@ -3034,6 +3056,19 @@ static int32_t checkDbPrecisionOption(STranslateContext* pCxt, SDatabaseOptions*
   return TSDB_CODE_SUCCESS;
 }

+static int32_t checkDbStrictOption(STranslateContext* pCxt, SDatabaseOptions* pOptions) {
+  if ('\0' != pOptions->strictStr[0]) {
+    if (0 == strcasecmp(pOptions->strictStr, TSDB_DB_STRICT_OFF_STR)) {
+      pOptions->strict = TSDB_DB_STRICT_OFF;
+    } else if (0 == strcasecmp(pOptions->strictStr, TSDB_DB_STRICT_ON_STR)) {
+      pOptions->strict = TSDB_DB_STRICT_ON;
+    } else {
+      return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_STR_OPTION, "strict", pOptions->strictStr);
+    }
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
 static int32_t checkDbEnumOption(STranslateContext* pCxt, const char* pName, int32_t val, int32_t v1, int32_t v2) {
   if (val >= 0 && val != v1 && val != v2) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_ENUM_OPTION, pName, val, v1, v2);
@@ -3100,11 +3135,10 @@ static int32_t checkDatabaseOptions(STranslateContext* pCxt, const char* pDbName
   int32_t code =
       checkRangeOption(pCxt, "buffer", pOptions->buffer, TSDB_MIN_BUFFER_PER_VNODE, TSDB_MAX_BUFFER_PER_VNODE);
   if (TSDB_CODE_SUCCESS == code) {
-    code = checkRangeOption(pCxt, "cacheLast", pOptions->cacheLast, TSDB_MIN_DB_CACHE_LAST, TSDB_MAX_DB_CACHE_LAST);
+    code = checkDbCacheModelOption(pCxt, pOptions);
   }
   if (TSDB_CODE_SUCCESS == code) {
-    code = checkRangeOption(pCxt, "cacheLastSize", pOptions->cacheLastSize, TSDB_MIN_DB_CACHE_LAST_SIZE,
-                            TSDB_MAX_DB_CACHE_LAST_SIZE);
+    code = checkRangeOption(pCxt, "cacheSize", pOptions->cacheLastSize, TSDB_MIN_DB_CACHE_SIZE, TSDB_MAX_DB_CACHE_SIZE);
   }
   if (TSDB_CODE_SUCCESS == code) {
     code = checkRangeOption(pCxt, "compression", pOptions->compressionLevel, TSDB_MIN_COMP_LEVEL, TSDB_MAX_COMP_LEVEL);
@@ -3140,7 +3174,7 @@ static int32_t checkDatabaseOptions(STranslateContext* pCxt, const char* pDbName
     code = checkDbEnumOption(pCxt, "replications", pOptions->replica, TSDB_MIN_DB_REPLICA, TSDB_MAX_DB_REPLICA);
   }
   if (TSDB_CODE_SUCCESS == code) {
-    code = checkDbEnumOption(pCxt, "strict", pOptions->strict, TSDB_DB_STRICT_OFF, TSDB_DB_STRICT_ON);
+    code = checkDbStrictOption(pCxt, pOptions);
   }
   if (TSDB_CODE_SUCCESS == code) {
     code = checkDbEnumOption(pCxt, "walLevel", pOptions->walLevel, TSDB_MIN_WAL_LEVEL, TSDB_MAX_WAL_LEVEL);
@@ -3225,7 +3259,7 @@ static void buildAlterDbReq(STranslateContext* pCxt, SAlterDatabaseStmt* pStmt,
   pReq->fsyncPeriod = pStmt->pOptions->fsyncPeriod;
   pReq->walLevel = pStmt->pOptions->walLevel;
   pReq->strict = pStmt->pOptions->strict;
-  pReq->cacheLast = pStmt->pOptions->cacheLast;
+  pReq->cacheLast = pStmt->pOptions->cacheModel;
   pReq->cacheLastSize = pStmt->pOptions->cacheLastSize;
   pReq->replications = pStmt->pOptions->replica;
   return;
@@ -384,32 +384,32 @@ static const YYACTIONTYPE yy_action[] = {
 /* 1630 */ 464, 1792, 468, 467, 1512, 469, 1511, 1496, 1587, 1824,
 /* 1640 */ 1154, 50, 196, 295, 1793, 586, 1795, 1796, 582, 1824,
 /* 1650 */ 577, 1153, 1586, 290, 1793, 586, 1795, 1796, 582, 1810,
-/* 1660 */ 577, 1079, 632, 1078, 634, 1077, 1076, 584, 1073, 1521,
-/* 1670 */ 1072, 319, 1761, 320, 583, 1071, 1070, 1516, 1514, 321,
-/* 1680 */ 496, 1792, 1495, 498, 1494, 500, 1493, 502, 493, 1732,
-/* 1690 */ 94, 551, 15, 1792, 1237, 1726, 140, 509, 1713, 1824,
-/* 1700 */ 1711, 1712, 1710, 146, 1793, 586, 1795, 1796, 582, 1810,
-/* 1710 */ 577, 1709, 1707, 56, 1699, 1247, 229, 581, 510, 227,
-/* 1720 */ 214, 1810, 1761, 16, 583, 232, 339, 225, 322, 584,
-/* 1730 */ 219, 78, 515, 41, 1761, 17, 583, 47, 79, 23,
-/* 1740 */ 524, 1437, 84, 234, 13, 243, 236, 1419, 1951, 1824,
+/* 1660 */ 577, 1079, 632, 1078, 634, 1077, 1076, 584, 1073, 1071,
+/* 1670 */ 1072, 1521, 1761, 319, 583, 1070, 1516, 320, 1514, 321,
+/* 1680 */ 496, 1792, 1495, 498, 1494, 500, 1493, 502, 493, 94,
+/* 1690 */ 1732, 551, 509, 1792, 1237, 1726, 140, 1713, 1711, 1824,
+/* 1700 */ 1712, 1710, 1709, 146, 1793, 586, 1795, 1796, 582, 1810,
+/* 1710 */ 577, 1247, 1707, 56, 1699, 41, 227, 581, 510, 84,
+/* 1720 */ 214, 1810, 1761, 16, 583, 232, 339, 15, 322, 584,
+/* 1730 */ 219, 225, 515, 243, 1761, 1437, 583, 47, 78, 79,
+/* 1740 */ 524, 23, 242, 229, 236, 234, 25, 1419, 1951, 1824,
 /* 1750 */ 1421, 238, 147, 294, 1793, 586, 1795, 1796, 582, 241,
-/* 1760 */ 577, 1824, 1843, 242, 1782, 295, 1793, 586, 1795, 1796,
-/* 1770 */ 582, 1792, 577, 24, 25, 252, 46, 1414, 83, 18,
-/* 1780 */ 1781, 1792, 1394, 1393, 151, 1449, 1448, 333, 1453, 1454,
-/* 1790 */ 1443, 1452, 334, 10, 1280, 1356, 1331, 45, 19, 1810,
-/* 1800 */ 1827, 576, 1311, 1329, 341, 1328, 31, 584, 152, 1810,
-/* 1810 */ 12, 20, 1761, 165, 583, 21, 589, 584, 585, 587,
-/* 1820 */ 342, 1140, 1761, 1137, 583, 591, 594, 593, 1134, 596,
-/* 1830 */ 597, 1792, 1128, 599, 600, 602, 1132, 1131, 1117, 1824,
-/* 1840 */ 609, 1792, 1149, 295, 1793, 586, 1795, 1796, 582, 1824,
-/* 1850 */ 577, 1792, 1126, 281, 1793, 586, 1795, 1796, 582, 1810,
-/* 1860 */ 577, 603, 85, 86, 62, 263, 1145, 584, 1130, 1810,
-/* 1870 */ 1129, 1042, 1761, 618, 583, 264, 1067, 584, 1086, 1810,
-/* 1880 */ 621, 1065, 1761, 1064, 583, 1063, 1062, 584, 1061, 1060,
-/* 1890 */ 1059, 1058, 1761, 1083, 583, 1081, 1055, 1054, 1053, 1824,
-/* 1900 */ 1050, 1049, 1528, 282, 1793, 586, 1795, 1796, 582, 1824,
-/* 1910 */ 577, 1048, 1047, 289, 1793, 586, 1795, 1796, 582, 1824,
+/* 1760 */ 577, 1824, 1843, 1782, 17, 295, 1793, 586, 1795, 1796,
+/* 1770 */ 582, 1792, 577, 24, 252, 1414, 83, 46, 1781, 1394,
+/* 1780 */ 1449, 1792, 1393, 151, 18, 1448, 333, 1453, 1452, 10,
+/* 1790 */ 1454, 45, 1443, 334, 1280, 1356, 1827, 1311, 19, 1810,
+/* 1800 */ 589, 1331, 1329, 13, 341, 576, 31, 584, 1328, 1810,
+/* 1810 */ 152, 12, 1761, 165, 583, 20, 21, 584, 585, 587,
+/* 1820 */ 342, 1140, 1761, 591, 583, 1137, 593, 594, 596, 1134,
+/* 1830 */ 1128, 1792, 597, 599, 600, 602, 1132, 1117, 1131, 1824,
+/* 1840 */ 1149, 1792, 1126, 295, 1793, 586, 1795, 1796, 582, 1824,
+/* 1850 */ 577, 1792, 263, 281, 1793, 586, 1795, 1796, 582, 1810,
+/* 1860 */ 577, 603, 85, 609, 86, 62, 1130, 584, 1129, 1810,
+/* 1870 */ 1145, 618, 1761, 1042, 583, 1086, 1067, 584, 621, 1810,
+/* 1880 */ 264, 1065, 1761, 1062, 583, 1064, 1063, 584, 1061, 1060,
+/* 1890 */ 1059, 1058, 1761, 1083, 583, 1081, 1048, 1055, 1054, 1824,
+/* 1900 */ 1053, 1050, 1528, 282, 1793, 586, 1795, 1796, 582, 1824,
+/* 1910 */ 577, 1049, 1047, 289, 1793, 586, 1795, 1796, 582, 1824,
 /* 1920 */ 577, 643, 644, 291, 1793, 586, 1795, 1796, 582, 645,
 /* 1930 */ 577, 1526, 1792, 647, 648, 649, 1524, 1522, 651, 652,
 /* 1940 */ 653, 655, 656, 657, 1510, 659, 1004, 1492, 267, 663,
@@ -641,30 +641,30 @@ static const YYCODETYPE yy_lookahead[] = {
/* 1630 */ 47, 259, 47, 35, 0, 39, 0, 0, 0, 327,
|
||||
/* 1640 */ 35, 94, 92, 331, 332, 333, 334, 335, 336, 327,
|
||||
/* 1650 */ 338, 22, 0, 331, 332, 333, 334, 335, 336, 287,
|
||||
/* 1660 */ 338, 35, 43, 35, 43, 35, 35, 295, 35, 0,
|
||||
/* 1670 */ 35, 22, 300, 22, 302, 35, 35, 0, 0, 22,
|
||||
/* 1680 */ 35, 259, 0, 35, 0, 35, 0, 22, 49, 0,
|
||||
/* 1690 */ 20, 369, 85, 259, 35, 0, 172, 22, 0, 327,
|
||||
/* 1660 */ 338, 35, 43, 35, 43, 35, 35, 295, 35, 22,
|
||||
/* 1670 */ 35, 0, 300, 22, 302, 35, 0, 22, 0, 22,
|
||||
/* 1680 */ 35, 259, 0, 35, 0, 35, 0, 22, 49, 20,
|
||||
/* 1690 */ 0, 369, 22, 259, 35, 0, 172, 0, 0, 327,
|
||||
/* 1700 */ 0, 0, 0, 331, 332, 333, 334, 335, 336, 287,
|
||||
/* 1710 */ 338, 0, 0, 153, 0, 181, 149, 295, 153, 39,
|
||||
/* 1710 */ 338, 181, 0, 153, 0, 43, 39, 295, 153, 95,
|
||||
/* 1720 */ 150, 287, 300, 230, 302, 46, 292, 85, 153, 295,
|
||||
/* 1730 */ 86, 85, 155, 43, 300, 230, 302, 43, 85, 85,
|
||||
/* 1740 */ 151, 86, 95, 85, 230, 46, 86, 86, 376, 327,
|
||||
/* 1730 */ 86, 85, 155, 46, 300, 86, 302, 43, 85, 85,
|
||||
/* 1740 */ 151, 85, 43, 149, 86, 85, 43, 86, 376, 327,
|
||||
/* 1750 */ 86, 85, 85, 331, 332, 333, 334, 335, 336, 85,
|
||||
/* 1760 */ 338, 327, 340, 43, 46, 331, 332, 333, 334, 335,
|
||||
/* 1770 */ 336, 259, 338, 85, 43, 46, 43, 86, 85, 43,
|
||||
/* 1780 */ 46, 259, 86, 86, 46, 35, 35, 35, 35, 86,
|
||||
/* 1790 */ 86, 35, 35, 2, 22, 193, 86, 224, 43, 287,
|
||||
/* 1800 */ 85, 85, 22, 86, 292, 86, 85, 295, 46, 287,
|
||||
/* 1810 */ 85, 85, 300, 46, 302, 85, 35, 295, 195, 96,
|
||||
/* 1820 */ 35, 86, 300, 86, 302, 85, 85, 35, 86, 35,
|
||||
/* 1830 */ 85, 259, 86, 35, 85, 35, 109, 109, 22, 327,
|
||||
/* 1840 */ 97, 259, 35, 331, 332, 333, 334, 335, 336, 327,
|
||||
/* 1850 */ 338, 259, 86, 331, 332, 333, 334, 335, 336, 287,
|
||||
/* 1860 */ 338, 85, 85, 85, 85, 43, 22, 295, 109, 287,
|
||||
/* 1870 */ 109, 62, 300, 61, 302, 43, 35, 295, 68, 287,
|
||||
/* 1880 */ 83, 35, 300, 35, 302, 35, 35, 295, 35, 22,
|
||||
/* 1890 */ 35, 35, 300, 68, 302, 35, 35, 35, 35, 327,
|
||||
/* 1760 */ 338, 327, 340, 46, 230, 331, 332, 333, 334, 335,
|
||||
/* 1770 */ 336, 259, 338, 85, 46, 86, 85, 43, 46, 86,
|
||||
/* 1780 */ 35, 259, 86, 46, 43, 35, 35, 35, 35, 2,
|
||||
/* 1790 */ 86, 224, 86, 35, 22, 193, 85, 22, 43, 287,
|
||||
/* 1800 */ 35, 86, 86, 230, 292, 85, 85, 295, 86, 287,
|
||||
/* 1810 */ 46, 85, 300, 46, 302, 85, 85, 295, 195, 96,
|
||||
/* 1820 */ 35, 86, 300, 85, 302, 86, 35, 85, 35, 86,
|
||||
/* 1830 */ 86, 259, 85, 35, 85, 35, 109, 22, 109, 327,
|
||||
/* 1840 */ 35, 259, 86, 331, 332, 333, 334, 335, 336, 327,
|
||||
/* 1850 */ 338, 259, 43, 331, 332, 333, 334, 335, 336, 287,
|
||||
/* 1860 */ 338, 85, 85, 97, 85, 85, 109, 295, 109, 287,
|
||||
/* 1870 */ 22, 61, 300, 62, 302, 68, 35, 295, 83, 287,
|
||||
/* 1880 */ 43, 35, 300, 22, 302, 35, 35, 295, 35, 22,
|
||||
/* 1890 */ 35, 35, 300, 68, 302, 35, 22, 35, 35, 327,
|
||||
/* 1900 */ 35, 35, 0, 331, 332, 333, 334, 335, 336, 327,
|
||||
/* 1910 */ 338, 35, 35, 331, 332, 333, 334, 335, 336, 327,
|
||||
/* 1920 */ 338, 35, 47, 331, 332, 333, 334, 335, 336, 39,
|
||||
|
@@ -783,23 +783,23 @@ static const unsigned short int yy_shift_ofst[] = {
/* 450 */ 1603, 1606, 1608, 1553, 1611, 1613, 1581, 1571, 1580, 1620,
|
||||
/* 460 */ 1586, 1576, 1587, 1625, 1593, 1583, 1590, 1627, 1598, 1585,
|
||||
/* 470 */ 1596, 1634, 1636, 1637, 1638, 1547, 1550, 1605, 1629, 1652,
|
||||
/* 480 */ 1626, 1628, 1630, 1631, 1619, 1621, 1633, 1635, 1640, 1641,
|
||||
/* 490 */ 1669, 1649, 1677, 1651, 1639, 1678, 1657, 1645, 1682, 1648,
|
||||
/* 500 */ 1684, 1650, 1686, 1665, 1670, 1689, 1560, 1659, 1695, 1524,
|
||||
/* 510 */ 1675, 1565, 1570, 1698, 1700, 1575, 1577, 1701, 1702, 1711,
|
||||
/* 520 */ 1607, 1644, 1534, 1712, 1642, 1589, 1646, 1714, 1680, 1567,
|
||||
/* 530 */ 1653, 1647, 1679, 1690, 1493, 1654, 1655, 1658, 1660, 1661,
|
||||
/* 540 */ 1666, 1694, 1664, 1667, 1674, 1688, 1691, 1720, 1699, 1718,
|
||||
/* 550 */ 1693, 1731, 1505, 1696, 1697, 1729, 1573, 1733, 1734, 1738,
|
||||
/* 560 */ 1703, 1736, 1514, 1704, 1750, 1751, 1752, 1753, 1756, 1757,
|
||||
/* 570 */ 1704, 1791, 1772, 1602, 1755, 1715, 1710, 1716, 1717, 1721,
|
||||
/* 580 */ 1719, 1762, 1725, 1726, 1767, 1780, 1623, 1730, 1723, 1735,
|
||||
/* 590 */ 1781, 1785, 1740, 1737, 1792, 1741, 1742, 1794, 1745, 1746,
|
||||
/* 600 */ 1798, 1749, 1766, 1800, 1776, 1727, 1728, 1759, 1761, 1816,
|
||||
/* 610 */ 1743, 1777, 1778, 1807, 1779, 1822, 1822, 1844, 1809, 1812,
|
||||
/* 620 */ 1841, 1810, 1797, 1832, 1846, 1848, 1850, 1851, 1853, 1867,
|
||||
/* 630 */ 1855, 1856, 1825, 1619, 1860, 1621, 1861, 1862, 1863, 1865,
|
||||
/* 640 */ 1866, 1876, 1877, 1902, 1886, 1875, 1890, 1931, 1898, 1887,
|
||||
/* 480 */ 1626, 1628, 1630, 1631, 1619, 1621, 1633, 1635, 1647, 1640,
|
||||
/* 490 */ 1671, 1651, 1676, 1655, 1639, 1678, 1657, 1645, 1682, 1648,
|
||||
/* 500 */ 1684, 1650, 1686, 1665, 1669, 1690, 1560, 1659, 1695, 1524,
|
||||
/* 510 */ 1670, 1565, 1570, 1697, 1698, 1575, 1577, 1700, 1701, 1702,
|
||||
/* 520 */ 1642, 1644, 1530, 1712, 1646, 1589, 1653, 1714, 1677, 1594,
|
||||
/* 530 */ 1654, 1624, 1679, 1672, 1493, 1656, 1649, 1660, 1658, 1661,
|
||||
/* 540 */ 1666, 1694, 1664, 1667, 1674, 1688, 1689, 1699, 1687, 1717,
|
||||
/* 550 */ 1691, 1703, 1534, 1693, 1696, 1728, 1567, 1734, 1732, 1737,
|
||||
/* 560 */ 1704, 1741, 1573, 1706, 1745, 1750, 1751, 1752, 1753, 1758,
|
||||
/* 570 */ 1706, 1787, 1772, 1602, 1755, 1711, 1715, 1720, 1716, 1721,
|
||||
/* 580 */ 1722, 1764, 1726, 1730, 1767, 1775, 1623, 1731, 1723, 1735,
|
||||
/* 590 */ 1765, 1785, 1738, 1739, 1791, 1742, 1743, 1793, 1747, 1744,
|
||||
/* 600 */ 1798, 1749, 1756, 1800, 1776, 1727, 1729, 1757, 1759, 1815,
|
||||
/* 610 */ 1766, 1777, 1779, 1805, 1780, 1809, 1809, 1848, 1811, 1810,
|
||||
/* 620 */ 1841, 1807, 1795, 1837, 1846, 1850, 1851, 1861, 1853, 1867,
|
||||
/* 630 */ 1855, 1856, 1825, 1619, 1860, 1621, 1862, 1863, 1865, 1866,
|
||||
/* 640 */ 1876, 1874, 1877, 1902, 1886, 1875, 1890, 1931, 1898, 1887,
|
||||
/* 650 */ 1896, 1936, 1903, 1892, 1901, 1937, 1906, 1895, 1904, 1944,
|
||||
/* 660 */ 1910, 1911, 1947, 1926, 1928, 1930, 1932, 1929, 1933,
|
||||
};
|
||||
|
@@ -987,8 +987,8 @@ static const YYCODETYPE yyFallback[] = {
0, /* NOT => nothing */
|
||||
0, /* EXISTS => nothing */
|
||||
0, /* BUFFER => nothing */
|
||||
0, /* CACHELAST => nothing */
|
||||
0, /* CACHELASTSIZE => nothing */
|
||||
0, /* CACHEMODEL => nothing */
|
||||
0, /* CACHESIZE => nothing */
|
||||
0, /* COMP => nothing */
|
||||
0, /* DURATION => nothing */
|
||||
0, /* NK_VARIABLE => nothing */
|
||||
|
@@ -1330,8 +1330,8 @@ static const char *const yyTokenName[] = {
/* 61 */ "NOT",
|
||||
/* 62 */ "EXISTS",
|
||||
/* 63 */ "BUFFER",
|
||||
/* 64 */ "CACHELAST",
|
||||
/* 65 */ "CACHELASTSIZE",
|
||||
/* 64 */ "CACHEMODEL",
|
||||
/* 65 */ "CACHESIZE",
|
||||
/* 66 */ "COMP",
|
||||
/* 67 */ "DURATION",
|
||||
/* 68 */ "NK_VARIABLE",
|
||||
|
@@ -1726,8 +1726,8 @@ static const char *const yyRuleName[] = {
/* 71 */ "exists_opt ::=",
|
||||
/* 72 */ "db_options ::=",
|
||||
/* 73 */ "db_options ::= db_options BUFFER NK_INTEGER",
|
||||
/* 74 */ "db_options ::= db_options CACHELAST NK_INTEGER",
|
||||
/* 75 */ "db_options ::= db_options CACHELASTSIZE NK_INTEGER",
|
||||
/* 74 */ "db_options ::= db_options CACHEMODEL NK_STRING",
|
||||
/* 75 */ "db_options ::= db_options CACHESIZE NK_INTEGER",
|
||||
/* 76 */ "db_options ::= db_options COMP NK_INTEGER",
|
||||
/* 77 */ "db_options ::= db_options DURATION NK_INTEGER",
|
||||
/* 78 */ "db_options ::= db_options DURATION NK_VARIABLE",
|
||||
|
@@ -1740,7 +1740,7 @@ static const char *const yyRuleName[] = {
/* 85 */ "db_options ::= db_options PAGESIZE NK_INTEGER",
|
||||
/* 86 */ "db_options ::= db_options PRECISION NK_STRING",
|
||||
/* 87 */ "db_options ::= db_options REPLICA NK_INTEGER",
|
||||
/* 88 */ "db_options ::= db_options STRICT NK_INTEGER",
|
||||
/* 88 */ "db_options ::= db_options STRICT NK_STRING",
|
||||
/* 89 */ "db_options ::= db_options WAL NK_INTEGER",
|
||||
/* 90 */ "db_options ::= db_options VGROUPS NK_INTEGER",
|
||||
/* 91 */ "db_options ::= db_options SINGLE_STABLE NK_INTEGER",
|
||||
|
@@ -1749,8 +1749,8 @@ static const char *const yyRuleName[] = {
/* 94 */ "alter_db_options ::= alter_db_option",
|
||||
/* 95 */ "alter_db_options ::= alter_db_options alter_db_option",
|
||||
/* 96 */ "alter_db_option ::= BUFFER NK_INTEGER",
|
||||
/* 97 */ "alter_db_option ::= CACHELAST NK_INTEGER",
|
||||
/* 98 */ "alter_db_option ::= CACHELASTSIZE NK_INTEGER",
|
||||
/* 97 */ "alter_db_option ::= CACHEMODEL NK_STRING",
|
||||
/* 98 */ "alter_db_option ::= CACHESIZE NK_INTEGER",
|
||||
/* 99 */ "alter_db_option ::= FSYNC NK_INTEGER",
|
||||
/* 100 */ "alter_db_option ::= KEEP integer_list",
|
||||
/* 101 */ "alter_db_option ::= KEEP variable_list",
|
||||
|
@@ -2816,8 +2816,8 @@ static const struct {
{ 271, 0 }, /* (71) exists_opt ::= */
|
||||
{ 270, 0 }, /* (72) db_options ::= */
|
||||
{ 270, -3 }, /* (73) db_options ::= db_options BUFFER NK_INTEGER */
|
||||
{ 270, -3 }, /* (74) db_options ::= db_options CACHELAST NK_INTEGER */
|
||||
{ 270, -3 }, /* (75) db_options ::= db_options CACHELASTSIZE NK_INTEGER */
|
||||
{ 270, -3 }, /* (74) db_options ::= db_options CACHEMODEL NK_STRING */
|
||||
{ 270, -3 }, /* (75) db_options ::= db_options CACHESIZE NK_INTEGER */
|
||||
{ 270, -3 }, /* (76) db_options ::= db_options COMP NK_INTEGER */
|
||||
{ 270, -3 }, /* (77) db_options ::= db_options DURATION NK_INTEGER */
|
||||
{ 270, -3 }, /* (78) db_options ::= db_options DURATION NK_VARIABLE */
|
||||
|
@@ -2830,7 +2830,7 @@ static const struct {
{ 270, -3 }, /* (85) db_options ::= db_options PAGESIZE NK_INTEGER */
|
||||
{ 270, -3 }, /* (86) db_options ::= db_options PRECISION NK_STRING */
|
||||
{ 270, -3 }, /* (87) db_options ::= db_options REPLICA NK_INTEGER */
|
||||
{ 270, -3 }, /* (88) db_options ::= db_options STRICT NK_INTEGER */
|
||||
{ 270, -3 }, /* (88) db_options ::= db_options STRICT NK_STRING */
|
||||
{ 270, -3 }, /* (89) db_options ::= db_options WAL NK_INTEGER */
|
||||
{ 270, -3 }, /* (90) db_options ::= db_options VGROUPS NK_INTEGER */
|
||||
{ 270, -3 }, /* (91) db_options ::= db_options SINGLE_STABLE NK_INTEGER */
|
||||
|
@@ -2839,8 +2839,8 @@ static const struct {
{ 272, -1 }, /* (94) alter_db_options ::= alter_db_option */
|
||||
{ 272, -2 }, /* (95) alter_db_options ::= alter_db_options alter_db_option */
|
||||
{ 276, -2 }, /* (96) alter_db_option ::= BUFFER NK_INTEGER */
|
||||
{ 276, -2 }, /* (97) alter_db_option ::= CACHELAST NK_INTEGER */
|
||||
{ 276, -2 }, /* (98) alter_db_option ::= CACHELASTSIZE NK_INTEGER */
|
||||
{ 276, -2 }, /* (97) alter_db_option ::= CACHEMODEL NK_STRING */
|
||||
{ 276, -2 }, /* (98) alter_db_option ::= CACHESIZE NK_INTEGER */
|
||||
{ 276, -2 }, /* (99) alter_db_option ::= FSYNC NK_INTEGER */
|
||||
{ 276, -2 }, /* (100) alter_db_option ::= KEEP integer_list */
|
||||
{ 276, -2 }, /* (101) alter_db_option ::= KEEP variable_list */
|
||||
|
@@ -3543,12 +3543,12 @@ static YYACTIONTYPE yy_reduce(
{ yylhsminor.yy616 = setDatabaseOption(pCxt, yymsp[-2].minor.yy616, DB_OPTION_BUFFER, &yymsp[0].minor.yy0); }
|
||||
yymsp[-2].minor.yy616 = yylhsminor.yy616;
|
||||
break;
|
||||
case 74: /* db_options ::= db_options CACHELAST NK_INTEGER */
|
||||
{ yylhsminor.yy616 = setDatabaseOption(pCxt, yymsp[-2].minor.yy616, DB_OPTION_CACHELAST, &yymsp[0].minor.yy0); }
|
||||
case 74: /* db_options ::= db_options CACHEMODEL NK_STRING */
|
||||
{ yylhsminor.yy616 = setDatabaseOption(pCxt, yymsp[-2].minor.yy616, DB_OPTION_CACHEMODEL, &yymsp[0].minor.yy0); }
|
||||
yymsp[-2].minor.yy616 = yylhsminor.yy616;
|
||||
break;
|
||||
case 75: /* db_options ::= db_options CACHELASTSIZE NK_INTEGER */
|
||||
{ yylhsminor.yy616 = setDatabaseOption(pCxt, yymsp[-2].minor.yy616, DB_OPTION_CACHELASTSIZE, &yymsp[0].minor.yy0); }
|
||||
case 75: /* db_options ::= db_options CACHESIZE NK_INTEGER */
|
||||
{ yylhsminor.yy616 = setDatabaseOption(pCxt, yymsp[-2].minor.yy616, DB_OPTION_CACHESIZE, &yymsp[0].minor.yy0); }
|
||||
yymsp[-2].minor.yy616 = yylhsminor.yy616;
|
||||
break;
|
||||
case 76: /* db_options ::= db_options COMP NK_INTEGER */
|
||||
|
@@ -3593,7 +3593,7 @@ static YYACTIONTYPE yy_reduce(
{ yylhsminor.yy616 = setDatabaseOption(pCxt, yymsp[-2].minor.yy616, DB_OPTION_REPLICA, &yymsp[0].minor.yy0); }
|
||||
yymsp[-2].minor.yy616 = yylhsminor.yy616;
|
||||
break;
|
||||
case 88: /* db_options ::= db_options STRICT NK_INTEGER */
|
||||
case 88: /* db_options ::= db_options STRICT NK_STRING */
|
||||
{ yylhsminor.yy616 = setDatabaseOption(pCxt, yymsp[-2].minor.yy616, DB_OPTION_STRICT, &yymsp[0].minor.yy0); }
|
||||
yymsp[-2].minor.yy616 = yylhsminor.yy616;
|
||||
break;
|
||||
|
@@ -3628,11 +3628,11 @@ static YYACTIONTYPE yy_reduce(
case 96: /* alter_db_option ::= BUFFER NK_INTEGER */
|
||||
{ yymsp[-1].minor.yy409.type = DB_OPTION_BUFFER; yymsp[-1].minor.yy409.val = yymsp[0].minor.yy0; }
|
||||
break;
|
||||
case 97: /* alter_db_option ::= CACHELAST NK_INTEGER */
|
||||
{ yymsp[-1].minor.yy409.type = DB_OPTION_CACHELAST; yymsp[-1].minor.yy409.val = yymsp[0].minor.yy0; }
|
||||
case 97: /* alter_db_option ::= CACHEMODEL NK_STRING */
|
||||
{ yymsp[-1].minor.yy409.type = DB_OPTION_CACHEMODEL; yymsp[-1].minor.yy409.val = yymsp[0].minor.yy0; }
|
||||
break;
|
||||
case 98: /* alter_db_option ::= CACHELASTSIZE NK_INTEGER */
|
||||
{ yymsp[-1].minor.yy409.type = DB_OPTION_CACHELASTSIZE; yymsp[-1].minor.yy409.val = yymsp[0].minor.yy0; }
|
||||
case 98: /* alter_db_option ::= CACHESIZE NK_INTEGER */
|
||||
{ yymsp[-1].minor.yy409.type = DB_OPTION_CACHESIZE; yymsp[-1].minor.yy409.val = yymsp[0].minor.yy0; }
|
||||
break;
|
||||
case 99: /* alter_db_option ::= FSYNC NK_INTEGER */
|
||||
{ yymsp[-1].minor.yy409.type = DB_OPTION_FSYNC; yymsp[-1].minor.yy409.val = yymsp[0].minor.yy0; }
|
||||
|
|
|
@@ -38,7 +38,7 @@ TEST_F(ParserInitialATest, alterDnode) {
TEST_F(ParserInitialATest, alterDatabase) {
  useDb("root", "test");

  run("ALTER DATABASE test CACHELAST 1 FSYNC 200 WAL 1");
  run("ALTER DATABASE test CACHEMODEL 'last_row' FSYNC 200 WAL 1");

  run("ALTER DATABASE test KEEP 2400");
}
|
|
|
@@ -43,7 +43,8 @@ TEST_F(ParserInitialCTest, createBnode) {
 *
 * database_option: {
 *     BUFFER value
 *   | CACHELAST value
 *   | CACHEMODEL {'none' | 'last_row' | 'last_value' | 'both'}
 *   | CACHESIZE value
 *   | COMP {0 | 1 | 2}
 *   | DURATION value
 *   | FSYNC value
|
@@ -55,7 +56,7 @@ TEST_F(ParserInitialCTest, createBnode) {
 *   | PRECISION {'ms' | 'us' | 'ns'}
 *   | REPLICA value
 *   | RETENTIONS ingestion_duration:keep_duration ...
 *   | STRICT value
 *   | STRICT {'off' | 'on'}
 *   | WAL value
 *   | VGROUPS value
 *   | SINGLE_STABLE {0 | 1}
|
@@ -76,8 +77,8 @@ TEST_F(ParserInitialCTest, createDatabase) {
  expect.db[len] = '\0';
  expect.ignoreExist = igExists;
  expect.buffer = TSDB_DEFAULT_BUFFER_PER_VNODE;
  expect.cacheLast = TSDB_DEFAULT_CACHE_LAST;
  expect.cacheLastSize = TSDB_DEFAULT_CACHE_LAST_SIZE;
  expect.cacheLast = TSDB_DEFAULT_CACHE_MODEL;
  expect.cacheLastSize = TSDB_DEFAULT_CACHE_SIZE;
  expect.compression = TSDB_DEFAULT_COMP_LEVEL;
  expect.daysPerFile = TSDB_DEFAULT_DAYS_PER_FILE;
  expect.fsyncPeriod = TSDB_DEFAULT_FSYNC_PERIOD;
|
@@ -203,8 +204,8 @@ TEST_F(ParserInitialCTest, createDatabase) {
  setDbSchemalessFunc(1);
  run("CREATE DATABASE IF NOT EXISTS wxy_db "
      "BUFFER 64 "
      "CACHELAST 2 "
      "CACHELASTSIZE 20 "
      "CACHEMODEL 'last_value' "
      "CACHESIZE 20 "
      "COMP 1 "
      "DURATION 100 "
      "FSYNC 100 "
|
@@ -216,7 +217,7 @@ TEST_F(ParserInitialCTest, createDatabase) {
      "PRECISION 'ns' "
      "REPLICA 3 "
      "RETENTIONS 15s:7d,1m:21d,15m:500d "
      "STRICT 1 "
      "STRICT 'on' "
      "WAL 2 "
      "VGROUPS 100 "
      "SINGLE_STABLE 1 "
|
|
|
@@ -152,9 +152,9 @@ TEST_F(ParserSelectTest, IndefiniteRowsFuncSemanticCheck) {

  run("SELECT DIFF(c1), CSUM(c1) FROM t1", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);

  run("SELECT CSUM(c3) FROM t1 STATE_WINDOW(c1)", TSDB_CODE_PAR_WINDOW_NOT_ALLOWED_FUNC);
  run("SELECT CSUM(c3) FROM t1 STATE_WINDOW(c1)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);

  run("SELECT DIFF(c1) FROM t1 INTERVAL(10s)", TSDB_CODE_PAR_WINDOW_NOT_ALLOWED_FUNC);
  run("SELECT DIFF(c1) FROM t1 INTERVAL(10s)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
}

TEST_F(ParserSelectTest, useDefinedFunc) {
|
@@ -178,9 +178,9 @@ TEST_F(ParserSelectTest, uniqueFunc) {
TEST_F(ParserSelectTest, uniqueFuncSemanticCheck) {
  useDb("root", "test");

  run("SELECT UNIQUE(c1) FROM t1 INTERVAL(10S)", TSDB_CODE_PAR_WINDOW_NOT_ALLOWED_FUNC);
  run("SELECT UNIQUE(c1) FROM t1 INTERVAL(10S)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);

  run("SELECT UNIQUE(c1) FROM t1 GROUP BY c2", TSDB_CODE_PAR_GROUP_BY_NOT_ALLOWED_FUNC);
  run("SELECT UNIQUE(c1) FROM t1 GROUP BY c2", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
}

TEST_F(ParserSelectTest, tailFunc) {
|
@@ -194,9 +194,9 @@ TEST_F(ParserSelectTest, tailFunc) {
TEST_F(ParserSelectTest, tailFuncSemanticCheck) {
  useDb("root", "test");

  run("SELECT TAIL(c1, 10) FROM t1 INTERVAL(10S)", TSDB_CODE_PAR_WINDOW_NOT_ALLOWED_FUNC);
  run("SELECT TAIL(c1, 10) FROM t1 INTERVAL(10S)", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);

  run("SELECT TAIL(c1, 10) FROM t1 GROUP BY c2", TSDB_CODE_PAR_GROUP_BY_NOT_ALLOWED_FUNC);
  run("SELECT TAIL(c1, 10) FROM t1 GROUP BY c2", TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
}

TEST_F(ParserSelectTest, partitionBy) {
|
|
|
@@ -278,6 +278,10 @@ static int32_t createScanLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
|
||||
pScan->scanType = getScanType(pCxt, pScan->pScanPseudoCols, pScan->pScanCols, pScan->tableType);
|
||||
|
||||
if (NULL != pScan->pScanCols) {
|
||||
pScan->hasNormalCols = true;
|
||||
}
|
||||
|
||||
if (TSDB_CODE_SUCCESS == code) {
|
||||
code = addPrimaryKeyCol(pScan->tableId, &pScan->pScanCols);
|
||||
}
|
||||
|
|
|
@@ -2113,7 +2113,7 @@ static bool tagScanMayBeOptimized(SLogicNode* pNode) {
return false;
|
||||
}
|
||||
SScanLogicNode* pScan = (SScanLogicNode*)pNode;
|
||||
if (NULL != pScan->pScanCols) {
|
||||
if (pScan->hasNormalCols) {
|
||||
return false;
|
||||
}
|
||||
if (NULL == pNode->pParent || QUERY_NODE_LOGIC_PLAN_AGG != nodeType(pNode->pParent) ||
|
||||
|
|
|
@@ -198,8 +198,7 @@ static bool stbSplHasGatherExecFunc(const SNodeList* pFuncs) {
}
|
||||
|
||||
static bool stbSplIsMultiTbScan(bool streamQuery, SScanLogicNode* pScan) {
|
||||
return (NULL != pScan->pVgroupList && pScan->pVgroupList->numOfVgroups > 1) ||
|
||||
(streamQuery && TSDB_SUPER_TABLE == pScan->tableType);
|
||||
return (NULL != pScan->pVgroupList && pScan->pVgroupList->numOfVgroups > 1);
|
||||
}
|
||||
|
||||
static bool stbSplHasMultiTbScan(bool streamQuery, SLogicNode* pNode) {
|
||||
|
|
|
@@ -700,7 +700,7 @@ EDealRes sclRewriteNonConstOperator(SNode** pNode, SScalarCtx *ctx) {
EDealRes sclRewriteFunction(SNode** pNode, SScalarCtx *ctx) {
|
||||
SFunctionNode *node = (SFunctionNode *)*pNode;
|
||||
SNode* tnode = NULL;
|
||||
if (!fmIsScalarFunc(node->funcId)) {
|
||||
if (!fmIsScalarFunc(node->funcId) && (!ctx->dual)) {
|
||||
return DEAL_RES_CONTINUE;
|
||||
}
|
||||
|
||||
|
|
|
@@ -2307,3 +2307,117 @@ int32_t leastSQRScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarPa
pOutput->numOfRows = 1;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t percentileScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
SColumnInfoData *pInputData = pInput->columnData;
|
||||
SColumnInfoData *pOutputData = pOutput->columnData;
|
||||
|
||||
int32_t type = GET_PARAM_TYPE(pInput);
|
||||
|
||||
double val;
|
||||
bool hasNull = false;
|
||||
for (int32_t i = 0; i < pInput->numOfRows; ++i) {
|
||||
if (colDataIsNull_s(pInputData, i)) {
|
||||
hasNull = true;
|
||||
break;
|
||||
}
|
||||
char *in = pInputData->pData;
|
||||
GET_TYPED_DATA(val, double, type, in);
|
||||
}
|
||||
|
||||
if (hasNull) {
|
||||
colDataAppendNULL(pOutputData, 0);
|
||||
} else {
|
||||
colDataAppend(pOutputData, 0, (char *)&val, false);
|
||||
}
|
||||
|
||||
pOutput->numOfRows = 1;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t apercentileScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
return percentileScalarFunction(pInput, inputNum, pOutput);
|
||||
}
|
||||
|
||||
int32_t spreadScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
SColumnInfoData *pInputData = pInput->columnData;
|
||||
SColumnInfoData *pOutputData = pOutput->columnData;
|
||||
|
||||
int32_t type = GET_PARAM_TYPE(pInput);
|
||||
|
||||
double min, max;
|
||||
SET_DOUBLE_VAL(&min, DBL_MAX);
|
||||
SET_DOUBLE_VAL(&max, -DBL_MAX);
|
||||
|
||||
bool hasNull = false;
|
||||
for (int32_t i = 0; i < pInput->numOfRows; ++i) {
|
||||
if (colDataIsNull_s(pInputData, i)) {
|
||||
hasNull = true;
|
||||
break;
|
||||
}
|
||||
|
||||
char *in = pInputData->pData;
|
||||
|
||||
double val = 0;
|
||||
GET_TYPED_DATA(val, double, type, in);
|
||||
|
||||
if (val < GET_DOUBLE_VAL(&min)) {
|
||||
SET_DOUBLE_VAL(&min, val);
|
||||
}
|
||||
|
||||
if (val > GET_DOUBLE_VAL(&max)) {
|
||||
SET_DOUBLE_VAL(&max, val);
|
||||
}
|
||||
}
|
||||
|
||||
if (hasNull) {
|
||||
colDataAppendNULL(pOutputData, 0);
|
||||
} else {
|
||||
double result = max - min;
|
||||
colDataAppend(pOutputData, 0, (char *)&result, false);
|
||||
}
|
||||
|
||||
pOutput->numOfRows = 1;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t nonCalcScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
SColumnInfoData *pInputData = pInput->columnData;
|
||||
SColumnInfoData *pOutputData = pOutput->columnData;
|
||||
|
||||
int32_t type = GET_PARAM_TYPE(pInput);
|
||||
bool hasNull = false;
|
||||
|
||||
for (int32_t i = 0; i < pInput->numOfRows; ++i) {
|
||||
if (colDataIsNull_s(pInputData, i)) {
|
||||
hasNull = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
double *out = (double *)pOutputData->pData;
|
||||
if (hasNull) {
|
||||
colDataAppendNULL(pOutputData, 0);
|
||||
} else {
|
||||
*out = 0;
|
||||
}
|
||||
|
||||
pOutput->numOfRows = 1;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t derivativeScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
return nonCalcScalarFunction(pInput, inputNum, pOutput);
|
||||
}
|
||||
|
||||
int32_t irateScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
return nonCalcScalarFunction(pInput, inputNum, pOutput);
|
||||
}
|
||||
|
||||
int32_t twaScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
return avgScalarFunction(pInput, inputNum, pOutput);
|
||||
}
|
||||
|
||||
int32_t mavgScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) {
|
||||
return avgScalarFunction(pInput, inputNum, pOutput);
|
||||
}
|
||||
|
|
|
@@ -33,6 +33,8 @@ typedef struct {
static SStreamGlobalEnv streamEnv;
|
||||
|
||||
int32_t streamExec(SStreamTask* pTask, SMsgCb* pMsgCb);
|
||||
int32_t streamPipelineExec(SStreamTask* pTask, int32_t batchNum);
|
||||
|
||||
int32_t streamDispatch(SStreamTask* pTask, SMsgCb* pMsgCb);
|
||||
int32_t streamDispatchReqToData(const SStreamDispatchReq* pReq, SStreamDataBlock* pData);
|
||||
int32_t streamRetrieveReqToData(const SStreamRetrieveReq* pReq, SStreamDataBlock* pData);
|
||||
|
|
|
@@ -82,6 +82,63 @@ static FORCE_INLINE int32_t streamUpdateVer(SStreamTask* pTask, SStreamDataBlock
return 0;
|
||||
}
|
||||
|
||||
int32_t streamPipelineExec(SStreamTask* pTask, int32_t batchNum) {
|
||||
ASSERT(pTask->execType != TASK_EXEC__NONE);
|
||||
|
||||
void* exec = pTask->exec.executor;
|
||||
|
||||
while (1) {
|
||||
SArray* pRes = taosArrayInit(0, sizeof(SSDataBlock));
|
||||
if (pRes == NULL) {
|
||||
terrno = TSDB_CODE_OUT_OF_MEMORY;
|
||||
return -1;
|
||||
}
|
||||
|
||||
int32_t batchCnt = 0;
|
||||
while (1) {
|
||||
SSDataBlock* output = NULL;
|
||||
uint64_t ts = 0;
|
||||
if (qExecTask(exec, &output, &ts) < 0) {
|
||||
ASSERT(0);
|
||||
}
|
||||
if (output == NULL) break;
|
||||
|
||||
SSDataBlock block = {0};
|
||||
assignOneDataBlock(&block, output);
|
||||
block.info.childId = pTask->selfChildId;
|
||||
taosArrayPush(pRes, &block);
|
||||
|
||||
if (++batchCnt >= batchNum) break;
|
||||
}
|
||||
if (taosArrayGetSize(pRes) == 0) {
|
||||
taosArrayDestroy(pRes);
|
||||
break;
|
||||
}
|
||||
SStreamDataBlock* qRes = taosAllocateQitem(sizeof(SStreamDataBlock), DEF_QITEM);
|
||||
if (qRes == NULL) {
|
||||
taosArrayDestroyEx(pRes, (FDelete)blockDataFreeRes);
|
||||
return -1;
|
||||
}
|
||||
|
||||
qRes->type = STREAM_INPUT__DATA_BLOCK;
|
||||
qRes->blocks = pRes;
|
||||
qRes->childId = pTask->selfChildId;
|
||||
|
||||
if (streamTaskOutput(pTask, qRes) < 0) {
|
||||
taosArrayDestroyEx(pRes, (FDelete)blockDataFreeRes);
|
||||
taosFreeQitem(qRes);
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (pTask->dispatchType != TASK_DISPATCH__NONE) {
|
||||
ASSERT(pTask->sinkType == TASK_SINK__NONE);
|
||||
streamDispatch(pTask, pTask->pMsgCb);
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static SArray* streamExecForQall(SStreamTask* pTask, SArray* pRes) {
|
||||
int32_t cnt = 0;
|
||||
void* data = NULL;
|
||||
|
|
|
@@ -125,7 +125,11 @@ int32_t streamProcessFailRecoverReq(SStreamTask* pTask, SMStreamTaskRecoverReq*
}
|
||||
|
||||
if (pTask->taskStatus == TASK_STATUS__RECOVERING) {
|
||||
streamProcessRunReq(pTask);
|
||||
if (streamPipelineExec(pTask, 10) < 0) {
|
||||
// set fail
|
||||
return -1;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
|
|
@@ -67,7 +67,7 @@ class TDTestCase:
        slow = 0  # count times where lastRow on is slower
        for i in range(5):
            # switch lastRow to off and check
            tdSql.execute('alter database db cachelast 0')
            tdSql.execute("alter database db cachemodel 'none'")
            tdSql.query('show databases')
            tdSql.checkData(0,15,0)
|
||||
|
@@ -79,7 +79,7 @@ class TDTestCase:
            tdLog.debug(f'time used:{lastRow_Off_end-lastRow_Off_start}')

            # switch lastRow to on and check
            tdSql.execute('alter database db cachelast 1')
            tdSql.execute("alter database db cachemodel 'last_row'")
            tdSql.query('show databases')
            tdSql.checkData(0,15,1)
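The test migrations above and in the following hunks consistently rewrite the old integer `CACHELAST` option to the new `CACHEMODEL` string option (0 becomes 'none', 1 'last_row', 2 'last_value', 3 'both', as the Step2–Step8 changes below show). A minimal sketch of that mapping as a standalone helper; `migrate_cachelast_stmt` is a hypothetical illustration, not part of this commit:

```python
# Mapping implied by the test changes in this commit:
#   CACHELAST 0 -> CACHEMODEL 'none',  1 -> 'last_row',
#   CACHELAST 2 -> 'last_value',       3 -> 'both'
CACHELAST_TO_CACHEMODEL = {0: "none", 1: "last_row", 2: "last_value", 3: "both"}

def migrate_cachelast_stmt(stmt: str) -> str:
    """Rewrite an old-style '... cachelast N' SQL statement into the
    new '... cachemodel <string>' form (hypothetical helper)."""
    head, _, val = stmt.rpartition(" ")      # split off the integer value
    prefix, _, _ = head.rpartition(" ")      # drop the 'cachelast' keyword
    return f"{prefix} cachemodel '{CACHELAST_TO_CACHEMODEL[int(val)]}'"
```

For example, `migrate_cachelast_stmt("alter database db cachelast 0")` yields the same statement form the updated tests issue.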
|
||||
|
|
|
@@ -89,36 +89,36 @@ class TDTestCase:
        tdSql.prepare()

        # last_cache_0.sim
        tdSql.execute("create database test1 cachelast 0")
        tdSql.execute("create database test1 cachemodel 'none'")
        tdSql.execute("use test1")
        self.insertData()
        self.executeQueries()

        tdSql.execute("alter database test1 cachelast 1")
        tdSql.execute("alter database test1 cachemodel 'last_row'")
        self.executeQueries()
        tdDnodes.stop(1)
        tdDnodes.start(1)
        self.executeQueries()

        tdSql.execute("alter database test1 cachelast 0")
        tdSql.execute("alter database test1 cachemodel 'none'")
        self.executeQueries()
        tdDnodes.stop(1)
        tdDnodes.start(1)
        self.executeQueries()

        # last_cache_1.sim
        tdSql.execute("create database test2 cachelast 1")
        tdSql.execute("create database test2 cachemodel 'last_row'")
        tdSql.execute("use test2")
        self.insertData()
        self.executeQueries()

        tdSql.execute("alter database test2 cachelast 0")
        tdSql.execute("alter database test2 cachemodel 'none'")
        self.executeQueries()
        tdDnodes.stop(1)
        tdDnodes.start(1)
        self.executeQueries()

        tdSql.execute("alter database test2 cachelast 1")
        tdSql.execute("alter database test2 cachemodel 'last_row'")
        self.executeQueries()
        tdDnodes.stop(1)
        tdDnodes.start(1)
|
@ -142,56 +142,56 @@ class TDTestCase:
|
|||
tdSql.prepare()
|
||||
|
||||
print("============== Step1: last_row_cache_0.sim")
|
||||
tdSql.execute("create database test1 cachelast 0")
|
||||
tdSql.execute("create database test1 cachemodel 'none'")
|
||||
tdSql.execute("use test1")
|
||||
self.insertData()
|
||||
 self.executeQueries()
 self.insertData2()
 self.executeQueries2()

-print("============== Step2: alter database test1 cachelast 1")
-tdSql.execute("alter database test1 cachelast 1")
+print("============== Step2: alter database test1 cachemodel 'last_row'")
+tdSql.execute("alter database test1 cachemodel 'last_row'")
 self.executeQueries2()

-print("============== Step3: alter database test1 cachelast 2")
-tdSql.execute("alter database test1 cachelast 2")
+print("============== Step3: alter database test1 cachemodel 'last_value'")
+tdSql.execute("alter database test1 cachemodel 'last_value'")
 self.executeQueries2()

-print("============== Step4: alter database test1 cachelast 3")
-tdSql.execute("alter database test1 cachelast 3")
+print("============== Step4: alter database test1 cachemodel 'both'")
+tdSql.execute("alter database test1 cachemodel 'both'")
 self.executeQueries2()


-print("============== Step5: alter database test1 cachelast 0 and restart taosd")
-tdSql.execute("alter database test1 cachelast 0")
+print("============== Step5: alter database test1 cachemodel 'none' and restart taosd")
+tdSql.execute("alter database test1 cachemodel 'none'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step6: alter database test1 cachelast 1 and restart taosd")
-tdSql.execute("alter database test1 cachelast 1")
+print("============== Step6: alter database test1 cachemodel 'last_row' and restart taosd")
+tdSql.execute("alter database test1 cachemodel 'last_row'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step7: alter database test1 cachelast 2 and restart taosd")
-tdSql.execute("alter database test1 cachelast 2")
+print("============== Step7: alter database test1 cachemodel 'last_value' and restart taosd")
+tdSql.execute("alter database test1 cachemodel 'last_value'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step8: alter database test1 cachelast 3 and restart taosd")
-tdSql.execute("alter database test1 cachelast 3")
+print("============== Step8: alter database test1 cachemodel 'both' and restart taosd")
+tdSql.execute("alter database test1 cachemodel 'both'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step9: create database test2 cachelast 1")
-tdSql.execute("create database test2 cachelast 1")
+print("============== Step9: create database test2 cachemodel 'last_row'")
+tdSql.execute("create database test2 cachemodel 'last_row'")
 tdSql.execute("use test2")
 self.insertData()
 self.executeQueries()
@@ -201,45 +201,45 @@ class TDTestCase:
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step8: alter database test2 cachelast 0")
-tdSql.execute("alter database test2 cachelast 0")
+print("============== Step8: alter database test2 cachemodel 'none'")
+tdSql.execute("alter database test2 cachemodel 'none'")
 self.executeQueries2()

-print("============== Step9: alter database test2 cachelast 1")
-tdSql.execute("alter database test2 cachelast 1")
+print("============== Step9: alter database test2 cachemodel 'last_row'")
+tdSql.execute("alter database test2 cachemodel 'last_row'")
 self.executeQueries2()

-print("============== Step10: alter database test2 cachelast 2")
-tdSql.execute("alter database test2 cachelast 2")
+print("============== Step10: alter database test2 cachemodel 'last_value'")
+tdSql.execute("alter database test2 cachemodel 'last_value'")
 self.executeQueries2()

-print("============== Step11: alter database test2 cachelast 3")
-tdSql.execute("alter database test2 cachelast 3")
+print("============== Step11: alter database test2 cachemodel 'both'")
+tdSql.execute("alter database test2 cachemodel 'both'")
 self.executeQueries2()

-print("============== Step12: alter database test2 cachelast 0 and restart taosd")
-tdSql.execute("alter database test2 cachelast 0")
+print("============== Step12: alter database test2 cachemodel 'none' and restart taosd")
+tdSql.execute("alter database test2 cachemodel 'none'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step13: alter database test2 cachelast 1 and restart taosd")
-tdSql.execute("alter database test2 cachelast 1")
+print("============== Step13: alter database test2 cachemodel 'last_row' and restart taosd")
+tdSql.execute("alter database test2 cachemodel 'last_row'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step14: alter database test2 cachelast 2 and restart taosd")
-tdSql.execute("alter database test2 cachelast 2")
+print("============== Step14: alter database test2 cachemodel 'last_value' and restart taosd")
+tdSql.execute("alter database test2 cachemodel 'last_value'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
 self.executeQueries2()

-print("============== Step15: alter database test2 cachelast 3 and restart taosd")
-tdSql.execute("alter database test2 cachelast 3")
+print("============== Step15: alter database test2 cachemodel 'both' and restart taosd")
+tdSql.execute("alter database test2 cachemodel 'both'")
 self.executeQueries2()
 tdDnodes.stop(1)
 tdDnodes.start(1)
@@ -39,6 +39,6 @@ class DataBoundary:
 self.DB_PARAM_PRECISION_CONFIG = {"create_name": "precision", "query_name": "precision", "vnode_json_key": "", "boundary": ['ms', 'us', 'ns'], "default": "ms"}
 self.DB_PARAM_REPLICA_CONFIG = {"create_name": "replica", "query_name": "replica", "vnode_json_key": "", "boundary": [1], "default": 1}
 self.DB_PARAM_SINGLE_STABLE_CONFIG = {"create_name": "single_stable", "query_name": "single_stable_model", "vnode_json_key": "", "boundary": [0, 1], "default": 0}
-self.DB_PARAM_STRICT_CONFIG = {"create_name": "strict", "query_name": "strict", "vnode_json_key": "", "boundary": {"no_strict": 0, "strict": 1}, "default": "no_strict"}
+self.DB_PARAM_STRICT_CONFIG = {"create_name": "strict", "query_name": "strict", "vnode_json_key": "", "boundary": {"off": 0, "strict": 1}, "default": "off"}
 self.DB_PARAM_VGROUPS_CONFIG = {"create_name": "vgroups", "query_name": "vgroups", "vnode_json_key": "", "boundary": [1, 32], "default": 2}
 self.DB_PARAM_WAL_CONFIG = {"create_name": "wal", "query_name": "wal", "vnode_json_key": "", "boundary": [1, 2], "default": 1}
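The boundary dicts above drive input validation in the test framework; the hunk renames the accepted strict value `no_strict` to `off`. A small, hypothetical validator sketch showing how such a dict gates user input (the function name is illustrative, not framework API):

```python
# Renamed boundary dict from the hunk above: "no_strict" became "off".
DB_PARAM_STRICT_CONFIG = {"create_name": "strict", "query_name": "strict",
                          "boundary": {"off": 0, "strict": 1}, "default": "off"}

def is_valid_strict(value: str) -> bool:
    """Accept only the values enumerated in the boundary dict."""
    return value in DB_PARAM_STRICT_CONFIG["boundary"]
```

After this rename, the pre-3.0 spelling `no_strict` is rejected while `off` passes.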
@@ -54,7 +54,7 @@ TAOS_KEYWORDS = [
 "BOOL", "EQ", "LINEAR", "RESET", "TSERIES",
 "BY", "EXISTS", "LOCAL", "RESTRICT", "UMINUS",
 "CACHE", "EXPLAIN", "LP", "ROW", "UNION",
-"CACHELAST", "FAIL", "LSHIFT", "RP", "UNSIGNED",
+"CACHEMODEL", "FAIL", "LSHIFT", "RP", "UNSIGNED",
 "CASCADE", "FILE", "LT", "RSHIFT", "UPDATE",
 "CHANGE", "FILL", "MATCH", "SCORES", "UPLUS",
 "CLUSTER", "FLOAT", "MAXROWS", "SELECT", "USE",
@@ -87,7 +87,7 @@
 ./test.sh -f tsim/parser/alter__for_community_version.sim
 ./test.sh -f tsim/parser/alter_column.sim
 ./test.sh -f tsim/parser/alter_stable.sim
-# jira ./test.sh -f tsim/parser/auto_create_tb.sim
+# nojira ./test.sh -f tsim/parser/auto_create_tb.sim
 ./test.sh -f tsim/parser/auto_create_tb_drop_tb.sim
 ./test.sh -f tsim/parser/between_and.sim
 ./test.sh -f tsim/parser/binary_escapeCharacter.sim
@@ -40,13 +40,13 @@ print ============= create database
 #database_option: {
 # | BUFFER value [3~16384, default: 96]
 # | PAGES value [64~16384, default: 256]
-# | CACHELAST value [0, 1, 2, 3]
+# | CACHEMODEL value ['none', 'last_row', 'last_value', 'both']
 # | FSYNC value [0 ~ 180000 ms]
 # | KEEP value [duration, 365000]
 # | REPLICA value [1 | 3]
 # | WAL value [1 | 2]

-sql create database db CACHELAST 3 COMP 0 DURATION 240 FSYNC 1000 MAXROWS 8000 MINROWS 10 KEEP 1000 PRECISION 'ns' REPLICA 3 WAL 2 VGROUPS 6 SINGLE_STABLE 1
+sql create database db CACHEMODEL 'both' COMP 0 DURATION 240 FSYNC 1000 MAXROWS 8000 MINROWS 10 KEEP 1000 PRECISION 'ns' REPLICA 3 WAL 2 VGROUPS 6 SINGLE_STABLE 1
 sql show databases
 print rows: $rows
 print $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
@@ -69,7 +69,7 @@ endi
 if $data4_db != 3 then # replica
 return -1
 endi
-if $data5_db != no_strict then # strict
+if $data5_db != off then # strict
 return -1
 endi
 if $data6_db != 345600m then # duration
@@ -102,7 +102,7 @@ endi
 if $data15_db != 0 then # comp
 return -1
 endi
-if $data16_db != 3 then # cachelast
+if $data16_db != both then # cachelast
 return -1
 endi
 if $data17_db != ns then # precision
@@ -333,40 +333,40 @@ sql_error alter database db comp 5
 sql_error alter database db comp -1

 print ============== modify cachelast [0, 1, 2, 3]
-sql alter database db cachelast 2
+sql alter database db cachemodel 'last_value'
 sql show databases
 print cachelast $data16_db
-if $data16_db != 2 then
+if $data16_db != last_value then
 return -1
 endi
-sql alter database db cachelast 1
+sql alter database db cachemodel 'last_row'
 sql show databases
 print cachelast $data16_db
-if $data16_db != 1 then
+if $data16_db != last_row then
 return -1
 endi
-sql alter database db cachelast 0
+sql alter database db cachemodel 'none'
 sql show databases
 print cachelast $data16_db
-if $data16_db != 0 then
+if $data16_db != none then
 return -1
 endi
-sql alter database db cachelast 2
+sql alter database db cachemodel 'last_value'
 sql show databases
 print cachelast $data16_db
-if $data16_db != 2 then
+if $data16_db != last_value then
 return -1
 endi
-sql alter database db cachelast 3
+sql alter database db cachemodel 'both'
 sql show databases
 print cachelast $data16_db
-if $data16_db != 3 then
+if $data16_db != both then
 return -1
 endi

 sql_error alter database db cachelast 4
 sql_error alter database db cachelast 10
 sql_error alter database db cachelast -1
 sql_error alter database db cachelast 'other'

 print ============== modify precision
 sql_error alter database db precision 'ms'
@@ -15,7 +15,7 @@ $tb = $tbPrefix . $i

 print =============== step1
 # quorum precision
-sql create database $db vgroups 8 replica 1 duration 2 keep 10 minrows 80 maxrows 10000 wal 2 fsync 1000 comp 0 cachelast 2 precision 'us'
+sql create database $db vgroups 8 replica 1 duration 2 keep 10 minrows 80 maxrows 10000 wal 2 fsync 1000 comp 0 cachemodel 'last_value' precision 'us'
 sql show databases
 print $data00 $data01 $data02 $data03 $data04 $data05 $data06 $data07 $data08 $data09
@@ -40,7 +40,7 @@ print ============= create database with all options
 # | BUFFER value [3~16384, default: 96]
 # | PAGES value [64~16384, default: 256]
 # | PAGESIZE value [1~16384, default: 4]
-# | CACHELAST value [0, 1, 2, 3, default: 0]
+# | CACHEMODEL value ['none', 'last_row', 'last_value', 'both', default: 'none']
 # | COMP [0 | 1 | 2, default: 2]
 # | DURATION value [60m ~ min(3650d,keep), default: 10d, unit may be minute/hour/day]
 # | FSYNC value [0 ~ 180000 ms, default: 3000]
@@ -89,7 +89,7 @@ if $data4_db != 1 then # replica
 print expect 1, actual: $data4_db
 return -1
 endi
-if $data5_db != no_strict then # strict
+if $data5_db != off then # strict
 return -1
 endi
 if $data6_db != 14400m then # duration
@@ -122,7 +122,7 @@ endi
 if $data15_db != 2 then # comp
 return -1
 endi
-if $data16_db != 0 then # cachelast
+if $data16_db != none then # cachelast
 return -1
 endi
 if $data17_db != ms then # precision
@@ -167,32 +167,32 @@ sql drop database db
 #endi
 #sql drop database db

-print ====> CACHELAST value [0, 1, 2, 3, default: 0]
-sql create database db CACHELAST 1
+print ====> CACHEMODEL value ['none', 'last_row', 'last_value', 'both', default: 'none']
+sql create database db CACHEMODEL 'last_row'
 sql show databases
 print $data0_db $data1_db $data2_db $data3_db $data4_db $data5_db $data6_db $data7_db $data8_db $data9_db $data10_db $data11_db $data12_db $data13_db $data14_db $data15_db $data16_db $data17_db
-if $data16_db != 1 then
+if $data16_db != last_row then
 return -1
 endi
 sql drop database db

-sql create database db CACHELAST 2
+sql create database db CACHEMODEL 'last_value'
 sql show databases
 print $data0_db $data1_db $data2_db $data3_db $data4_db $data5_db $data6_db $data7_db $data8_db $data9_db $data10_db $data11_db $data12_db $data13_db $data14_db $data15_db $data16_db $data17_db
-if $data16_db != 2 then
+if $data16_db != last_value then
 return -1
 endi
 sql drop database db

-sql create database db CACHELAST 3
+sql create database db CACHEMODEL 'both'
 sql show databases
 print $data0_db $data1_db $data2_db $data3_db $data4_db $data5_db $data6_db $data7_db $data8_db $data9_db $data10_db $data11_db $data12_db $data13_db $data14_db $data15_db $data16_db $data17_db
-if $data16_db != 3 then
+if $data16_db != both then
 return -1
 endi
 sql drop database db
-sql_error create database db CACHELAST 4
-sql_error create database db CACHELAST -1
+sql_error create database db CACHEMODEL 'other'
+sql_error create database db CACHEMODEL '-1'

 print ====> COMP [0 | 1 | 2, default: 2]
 sql create database db COMP 1
@@ -184,23 +184,26 @@ if $data(3)[8] != 涛思数据3 then
 return -1
 endi

-sql select count(*), first(c9) from $stb group by t1 order by t1 asc slimit 2 soffset 1
-if $rows != 2 then
+sql select t1, count(*), first(c9) from $stb partition by t1 order by t1 asc slimit 3
+if $rows != 3 then
 return -1
 endi
-if $data00 != 1 then
+if $data(1)[1] != 1 then
 return -1
 endi
-if $data01 != 涛思数据2 then
+if $data(1)[2] != 涛思数据1 then
 return -1
 endi
-if $data02 != 2 then
+if $data(2)[1] != 1 then
 return -1
 endi
-if $data11 != 涛思数据3 then
+if $data(2)[2] != 涛思数据2 then
 return -1
 endi
-if $data12 != 3 then
+if $data(3)[1] != 1 then
 return -1
 endi
+if $data(3)[2] != 涛思数据3 then
+return -1
+endi
@@ -248,23 +251,26 @@ if $data(3)[8] != 涛思数据3 then
 return -1
 endi

-sql select count(*), first(c9) from $stb group by t1 order by t1 asc slimit 2 soffset 1
-if $rows != 2 then
+sql select t1, count(*), first(c9) from $stb partition by t1 order by t1 asc slimit 3
+if $rows != 3 then
 return -1
 endi
-if $data00 != 1 then
+if $data(1)[1] != 1 then
 return -1
 endi
-if $data01 != 涛思数据2 then
+if $data(1)[2] != 涛思数据1 then
 return -1
 endi
-if $data02 != 2 then
+if $data(2)[1] != 1 then
 return -1
 endi
-if $data11 != 涛思数据3 then
+if $data(2)[2] != 涛思数据2 then
 return -1
 endi
-if $data12 != 3 then
+if $data(3)[1] != 1 then
 return -1
 endi
+if $data(3)[2] != 涛思数据3 then
+return -1
+endi
@@ -7,7 +7,7 @@ print ======================== dnode1 start

 $db = testdb
 sql drop database if exists $db
-sql create database $db cachelast 2
+sql create database $db cachemodel 'last_value'
 sql use $db

 sql create stable st2 (ts timestamp, f1 int, f2 double, f3 binary(10), f4 timestamp) tags (id int)
@@ -8,7 +8,7 @@ print ======================== dnode1 start

 $db = testdb
 sql drop database if exists $db
-sql create database $db cachelast 2
+sql create database $db cachemodel 'last_value'
 sql use $db

 $table1 = table_name
@@ -366,4 +366,143 @@ if $data32 != 8 then
 goto loop1
 endi

+sql drop database IF EXISTS test2;
+sql drop stream IF EXISTS streams21;
+sql drop stream IF EXISTS streams22;
+
+sql create database test2 vgroups 2;
+sql use test2;
+sql create stable st(ts timestamp, a int, b int, c int, d double) tags(ta int,tb int,tc int);
+sql create table t1 using st tags(1,1,1);
+sql create table t2 using st tags(2,2,2);
+
+sql create stream streams21 trigger at_once into streamt as select _wstart, count(*) c1, sum(a) c3 , max(b) c4, min(c) c5 from t1 interval(10s);
+sql create stream streams22 trigger at_once into streamt2 as select _wstart, count(*) c1, sum(a) c3 , max(b) c4, min(c) c5 from st interval(10s);
+
+sql insert into t1 values(1648791213000,1,1,1,1.0);
+sql insert into t1 values(1648791223001,2,2,2,1.1);
+sql insert into t1 values(1648791233002,3,3,3,2.1);
+sql insert into t1 values(1648791243003,4,4,4,3.1);
+sql insert into t1 values(1648791213004,4,5,5,4.1);
+
+sql insert into t2 values(1648791213000,1,6,6,1.0);
+sql insert into t2 values(1648791223001,2,7,7,1.1);
+sql insert into t2 values(1648791233002,3,8,8,2.1);
+sql insert into t2 values(1648791243003,4,9,9,3.1);
+sql insert into t2 values(1648791213004,4,10,10,4.1);
+
+$loop_count = 0
+
+loop2:
+sleep 300
+
+$loop_count = $loop_count + 1
+if $loop_count == 10 then
+return -1
+endi
+
+sql select * from streamt;
+
+# row 0
+if $data01 != 2 then
+print =====data01=$data01
+goto loop2
+endi
+
+if $data02 != 5 then
+print =====data02=$data02
+goto loop2
+endi
+
+# row 1
+if $data11 != 1 then
+print =====data11=$data11
+goto loop2
+endi
+
+if $data12 != 2 then
+print =====data12=$data12
+goto loop2
+endi
+
+# row 2
+if $data21 != 1 then
+print =====data21=$data21
+goto loop2
+endi
+
+if $data22 != 3 then
+print =====data22=$data22
+goto loop2
+endi
+
+# row 3
+if $data31 != 1 then
+print =====data31=$data31
+goto loop2
+endi
+
+if $data32 != 4 then
+print =====data32=$data32
+goto loop2
+endi
+
+print step 6
+
+$loop_count = 0
+
+loop3:
+sleep 300
+
+$loop_count = $loop_count + 1
+if $loop_count == 10 then
+return -1
+endi
+
+sql select * from streamt2;
+
+# row 0
+if $data01 != 4 then
+print =====data01=$data01
+# goto loop3
+endi
+
+if $data02 != 10 then
+print =====data02=$data02
+goto loop3
+endi
+
+# row 1
+if $data11 != 2 then
+print =====data11=$data11
+goto loop3
+endi
+
+if $data12 != 4 then
+print =====data12=$data12
+goto loop3
+endi
+
+# row 2
+if $data21 != 2 then
+print =====data21=$data21
+goto loop3
+endi
+
+if $data22 != 6 then
+print =====data22=$data22
+goto loop3
+endi
+
+# row 3
+if $data31 != 2 then
+print =====data31=$data31
+goto loop3
+endi
+
+if $data32 != 8 then
+print =====data32=$data32
+goto loop3
+endi
+
 system sh/stop_dnodes.sh
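The values the stream test polls for can be derived by hand: with `interval(10s)`, each row falls into the 10-second window containing its timestamp, so the two t1 rows at 1648791213000 and 1648791213004 share one window. A small sketch recomputing the per-window (count, sum(a)) pairs checked against `streamt`:

```python
from collections import defaultdict

# Rows the script inserts into t1: (timestamp in ms, column a).
rows = [(1648791213000, 1), (1648791223001, 2), (1648791233002, 3),
        (1648791243003, 4), (1648791213004, 4)]

# interval(10s): a row belongs to the 10-second window containing its timestamp.
windows = defaultdict(list)
for ts, a in rows:
    windows[ts - ts % 10000].append(a)

# (count, sum(a)) per window, oldest first -- row 0 is (2, 5) because the
# two timestamps ending in 13000 and 13004 share the window at 1648791210000,
# matching the $data01/$data02 checks in the loop above.
expected = [(len(v), sum(v)) for _, v in sorted(windows.items())]
```

The same arithmetic over t1 and t2 together yields (4, 10), (2, 4), (2, 6), (2, 8), which is what the `streamt2` loop polls for.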
@@ -1,87 +1,50 @@
 system sh/stop_dnodes.sh
 system sh/deploy.sh -n dnode1 -i 1
 system sh/cfg.sh -n dnode1 -c debugflag -v 131
-system sh/exec.sh -n dnode1 -s start
+system sh/exec.sh -n dnode1 -s start -v
 sql connect

-print =============== step1: create drop show dnodes
-$x = 0
-step1:
-$x = $x + 1
-sleep 1000
-if $x == 10 then
-print ---> dnode not ready!
-return -1
-endi
-sql show dnodes
-print ---> $data00 $data01 $data02 $data03 $data04 $data05
-if $rows != 1 then
+print ======== step1
+sql drop database if exists db1;
+sql create database db1 vgroups 3;
+sql use db1;
+sql create stable st1 (ts timestamp, f1 int, f2 binary(200)) tags(t1 int);
+sql create table tb1 using st1 tags(1);
+sql insert into tb1 values ('2022-07-07 10:01:01', 11, "aaa");
+sql insert into tb1 values ('2022-07-07 11:01:02', 12, "bbb");
+sql create table tb2 using st1 tags(2);
+sql insert into tb2 values ('2022-07-07 10:02:01', 21, "aaa");
+sql insert into tb2 values ('2022-07-07 11:02:02', 22, "bbb");
+sql create table tb3 using st1 tags(3);
+sql insert into tb3 values ('2022-07-07 10:03:01', 31, "aaa");
+sql insert into tb3 values ('2022-07-07 11:03:02', 32, "bbb");
+sql create table tb4 using st1 tags(4);
+
+sql insert into tb4 select * from tb1;
+
-goto _OVER
+
+sql select * from tb4;
+if $rows != 2 then
 return -1
 endi
-if $data(1)[4] != ready then
-goto step1
-endi

-print =============== step2: create db
-sql create database d1 vgroups 3 buffer 3
-sql show databases
-sql use d1
-sql show vgroups

-print =============== step3: create show stable, include all type
-sql create table if not exists stb (ts timestamp, c1 bool, c2 tinyint, c3 smallint, c4 int, c5 bigint, c6 float, c7 double, c8 binary(16), c9 nchar(16), c10 timestamp, c11 tinyint unsigned, c12 smallint unsigned, c13 int unsigned, c14 bigint unsigned) tags (t1 bool, t2 tinyint, t3 smallint, t4 int, t5 bigint, t6 float, t7 double, t8 binary(16), t9 nchar(16), t10 timestamp, t11 tinyint unsigned, t12 smallint unsigned, t13 int unsigned, t14 bigint unsigned)
-sql create stable if not exists stb_1 (ts timestamp, c1 int) tags (j int)
-sql create table stb_2 (ts timestamp, c1 int) tags (t1 int)
-sql create stable stb_3 (ts timestamp, c1 int) tags (t1 int)
-sql show stables
-if $rows != 4 then
+sql insert into tb4 select ts,f1,f2 from st1;
+sql select * from tb4;
+if $rows != 6 then
 return -1
 endi

-print =============== step4: ccreate child table
-sql create table c1 using stb tags(true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql create table c2 using stb tags(false, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 2', 'child tbl 2', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql show tables
-if $rows != 2 then
+sql create table tba (ts timestamp, f1 binary(10), f2 bigint, f3 double);
+sql_error insert into tba select * from tb1;
+sql insert into tba (ts,f2,f1) select * from tb1;
+sql select * from tba;
+if $rows != 2 then
 return -1
 endi
+sql create table tbb (ts timestamp, f1 binary(10), f2 bigint, f3 double);
+sql insert into tbb (f2,f1,ts) select f1+1,f2,ts+3 from tb2;
+sql select * from tbb;
+if $rows != 2 then
+return -1
+endi

-print =============== step5: insert data
-sql insert into c1 values(now-1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c1 values(now+0s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+2s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c2 values(now-1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c2 values(now+0s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+2s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)

-print =============== step6: alter insert
-sql insert into c3 using stb tags(true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) values(now-1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c3 using stb tags(true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) values(now+0s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)

-print =============== restart
-system sh/exec.sh -n dnode1 -s stop -x SIGINT
-system sh/exec.sh -n dnode1 -s start -v

-print =============== stepa: query data
-sql select * from c1
-sql select * from stb
-sql select * from stb_1
-sql select ts, c1, c2, c3 from c1
-sql select ts, c1, c2, c3 from stb
-sql select ts, c1 from stb_2
-sql select ts, c1, t1 from c1
-sql select ts, c1, t1 from stb
-sql select ts, c1, t1 from stb_2

-print =============== stepb: count
-sql select count(*) from c1;
-sql select count(*) from stb;
-sql select count(ts), count(c1), count(c2), count(c3) from c1
-sql select count(ts), count(c1), count(c2), count(c3) from stb

-print =============== stepc: func
-sql select first(ts), first(c1), first(c2), first(c3) from c1
-sql select min(c2), min(c3), min(c4) from c1
-sql select max(c2), max(c3), max(c4) from c1
-sql select sum(c2), sum(c3), sum(c4) from c1

-_OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
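The `insert into tbb (f2,f1,ts) select f1+1,f2,ts+3 from tb2` case above exercises positional mapping between the target column list and the select list. An analogous illustration in SQLite (via Python's stdlib `sqlite3`, not TDengine, so types and timestamps are simplified) of the same column-mapping rule:

```python
import sqlite3

# The target column list (f2, f1, ts) is matched positionally against the
# select list (f1+1, f2, ts+3): f2 receives f1+1, f1 receives f2, ts gets ts+3.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb2 (ts INTEGER, f1 INTEGER, f2 TEXT)")
con.execute("CREATE TABLE tbb (ts INTEGER, f1 TEXT, f2 INTEGER)")
con.executemany("INSERT INTO tb2 VALUES (?, ?, ?)",
                [(100, 21, "aaa"), (200, 22, "bbb")])
con.execute("INSERT INTO tbb (f2, f1, ts) SELECT f1+1, f2, ts+3 FROM tb2")
rows = con.execute("SELECT ts, f1, f2 FROM tbb ORDER BY ts").fetchall()
# rows -> [(103, 'aaa', 22), (203, 'bbb', 23)]
```

This is why the schema-mismatched `insert into tba select * from tb1` is expected to fail (`sql_error`) while the column-listed variants succeed.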
@@ -96,4 +59,4 @@ endi

 if $system_content == $null then
 return -1
 endi
-endi
@@ -23,62 +23,65 @@ if $data(1)[4] != ready then
 endi

 print =============== step2: create db
-sql create database d1 vgroups 3 buffer 3
+sql create database d1 vgroups 2 buffer 3
 sql show databases
 sql use d1
 sql show vgroups

-print =============== step3: create show stable, include all type
-sql create table if not exists stb (ts timestamp, c1 bool, c2 tinyint, c3 smallint, c4 int, c5 bigint, c6 float, c7 double, c8 binary(16), c9 nchar(16), c10 timestamp, c11 tinyint unsigned, c12 smallint unsigned, c13 int unsigned, c14 bigint unsigned) tags (t1 bool, t2 tinyint, t3 smallint, t4 int, t5 bigint, t6 float, t7 double, t8 binary(16), t9 nchar(16), t10 timestamp, t11 tinyint unsigned, t12 smallint unsigned, t13 int unsigned, t14 bigint unsigned)
-sql create stable if not exists stb_1 (ts timestamp, c1 int) tags (j int)
-sql create table stb_2 (ts timestamp, c1 int) tags (t1 int)
-sql create stable stb_3 (ts timestamp, c1 int) tags (t1 int)
+print =============== step3: create show stable
+sql create table if not exists stb (ts timestamp, c1 int, c2 float, c3 double) tags (t1 int unsigned)
 sql show stables
-if $rows != 4 then
+if $rows != 1 then
 return -1
 endi

-print =============== step4: ccreate child table
-sql create table c1 using stb tags(true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql create table c2 using stb tags(false, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 2', 'child tbl 2', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
+print =============== step4: create show table
+sql create table ct1 using stb tags(1000)
+sql create table ct2 using stb tags(2000)
+sql create table ct3 using stb tags(3000)
 sql show tables
-if $rows != 2 then
+if $rows != 3 then
 return -1
 endi

 print =============== step5: insert data
-sql insert into c1 values(now-1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c1 values(now+0s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+2s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c2 values(now-1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c2 values(now+0s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) (now+2s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
+sql insert into ct1 values(now+0s, 10, 2.0, 3.0)
+sql insert into ct1 values(now+1s, 11, 2.1, 3.1)(now+2s, -12, -2.2, -3.2)(now+3s, -13, -2.3, -3.3)
+sql insert into ct2 values(now+0s, 10, 2.0, 3.0)
+sql insert into ct2 values(now+1s, 11, 2.1, 3.1)(now+2s, -12, -2.2, -3.2)(now+3s, -13, -2.3, -3.3)
+sql insert into ct3 values('2021-01-01 00:00:00.000', 10, 2.0, 3.0)

-print =============== step6: alter insert
-sql insert into c3 using stb tags(true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) values(now-1s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-sql insert into c3 using stb tags(true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40) values(now+0s, true, -1, -2, -3, -4, -6.0, -7.0, 'child tbl 1', 'child tbl 1', '2022-02-25 18:00:00.000', 10, 20, 30, 40)
-
-goto _OVER
-print =============== stepa: query data
-sql select * from c1
+print =============== step6: query data
+sql select * from ct1
 sql select * from stb
-sql select * from stb_1
-sql select ts, c1, c2, c3 from c1
+sql select c1, c2, c3 from ct1
 sql select ts, c1, c2, c3 from stb
-sql select ts, c1 from stb_2
-sql select ts, c1, t1 from c1
-sql select ts, c1, t1 from stb
-sql select ts, c1, t1 from stb_2

-print =============== stepb: count
-#sql select count(*) from c1;
-#sql select count(*) from stb;
-#sql select count(ts), count(c1), count(c2), count(c3) from c1
-#sql select count(ts), count(c1), count(c2), count(c3) from stb
+print =============== step7: count
+sql select count(*) from ct1;
+sql select count(*) from stb;
+sql select count(ts), count(c1), count(c2), count(c3) from ct1
+sql select count(ts), count(c1), count(c2), count(c3) from stb

-print =============== stepc: func
-#sql select first(ts), first(c1), first(c2), first(c3) from c1
-#sql select min(c1), min(c2), min(c3) from c1
-#sql select max(c1), max(c2), max(c3) from c1
-#sql select sum(c1), sum(c2), sum(c3) from c1
+print =============== step8: func
+sql select first(ts), first(c1), first(c2), first(c3) from ct1
+sql select min(c1), min(c2), min(c3) from ct1
+sql select max(c1), max(c2), max(c3) from ct1
+sql select sum(c1), sum(c2), sum(c3) from ct1

+print =============== step9: insert select
+sql create table ct4 using stb tags(4000);
+sql insert into ct4 select * from ct1;
+sql select * from ct4;
+sql insert into ct4 select ts,c1,c2,c3 from stb;
+
+sql create table tb1 (ts timestamp, c1 int, c2 float, c3 double);
+sql insert into tb1 (ts, c1, c2, c3) select * from ct1;
+sql select * from tb1;
+
+sql create table tb2 (ts timestamp, f1 binary(10), c1 int, c2 double);
+sql insert into tb2 (c2, c1, ts) select c2+1, c1, ts+3 from ct2;
+sql select * from tb2;

 _OVER:
 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -36,22 +36,31 @@ class TDTestCase:

 def illegal_params(self):

-illegal_params = ["1","0","NULL","None","False","True" ,"keep","now" ,"*" , "," ,"_" , "abc" ,"keep"]
+illegal_params = ["1","0","NULL","False","True" ,"keep","now" ,"*" , "," ,"_" , "abc" ,"keep"]

 for value in illegal_params:

-tdSql.error("create database testdb replica 1 cachelast '%s' " %value)
+tdSql.error("create database testdb replica 1 cachemodel '%s' " %value)

 unexpected_numbers = [-1 , 0.0 , 3.0 , 4, 10 , 100]

 for number in unexpected_numbers:
-tdSql.error("create database testdb replica 1 cachelast %s " %number)
+tdSql.error("create database testdb replica 1 cachemodel %s " %number)

+def getCacheModelStr(self, value):
+numbers = {
+0 : "none",
+1 : "last_row",
+2 : "last_value",
+3 : "both"
+}
+return numbers.get(value, 'other')

 def prepare_datas(self):
 for i in range(4):
-tdSql.execute("create database test_db_%d replica 1 cachelast %d " %(i,i))
-tdSql.execute("use test_db_%d"%i)
+str = self.getCacheModelStr(i)
+tdSql.execute("create database testdb_%s replica 1 cachemodel '%s' " %(str, str))
+tdSql.execute("use testdb_%s"%str)
 tdSql.execute("create stable st(ts timestamp , c1 int ,c2 float ) tags(ind int) ")
 tdSql.execute("create table tb1 using st tags(1) ")
 tdSql.execute("create table tb2 using st tags(2) ")
@@ -81,10 +90,10 @@ class TDTestCase:
        # cache_last_set value
        for k , v in cache_lasts.items():

            if k.split("_")[-1]==str(v):
                tdLog.info(" database %s cache_last value check pass, value is %d "%(k,v) )
            if k=="testdb_"+str(v):
                tdLog.info(" database %s cache_last value check pass, value is %s "%(k,v) )
            else:
                tdLog.exit(" database %s cache_last value check fail, value is %d "%(k,v) )
                tdLog.exit(" database %s cache_last value check fail, value is %s "%(k,v) )

        # # check storage layer implementation
@@ -132,13 +141,10 @@ class TDTestCase:

    def run(self): # sourcery skip: extract-duplicate-method, remove-redundant-fstring

        self.illegal_params()
        self.prepare_datas()
        self.check_cache_last_sets()
        self.restart_check_cache_last_sets()

    def stop(self):
        tdSql.close()
@@ -25,7 +25,7 @@ class TDTestCase:
    def insert_datas_and_check_abs(self ,tbnums , rownums , time_step ):
        tdLog.info(" prepare datas for auto check abs function ")

        tdSql.execute(" create database test cachelast 1 ")
        tdSql.execute(" create database test cachemodel 'last_row' ")
        tdSql.execute(" use test ")
        tdSql.execute(" create stable stb (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint,\
            c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp) tags (t1 int)")
@@ -63,7 +63,7 @@ class TDTestCase:

    def prepare_datas(self):
        tdSql.execute("create database if not exists db keep 3650 duration 1000 cachelast 1")
        tdSql.execute("create database if not exists db keep 3650 duration 1000 cachemodel 'last_row'")
        tdSql.execute("use db")
        tdSql.execute(
            '''create table stb1
@@ -124,7 +124,7 @@ class TDTestCase:
    def prepare_tag_datas(self):
        # prepare datas
        tdSql.execute(
            "create database if not exists testdb keep 3650 duration 1000 cachelast 1")
            "create database if not exists testdb keep 3650 duration 1000 cachemodel 'last_row'")
        tdSql.execute(" use testdb ")

        tdSql.execute(f" create stable stb1 (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp , uc1 int unsigned,\
@@ -528,7 +528,7 @@ class TDTestCase:
    def check_boundary_values(self):

        tdSql.execute("drop database if exists bound_test")
        tdSql.execute("create database if not exists bound_test cachelast 2")
        tdSql.execute("create database if not exists bound_test cachemodel 'last_value'")
        time.sleep(3)
        tdSql.execute("use bound_test")
        tdSql.execute(
@@ -335,7 +335,7 @@ class TDTestCase:
        # tdSql.checkRows(21)

        # group by
        tdSql.query("select statecount(c1,'GT',1) from ct1 group by c1")
        tdSql.error("select statecount(c1,'GT',1) from ct1 group by c1")
        tdSql.error("select statecount(c1,'GT',1) from ct1 group by tbname")

        # super table
@@ -122,9 +122,9 @@ class TDTestCase:
            tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')

        tdSql.query('show databases;')
        tdSql.checkData(2,5,'no_strict')
        tdSql.error('alter database db strict 0')
        # tdSql.execute('alter database db strict 1')
        tdSql.checkData(2,5,'off')
        tdSql.error("alter database db strict 'off'")
        # tdSql.execute('alter database db strict 'on'')
        # tdSql.query('show databases;')
        # tdSql.checkData(2,5,'strict')
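The Python test changes above all follow one migration: the 2.x integer `cachelast` database option becomes the 3.0 `cachemodel` string. The integer-to-name mapping encoded by the new `getCacheModelStr` helper can be restated standalone as below; the `migrate_create_db` wrapper is a hypothetical illustration of how the tests derive the new-style statement, not part of the diff:

```python
# Mapping from the old integer cachelast values to the 3.0 cachemodel
# names, as encoded by getCacheModelStr in the diff above.
CACHE_MODELS = {
    0: "none",
    1: "last_row",
    2: "last_value",
    3: "both",
}

def cache_model_str(value):
    """Return the cachemodel name for an old cachelast integer."""
    return CACHE_MODELS.get(value, "other")

def migrate_create_db(db_prefix, cachelast):
    """Hypothetical helper: build a 3.0-style create-database statement
    (mirroring the testdb_%s pattern in prepare_datas) from an old
    integer cachelast setting."""
    model = cache_model_str(cachelast)
    return "create database %s_%s replica 1 cachemodel '%s'" % (db_prefix, model, model)

print(migrate_create_db("testdb", 1))
# create database testdb_last_row replica 1 cachemodel 'last_row'
```

This is also why `illegal_params` now rejects bare integers for `cachemodel`: the 3.0 option takes only the quoted names above.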