Merge branch '3.0' of https://github.com/taosdata/TDengine into fix/hzcheng_3.0
commit c4593d68ee
@@ -110,3 +110,4 @@ contrib/*
!contrib/CMakeLists.txt
!contrib/test
sql
debug*/
@@ -22,7 +22,7 @@ title: Cluster Deployment

### Step 2

-It is recommended to disable the firewall on all physical nodes, or at least to make sure that TCP and UDP on ports 6030-6042 are open. It is strongly recommended to disable the firewall first and configure the ports after the cluster has been set up;
+Ensure that all hosts in the cluster can communicate with each other over TCP/UDP on ports 6030-6042.

### Step 3

@@ -80,7 +80,7 @@ taos --dump-config

| Note | Before version 2.4.0.0 (exclusive), the RESTful service was provided by taosd, with default port 6041; in version 2.4.0.0 and later it is provided by taosAdapter, with default port 6041 |

:::note
-TDengine uses 13 consecutive TCP and UDP port numbers starting from serverPort; be sure to open them in the firewall. With the default configuration this means opening the 13 ports from 6030 to 6042, for both TCP and UDP. (See the table below for port details)
+Ensure that all hosts in the cluster can communicate with each other over TCP/UDP on ports 6030-6042. (See the table below for port details)
:::
| Protocol | Default Port | Description | How to Modify |
| :--- | :-------- | :---------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------- |
@@ -590,7 +590,7 @@ The valid value of charset is UTF-8.

| Applicable | Server only |
| Meaning | Maximum number of vnodes that can be used in each DB |
| Value Range | 0-8192 |
-| Default Value | |
+| Default Value | 0 |

### maxTablesPerVnode

@@ -41,7 +41,7 @@ The logical structure of TDengine's distributed architecture is shown below:

- The RESTful service that a cluster data node provides externally occupies one TCP port, which is serverPort+11.
- Communication between data nodes and the Arbitrator node inside the cluster occupies one TCP port, which is serverPort+12.

-Therefore, the total port range used by a data node is serverPort to serverPort+12, 13 TCP/UDP ports in total. When deploying, make sure the firewall keeps these ports open. Each data node can be configured with a different serverPort. For port details please refer to [TDengine 2.0 Port Description](/train-faq/faq#port)
+Therefore, the total port range used by a data node is serverPort to serverPort+12, 13 TCP/UDP ports in total. Ensure that all hosts in the cluster can communicate with each other over TCP/UDP on ports 6030-6042. For port details please refer to [TDengine 2.0 Port Description](/train-faq/faq#port)

**External cluster connection:** A TDengine cluster can accommodate one, several, or even thousands of data nodes. An application only needs to connect to any one data node in the cluster; the network parameter required for the connection is the End Point of a data node (its FQDN plus the configured port number). When starting the application taos from the command line CLI, the -h option can be used to specify the data node's FQDN and the -P option its configured port number; if the port is not configured, the TDengine system configuration parameter serverPort is used.

@@ -61,7 +61,7 @@ title: Frequently Asked Questions and Feedback

5. Ping the server FQDN. If there is no response, check your network, your DNS settings, or the system hosts file of the computer where the client runs. If a TDengine cluster is deployed, the client must be able to ping the FQDN of every cluster node.

-6. Check the firewall settings (use ufw status on Ubuntu, firewall-cmd --list-port on CentOS) and confirm that TCP/UDP ports 6030-6042 are open
+6. Check the firewall settings (use ufw status on Ubuntu, firewall-cmd --list-port on CentOS) and ensure that all hosts in the cluster can communicate with each other over TCP/UDP on ports 6030-6042.

7. For JDBC (and similarly ODBC, Python, Go, etc.) connections on Linux, make sure *libtaos.so* is in the directory */usr/local/taos/driver* and that */usr/local/taos/driver* is in the system library search path *LD_LIBRARY_PATH*

@@ -2,19 +2,26 @@
title: Data Model
---

-The data model employed by TDengine is similar to a relational database, you need to create databases and tables. Design the data model based on your own application scenarios and you should design the STable (abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntax details.
+The data model employed by TDengine is similar to that of a relational database. You have to create databases and tables. You must design the data model based on your own business and application requirements. You should design the STable (an abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntactical details.

## Create Database

-The characteristics of data from different data collection points may be different, such as collection frequency, days to keep, number of replicas, data block size, whether it's allowed to update data, etc. For TDengine to operate with the best performance, it's strongly suggested to put the data with different characteristics into different databases because different storage policies can be set for each database. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, compress or not, the time range of the data in single data file, etc. Below is an example of the SQL statement for creating a database.
+The [characteristics of time-series data](https://www.taosdata.com/blog/2019/07/09/86.html) from different data collection points may differ. These characteristics include collection frequency, retention policy and others, and they determine how you create and configure the database. For example, days to keep, number of replicas, data block size, and whether data updates are allowed, among other configurable parameters, would be determined by the characteristics of your data and your business requirements. For TDengine to operate with the best performance, we strongly recommend that you create and configure different databases for data with different characteristics. This allows you, for example, to set up different storage and retention policies. When creating a database, there are many parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, whether compression is enabled, the time range of the data in a single data file and so on. Below is an example of the SQL statement to create a database.

```sql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```

-In the above SQL statement, a database named "power" will be created, the data in it will be kept for 365 days, which means the data older than 365 days will be deleted automatically, a new data file will be created every 10 days, the number of memory blocks is 6, data is allowed to be updated. For more details please refer to [Database](/taos-sql/database).
+In the above SQL statement:
+
+- a database named "power" will be created
+- the data in it will be kept for 365 days, which means that data older than 365 days will be deleted automatically
+- a new data file will be created every 10 days
+- the number of memory blocks is 6
+- data is allowed to be updated
+
+For more details please refer to [Database](/taos-sql/database).

-After creating a database, the current database in use can be switched using SQL command `USE`, for example below SQL statement switches the current database to `power`. Without the current database specified, table name must be preceded with the corresponding database name.
+After creating a database, the current database in use can be switched using the SQL command `USE`. For example, the SQL statement below switches the current database to `power`. Without the current database specified, a table name must be preceded with the corresponding database name.

```sql
USE power;
@@ -30,7 +37,7 @@ USE power;

## Create STable

-In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), the below SQL statement can be used to create the super table.
+In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), the SQL statement below can be used to create the super table.

```sql
CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
@@ -41,15 +48,15 @@ If you are using versions prior to 2.0.15, the `STable` keyword needs to be repl

:::

-Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must be timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The column type can be integer, float, double, string ,etc. Besides, the schema for tags need to be provided, like location and groupId in the example. The tag type can be integer, float, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
+Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must always be a timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The remaining columns can [contain data of type](/taos-sql/data-type/) integer, float, double, string, etc. In addition, the schema for tags, like location and groupId in the example, must be provided. The tag type can be integer, float, string, etc. Tags are essentially the static properties of a data collection point. For example, properties like the location, device type, device group ID and manager ID are tags. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
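
Since the paragraph above notes that tags can be added, removed or updated after a STable is created, here is a minimal sketch of such schema changes (the `deviceType` tag is a hypothetical example; the exact per-version syntax is in the STable reference linked above):

```sql
ALTER STABLE meters ADD TAG deviceType binary(32);   -- add a new tag column
ALTER STABLE meters CHANGE TAG deviceType devType;   -- rename a tag column
ALTER STABLE meters DROP TAG devType;                -- remove a tag column
```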

-For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, multiple STables are required for such kind of device.
+For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For an electrical power system, we need to create a STable for each of meters, transformers, busbars and switches. There may also be multiple kinds of data collection points on a single device. For example, there may be one data collection point for electrical data like current and voltage and another data collection point for environmental data like temperature, humidity and wind direction. Multiple STables are required for these kinds of devices.

At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a data collection point, multiple STables are required. There can be multiple databases in a system, while one or more STables can exist in a database.

## Create Table

-A specific table needs to be created for each data collection point. Similar to RDBMS, table name and schema are required to create a table. Beside, one or more tags can be created for each table. To create a table, a STable needs to be used as template and the values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using below SQL statement.
+A specific table needs to be created for each data collection point. Similar to an RDBMS, a table name and schema are required to create a table. Additionally, one or more tags can be created for each table. To create a table, a STable needs to be used as a template and values need to be specified for the tags. For example, for the meters in [Table 1](/tdinternal/arch#model_table1), the table can be created using the SQL statement below.

```sql
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
@@ -57,17 +64,17 @@ CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);

In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "California.SanFrancisco" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
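
The tag update mentioned above takes a statement of this shape (a minimal sketch reusing the d1001 table from the example; see the Tables reference linked above for the exact syntax):

```sql
ALTER TABLE d1001 SET TAG groupId = 3;  -- change the groupId tag value of subtable d1001
```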

-In TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.
+In the TDengine system, it's recommended to create a table for a data collection point via STable. A table created via STable is called a subtable in some parts of the TDengine documentation. All SQL commands applied on regular tables can be applied on subtables.

:::warning
It's not recommended to create a table in a database while using a STable from another database as template.

:::

:::tip
-It's suggested to use the global unique ID of a data collection point as the table name, for example the device serial number. If there isn't such a unique ID, multiple IDs that are not global unique can be combined to form a global unique ID. It's not recommended to use a global unique ID as tag value.
+It's suggested to use the globally unique ID of a data collection point as the table name. For example, the device serial number could be used as a unique ID. If a unique ID doesn't exist, multiple IDs that are not globally unique can be combined to form a globally unique ID. It's not recommended to use a globally unique ID as a tag value.

## Create Table Automatically

-In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exist.
+In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.

```sql
INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (now, 10.2, 219, 0.32);
@@ -79,6 +86,8 @@ For more details please refer to [Create Table Automatically](/taos-sql/insert#a

## Single Column vs Multiple Column

-A multiple columns data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamp are identical, these metrics can be put in a single STable as columns. However, there is another kind of design, i.e. single column data model, a table is created for each metric, which means a STable is required for each kind of metric. For example, 3 STables are required for current, voltage and phase.
+A multiple-column data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns.

-It's recommended to use a multiple column data model as much as possible because it's better in the performance of inserting or querying rows. In some cases, however, the metrics to be collected vary frequently and correspondingly the STable schema needs to be changed frequently too. In such case, it's more convenient to use single column data model.
+However, there is another kind of design, i.e. the single-column data model, in which a table is created for each metric. This means that a STable is required for each kind of metric. For example, in a single-column model, 3 STables would be required for current, voltage and phase.
+
+It's recommended to use a multiple-column data model as much as possible because insert and query performance is higher. In some cases, however, the collected metrics may vary frequently and so the corresponding STable schema needs to be changed frequently too. In such cases, it's more convenient to use a single-column data model.
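
To make the contrast concrete, the sketch below shows illustrative schemas for the two designs (the multiple-column `meters` STable matches the earlier example; the single-column STable names are hypothetical):

```sql
-- multiple-column model: metrics collected together live in one STable
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float)
  TAGS (location binary(64), groupId int);

-- single-column model: one STable per metric
CREATE STABLE current_st (ts timestamp, value float) TAGS (location binary(64), groupId int);
CREATE STABLE voltage_st (ts timestamp, value int) TAGS (location binary(64), groupId int);
CREATE STABLE phase_st (ts timestamp, value float) TAGS (location binary(64), groupId int);
```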

@@ -15,13 +15,13 @@ import CLine from "./_c_line.mdx";

## Introduction

-A single line of text is used in InfluxDB Line protocol format represents one row of data, each line contains 4 parts as shown below.
+In the InfluxDB Line protocol format, a single line of text is used to represent one row of data. Each line contains 4 parts as shown below.

```
measurement,tag_set field_set timestamp
```

-- `measurement` will be used as the STable name
+- `measurement` will be used as the name of the STable
- `tag_set` will be used as tags, with format like `<tag_key>=<tag_value>,<tag_key>=<tag_value>`
- `field_set` will be used as data columns, with format like `<field_key>=<field_value>,<field_key>=<field_value>`
- `timestamp` is the primary key timestamp corresponding to this row of data
@@ -34,8 +34,8 @@ meters,location=California.LoSangeles,groupid=2 current=13.4,voltage=223,phase=0

:::note

-- All the data in `tag_set` will be converted to ncahr type automatically .
-- Each data in `field_set` must be self-description for its data type. For example 1.2f32 means a value 1.2 of float type, it will be treated as double without the "f" type suffix.
+- All the data in `tag_set` will be converted to nchar type automatically.
+- Each data item in `field_set` must be self-descriptive about its data type. For example 1.2f32 means a value 1.2 of float type. Without the "f" type suffix, it will be treated as type double.
- Multiple kinds of precision can be used for the `timestamp` field. Time precision can be from nanosecond (ns) to hour (h).

:::
@@ -15,7 +15,7 @@ import CTelnet from "./_c_opts_telnet.mdx";

## Introduction

-A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs single column data model, so one line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
+A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single-column data model, so each line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:

```
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
@@ -60,7 +60,7 @@ Please refer to [OpenTSDB Telnet API](http://opentsdb.net/docs/build/html/api_te
</TabItem>
</Tabs>

-2 STables will be crated automatically while each STable has 4 rows of data in the above sample code.
+2 STables will be created automatically and each STable has 4 rows of data in the above sample code.

```cmd
taos> use test;
@@ -47,7 +47,7 @@ Please refer to [OpenTSDB HTTP API](http://opentsdb.net/docs/build/html/api_http

:::note
- In JSON protocol, strings will be converted to nchar type and numeric values will be converted to double type.
-- Only data in array format is accepted, array must be used even there is only one row.
+- Only data in array format is accepted and so an array must be used even if there is only one row.

:::

@@ -20,7 +20,7 @@ import CAsync from "./_c_async.mdx";

## Introduction

-SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc queries. Here is the list of major query functionalities supported by TDengine:
+SQL is used by TDengine as its query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine's CLI `taos` can also be used to execute ad hoc SQL queries. Here is the list of major query functionalities supported by TDengine:

- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
@@ -31,7 +31,7 @@ SQL is used by TDengine as the query language. Application programs can send SQL
- Join query with timestamp alignment
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff

-For example, the SQL statement below can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
+For example, the SQL statement below can be executed in TDengine CLI `taos` to select records with voltage greater than 215 and limit the output to only 2 rows.

```sql
select * from d1001 where voltage > 215 order by ts desc limit 2;
@@ -46,46 +46,46 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
Query OK, 2 row(s) in set (0.001100s)
```

-To meet the requirements of many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
+To meet the requirements of varied use cases, some special functions have been added in TDengine. Some examples are `twa` (Time Weighted Average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
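
As a hedged illustration of these functions, reusing the d1001 table from the earlier examples (note that `twa` generally requires a time range in the `where` clause):

```sql
select twa(current), spread(voltage) from d1001 where ts > now - 1h;
select last_row(*) from d1001;
```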

For detailed query syntax please refer to [Select](/taos-sql/select).

## Aggregation among Tables

-In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection point, and a subtable is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same kind of data collection points. Aggregate functions applicable for tables can be used directly on STables, the syntax is exactly the same.
+In most use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviation for super table), is used in TDengine to represent one type of data collection point, and a subtable is used to represent a specific data collection point of that type. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same type of data collection points. Aggregate functions applicable for tables can be used directly on STables; the syntax is exactly the same.

-In summary, for a STable, its subtables can be aggregated by a simple query on the STable, it's a kind of join operation. But tables belong to different STables can not be aggregated.
+In summary, records across subtables can be aggregated by a simple query on their STable. It is like a join operation. However, tables belonging to different STables cannot be aggregated.

### Example 1

-In TDengine CLI `taos`, use below SQL to get the average voltage of all the meters in California grouped by location.
+In TDengine CLI `taos`, use the SQL below to get the average voltage of all the meters in California grouped by location.

```
taos> SELECT AVG(voltage) FROM meters GROUP BY location;
       avg(voltage)        |           location           |
=============================================================
-      222.000000000       |    California.LoSangeles     |
+      222.000000000       |    California.LosAngeles     |
      219.200000000       |   California.SanFrancisco    |
Query OK, 2 row(s) in set (0.002136s)
```

### Example 2

-In TDengine CLI `taos`, use below SQL to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.
+In TDengine CLI `taos`, use the SQL below to get the number of rows and the maximum current in the past 24 hours from meters whose groupId is 2.

```
taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now - 24h;
-    cunt(*)   |   max(current)   |
+    count(*)  |   max(current)   |
==================================
            5 |             13.4 |
Query OK, 1 row(s) in set (0.002136s)
```

-Join queries are only allowed between the subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they supports STables or not.
+Join queries are only allowed between subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
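
A hedged sketch of such a join, assuming a second subtable d1002 of the same `meters` STable exists (rows are aligned on the timestamp column):

```sql
select d1001.ts, d1001.current, d1002.current
from d1001, d1002
where d1001.ts = d1002.ts;
```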

## Down Sampling and Interpolation

-In IoT use cases, down sampling is widely used to aggregate the data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
+In IoT use cases, down sampling is widely used to aggregate data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.

```
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
@@ -169,7 +169,7 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database

### Asynchronous Query

-Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than sync APIs. Async API works in non-blocking mode, which means an operation can be returned without finishing so that the calling thread can switch to other works to improve the performance of the whole application system. Async APIs perform especially better in the case of poor networks.
+Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than sync APIs. The async API works in non-blocking mode, which means an operation can return without finishing so that the calling thread can switch to other work to improve the performance of the whole application system. Async APIs perform especially better in the case of poor networks.

Please note that async query can only be used with a native connection.

@@ -1,12 +1,12 @@
---
sidebar_label: Continuous Query
-description: "Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing."
+description: "Continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing."
title: "Continuous Query"
---

-Continuous query is a query that's executed automatically according to a predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
+A continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window. It is essentially simplified, time driven, stream computing. A continuous query can be performed on a table or STable in TDengine. The results of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of the time window and the forward sliding time need to be specified with the parameters `INTERVAL` and `SLIDING` respectively.

-Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
+A continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With a continuous query, the result can be generated based on a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
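
As a hedged sketch of what such a definition looks like (the `avg_vol` result table shown later in this document suggests a statement of roughly this shape; treat it as illustrative rather than as the document's own example):

```sql
create table avg_vol as
  select avg(voltage) from meters interval(1m) sliding(30s);
```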

There are some differences between continuous query in TDengine and time window computation in stream computing:

@@ -35,7 +35,7 @@ In this section the use case of meters will be used to introduce how to use cont
```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
create table D1001 using meters tags ("California.SanFrancisco", 2);
-create table D1002 using meters tags ("California.LoSangeles", 2);
+create table D1002 using meters tags ("California.LosAngeles", 2);
```

The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.

@@ -68,7 +68,7 @@ taos> select * from avg_vol;
   2020-07-29 13:39:00.000 |             223.0800000 |
```

-Please note that the minimum allowed time window is 10 milliseconds, and no upper limit.
+Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.

It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will be performed indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below will be started from now and terminated one hour later.
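
A hedged sketch of such a bounded continuous query (illustrative only; the document's own statement follows this paragraph in the full file):

```sql
create table avg_vol as
  select avg(voltage) from meters
  where ts > now and ts <= now + 1h
  interval(1m) sliding(30s);
```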

@@ -1,6 +1,6 @@
---
sidebar_label: Subscription
-description: "Lightweight service for data subscription and pushing, the time series data inserted into TDengine continuously can be pushed automatically to the subscribing clients."
+description: "Lightweight service for data subscription and publishing. Time series data inserted into TDengine continuously can be pushed automatically to subscribing clients."
title: Data Subscription
---

@@ -16,9 +16,9 @@ import CDemo from "./_sub_c.mdx";

## Introduction

-Due to the nature of time series data, data inserting in TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, so each table in TDengine can essentially be considered as a message queue.
+Due to the nature of time series data, data insertion into TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, and so each table in TDengine can essentially be considered as a message queue.

-A lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side, the client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side.
+A lightweight service for data subscription and publishing is built into TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side. The client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.

There are 3 major APIs related to subscription provided in the TDengine client driver.

@@ -32,7 +32,7 @@ For more details about these APIs please refer to [C/C++ Connector](/reference/c

If we want to get a notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:

-The first way is to query on each sub table and record the last timestamp matching the criteria, then after some time query on the data later than recorded timestamp and repeat this process. The SQL statements for this way are as below.
+The first way is to query each sub table and record the last timestamp matching the criteria. Then after some time, query the data later than the recorded timestamp, and repeat this process. The SQL statements for this way are as below.

```sql
select * from D1001 where ts > {last_timestamp1} and current > 10;
@@ -50,7 +50,7 @@ select * from meters where ts > {last_timestamp} and current > 10;

However, this presents a new problem in how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.

-All the problems mentioned above can be resolved thoroughly using subscription provided by TDengine.
+All the problems mentioned above can be resolved easily using the subscription functionality provided by TDengine.

The first step is to create subscription using `taos_subscribe`.

@@ -65,31 +65,33 @@ if (async) {
}
```

-The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of variable `async` is determined from the CLI input, then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, and async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and pass the data to `subscribe_callback` for processing, `subscribe_callback` is a call back function provided by the client program and it's suggested not to do time consuming operation in the call back function.
+The subscription in TDengine can be either synchronous or asynchronous. In the above sample code, the value of the variable `async` is determined from the CLI input, then it's used to create either an async or sync subscription. Sync subscription means the client program needs to invoke `taos_consume` to retrieve data, and async subscription means another thread created by `taos_subscribe` internally invokes `taos_consume` to retrieve data and pass the data to `subscribe_callback` for processing. `subscribe_callback` is a callback function provided by the client program. You should not perform time consuming operations in the callback function.

-The parameter `taos` is an established connection. There is nothing special in sync subscription mode. In async subscription, it should be exclusively by current thread, otherwise unpredictable error may occur.
+The parameter `taos` is an established connection. Nothing special needs to be done for thread safety for synchronous subscription. For asynchronous subscription, the taos_subscribe function should be called exclusively by the current thread, to avoid unpredictable errors.

-The parameter `sql` is a `select` statement in which `where` clause can be used to specify filter conditions. In our example, the data whose current exceeds 10A needs to be subscribed like below SQL statement:
+The parameter `sql` is a `select` statement in which the `where` clause can be used to specify filter conditions. In our example, we can subscribe to the records in which the current exceeds 10A, with the following SQL statement:

```sql
select * from meters where current > 10;
```

-Please note that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added:
+Please note that all the data will be processed because no start time is specified. If we only want to process data for the past day, a time related condition can be added:

```sql
select * from meters where ts > now - 1d and current > 10;
```

-The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on the client side.
+The parameter `topic` is the name of the subscription. The client application must guarantee that the name is unique. However, it doesn't have to be globally unique because subscription is implemented in the APIs on the client side.

-If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again.
+If the subscription named as `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named as `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, the parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken.

-The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
+If the value of `restart` is **true** (i.e. a non-zero value), data will be retrieved from the beginning. If it is **false** (i.e. zero), the data already consumed before will not be processed again.
+
+The last parameter of `taos_subscribe` is the polling interval in units of milliseconds. In sync mode, if the time difference between two continuous invocations of `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.

The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is simply ignored in sync mode.

-After a subscription is created, its data can be consumed and processed, below is the sample code of how to consume data in sync mode, in the else part if `if (async)`.
+After a subscription is created, its data can be consumed and processed. Shown below is the sample code to consume data in sync mode, in the else condition of `if (async)`.

```c
if (async) {
@@ -106,7 +108,7 @@ if (async) {
}
```

-In the above sample code, there is an infinite loop, each time carriage return is entered `taos_consume` is invoked, the return value of `taos_consume` is the selected result set, exactly as the input of `taos_use_result`, in the above sample `print_result` is used instead to simplify the sample. Below is the implementation of `print_result`.
+In the above sample code in the else condition, there is an infinite loop. Each time carriage return is entered, `taos_consume` is invoked. The return value of `taos_consume` is the selected result set. In the above sample, `print_result` is used to simplify the printing of the result set. Below is the implementation of `print_result`.

```c
void print_result(TAOS_RES* res, int blockFetch) {
@@ -133,9 +135,9 @@ void print_result(TAOS_RES* res, int blockFetch) {
}
```

-In the above code `taos_print_row` is used to process the data consumed. All the matching rows will be printed.
+In the above code `taos_print_row` is used to process the data consumed. All matching rows are printed.

-In async mode, the data consuming is simpler as below.
+In async mode, consuming data is simpler as shown below.

```c
void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
@@ -175,7 +177,7 @@ Then, this row of data will be shown by the example program on the first termina

## Examples

-Below example program demonstrates how to subscribe the data rows whose current exceeds 10A using connectors.
+The example program below demonstrates how to subscribe, using connectors, to data rows in which current exceeds 10A.

### Prepare Data

@@ -250,7 +252,7 @@ taos> use power;
taos> insert into d1001 values(now, 12.4, 220, 1);
```

-Because the current in inserted row exceeds 10A, it will be consumed by the example program.
+Because the current in the inserted row exceeds 10A, it will be consumed by the example program.

```
ts: 1651146662805 current: 12.4 voltage: 220 phase: 1 location: California.SanFrancisco groupid: 2
@@ -6,29 +6,35 @@ title: Deployment

### Step 1

-The FQDN of all hosts needs to be setup properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be confirmed that each FQDN can be accessed (by ping, for example) from any other hosts.
+The FQDN of all hosts must be set up properly. For example, FQDNs may have to be configured in the /etc/hosts file on each host. You must confirm that each FQDN can be accessed from any other host, for example by using the `ping` command.

-On each host the command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, need to be checked and revised to make any two hosts accessible to each other.
+To get the hostname on any host, the command `hostname -f` can be executed. The `ping <FQDN>` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, needs to be checked and revised, to make any two hosts accessible to each other.

:::note

- The host where the client program runs also needs to be configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.

-- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for port 6030~6042 need to be open if a firewall is enabled.
+- Please ensure that your firewall rules do not block TCP/UDP on ports 6030-6042 on all hosts in the cluster.

:::

### Step 2

-If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/\*` assuming the `dataDir` is configured as `/var/lib/taos`.
+If any previous version of TDengine has been installed and configured on any host, the installation needs to be removed and the data needs to be cleaned up. For details about uninstalling please refer to [Install and Uninstall](/operation/pkg-install). To clean up the data, please use `rm -rf /var/lib/taos/\*` assuming the `dataDir` is configured as `/var/lib/taos`.
+
+:::note
+
+As a best practice, before cleaning up any data files or directories, please ensure that your data has been backed up correctly, if required by your data integrity, backup, security, or other standard operating protocols (SOP).
+
+:::

### Step 3

-Now it's time to install TDengine on all hosts without starting `taosd`, the versions on all hosts should be same. If it's prompted to input the existing TDengine cluster, simply press carriage return to ignore it. `install.sh -e no` can also be used to disable this prompt. For details please refer to [Install and Uninstall](/operation/pkg-install).
+Now it's time to install TDengine on all hosts, but without starting `taosd`. Note that the versions on all hosts should be the same. If you are prompted to input the existing TDengine cluster, simply press carriage return to ignore the prompt. `install.sh -e no` can also be used to disable this prompt. For details please refer to [Install and Uninstall](/operation/pkg-install).

### Step 4

-Now each physical node (referred to as `dnode` hereinafter, it's abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host, multiple TDengine nodes can be started on single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
+Now each physical node (referred to, hereinafter, as `dnode`, which is an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host. Multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicting. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.

```c
// firstEp is the end point to connect to when any dnode starts
@@ -67,9 +73,11 @@ Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset`

## Start Cluster

+In the following example we assume that the first dnode has FQDN h1.taosdata.com and the second dnode has FQDN h2.taosdata.com.
+
### Start The First DNODE

-The first dnode can be started following the instructions in [Get Started](/get-started/), for example h1.taosdata.com. Then TDengine CLI `taos` can be launched to execute command `show dnodes`, the output is as following for example:
+The first dnode can be started following the instructions in [Get Started](/get-started/). Then TDengine CLI `taos` can be launched to execute the command `show dnodes`; the output will look like the following example:

```
Welcome to the TDengine shell from Linux, Client Version:2.0.0.0
@@ -80,27 +88,41 @@ Copyright (c) 2017 by TAOS Data, Inc. All rights reserved.
taos> show dnodes;
 id |       end_point      | vnodes | cores | status | role |       create_time       |
=====================================================================================
- 1 | h1.taos.com:6030     |      0 |     2 | ready  | any  | 2020-07-31 03:49:29.202 |
+ 1 | h1.taosdata.com:6030 |      0 |     2 | ready  | any  | 2020-07-31 03:49:29.202 |
Query OK, 1 row(s) in set (0.006385s)

taos>
```

-From the above output, it is shown that the end point of the started dnode is "h1.taos.com:6030", which is the `firstEp` of the cluster.
+From the above output, it is shown that the end point of the started dnode is "h1.taosdata.com:6030", which is the `firstEp` of the cluster.

### Start Other DNODEs

There are a few steps necessary to add other dnodes in the cluster.

-First, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please making sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
+Let's assume we are starting the second dnode with FQDN, h2.taosdata.com. First we make sure the configuration is correct.

-Then, on the first dnode, use TDengine CLI `taos` to execute below command to add the end point of the dnode in the cluster. In the command "fqdn:port" should be quoted using double quotes.
+```c
+// firstEp is the end point to connect to when any dnode starts
+firstEp h1.taosdata.com:6030
+
+// must be configured to the FQDN of the host where the dnode is launched
+fqdn h2.taosdata.com
+
+// the port used by the dnode, default is 6030
+serverPort 6030
+```
+
+Second, we can start `taosd` as instructed in [Get Started](/get-started/).
+
+Then, on the first dnode, i.e. h1.taosdata.com in our example, use TDengine CLI `taos` to execute the following command to add the end point of the new dnode to the cluster. In the command, "fqdn:port" should be quoted using double quotes.

```sql
CREATE DNODE "h2.taosdata.com:6030";
```

-Then on the first dnode, execute `show dnodes` in `taos` to show whether the second dnode has been added in the cluster successfully or not.
+Then on the first dnode h1.taosdata.com, execute `show dnodes` in `taos` to show whether the second dnode has been added to the cluster successfully or not.

```sql
SHOW DNODES;
@@ -3,13 +3,13 @@ title: Data Types
description: "The data types supported by TDengine include timestamp, float, JSON, etc"
---

-When using TDengine to store and query data, the most important part of the data is timestamp. Timestamp must be specified when creating and inserting data rows or querying data, timestamp must follow below rules:
+When using TDengine to store and query data, the most important part of the data is the timestamp. A timestamp must be specified when creating and inserting data rows or querying data, and it must follow the rules below:

- the format must be `YYYY-MM-DD HH:mm:ss.MS`, the default time precision is millisecond (ms), for example `2017-08-12 18:25:58.128`
- the internal function `now` can be used to get the current timestamp on the client side
- the current timestamp of the client side is applied when `now` is used to insert data
- Epoch Time: a timestamp can also be a long integer number, which means the number of seconds, milliseconds or nanoseconds, depending on the time precision, since 1970-01-01 00:00:00.000 (UTC/GMT)
-- timestamp can be applied with add/subtract operation, for example `now-2h` means 2 hours back from the time at which query is executed,the unit can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), w(week.。 So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for down sampling operation.
+- timestamps can be used in add/subtract operations, for example `now-2h` means 2 hours back from the time at which the query is executed; the unit can be b(nanosecond), u(microsecond), a(millisecond), s(second), m(minute), h(hour), d(day), or w(week). So `select * from t1 where ts > now-2w and ts <= now-1w` means the data between two weeks ago and one week ago. The time unit can also be n (calendar month) or y (calendar year) when specifying the time window for a down sampling operation.

Time precision in TDengine can be set by the `PRECISION` parameter when executing `CREATE DATABASE`, as shown below; the default time precision is millisecond.

@@ -17,7 +17,7 @@ Time precision in TDengine can be set by the `PRECISION` parameter when executin
CREATE DATABASE db_name PRECISION 'ns';
```

-In TDengine, below data types can be used when specifying a column or tag.
+In TDengine, the data types below can be used when specifying a column or tag.

| # | **type** | **Bytes** | **Description** |
| --- | :-------: | --------- | ------------------------- |

@@ -25,12 +25,12 @@ In TDengine, below data types can be used when specifying a column or tag.
| 2 | INT | 4 | Integer, the value range is [-2^31+1, 2^31-1], while -2^31 is treated as NULL |
| 3 | BIGINT | 8 | Long integer, the value range is [-2^63+1, 2^63-1], while -2^63 is treated as NULL |
| 4 | FLOAT | 4 | Floating point number, the effective number of digits is 6-7, the value range is [-3.4E38, 3.4E38] |
-| 5 | DOUBLE | 8 | double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
+| 5 | DOUBLE | 8 | Double precision floating point number, the effective number of digits is 15-16, the value range is [-1.7E308, 1.7E308] |
| 6 | BINARY | User Defined | Single-byte string for ASCII visible characters. Length must be specified when defining a column or tag of binary type. The string length can be up to 16374 bytes. The string value must be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\'` |
| 7 | SMALLINT | 2 | Short integer, the value range is [-32767, 32767], while -32768 is treated as NULL |
| 8 | TINYINT | 1 | Single-byte integer, the value range is [-127, 127], while -128 is treated as NULL |
| 9 | BOOL | 1 | Bool, the value range is {true, false} |
-| 10 | NCHAR | User Defined| Multiple-Byte string that can include like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\’`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. Error will be reported the string value exceeds the length defined. |
+| 10 | NCHAR | User Defined| Multiple-byte string that can include multi-byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes of storage. The string value should be quoted with single quotes. A literal single quote inside the string must be preceded with a backslash, like `\’`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
| 11 | JSON | | The JSON type can only be used on tags. A tag of JSON type cannot be used together with tags of any other type |
|
||||
|
||||
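As an illustration of these types, here is a sketch of a table definition that uses several of them (the table and column names are hypothetical):

```sql
CREATE TABLE sensor_data (
  ts TIMESTAMP,            -- primary key, required as the first column
  temperature FLOAT,       -- 4-byte floating point
  serial_no BIGINT,        -- 8-byte integer
  online BOOL,             -- true/false
  remark BINARY(64),       -- ASCII string up to 64 bytes
  device_name NCHAR(16)    -- multi-byte string, 16 characters
);
```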
:::tip
@ -35,7 +35,7 @@ CREATE DATABASE [IF NOT EXISTS] db_name [KEEP keep] [DAYS days] [UPDATE 1];
- maxVgroupsPerDb: [Description](/reference/config/#maxvgroupsperdb)
- comp: [Description](/reference/config/#comp)
- precision: [Description](/reference/config/#precision)

6. Please note that all of the parameters mentioned in this section can be configured in the configuration file `taosd.cfg` on the server side and are used by default; the default values can be overridden if they are specified in the `create database` statement.
:::
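For instance, a database that keeps data for 365 days and stores 10 days of data per file could be created as below — a sketch using the `KEEP`, `DAYS` and `UPDATE` parameters from the statement above; the database name is illustrative:

```sql
-- Override the server-side defaults for this database only.
CREATE DATABASE IF NOT EXISTS power KEEP 365 DAYS 10 UPDATE 1;
```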
@ -69,7 +69,7 @@ All data in the database will be deleted too. This command must be used with cau
## Change Database Configuration

Some examples are shown below to demonstrate how to change the configuration of a database. Please note that some configuration parameters can be changed after the database is created while others can't; for details of the configuration parameters of a database, please refer to [Configuration Parameters](/reference/config/).

```
ALTER DATABASE db_name COMP 2;
```
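Other parameters can be changed in the same way. For instance, the sketch below extends the retention period of the database, assuming `KEEP` is among the parameters changeable after creation:

```sql
ALTER DATABASE db_name KEEP 365;
```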
@ -124,4 +124,4 @@ SHOW DATABASES;
```
SHOW CREATE DATABASE db_name;
```

This command is useful when migrating data from one TDengine cluster to another. It can be used to get the CREATE statement, which can then be used on the target TDengine cluster to create an exactly identical database.
@ -12,12 +12,12 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam
:::info

1. The first column of a table must be of TIMESTAMP type, and it will be set as the primary key automatically.
2. The maximum length of the table name is 192 bytes.
3. The maximum length of each row is 16k bytes; please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
4. The name of a subtable can only consist of English characters, digits and underscores, and can't start with a digit. Table names are case insensitive.
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
6. The escape character "\`" can be used to avoid conflicts between table names and reserved keywords; the above rules are bypassed when the escape character is used on a table name, but the upper limit for the name length is still valid. Table names specified using the escape character are case sensitive, and only ASCII visible characters can be used with it. For example, \`aBc\` and \`abc\` are different table names, while `abc` and `aBc` are the same table name because both are converted to `abc` internally. See the sketch after this block.

:::
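A quick sketch of the escape character, assuming `binary` is reserved (as the BINARY type keyword suggests):

```sql
-- Without the backticks this table name would clash with a reserved keyword.
CREATE TABLE `binary` (ts TIMESTAMP, v INT);
```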
@ -28,9 +28,9 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam
```
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name TAGS (tag_value1, ...);
```

The above command creates a subtable using the specified super table as a template and with the specified tag values.
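For example, with the `meters` super table used throughout this document (the tag values are illustrative):

```sql
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
```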
### Create Subtable Using STable As Template With A Subset of Tags

```
CREATE TABLE [IF NOT EXISTS] tb_name USING stb_name (tag_name1, ...) TAGS (tag_value1, ...);
```
@ -44,11 +44,11 @@ The tags for which no value is specified will be set to NULL.
```
CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
```

This can be used to create a large number of tables in a single SQL statement and accelerate table creation.

:::info

- Creating tables in batch must use a super table as a template.
- The length of a single statement is suggested to be between 1,000 and 3,000 bytes for best performance.

:::
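A minimal sketch of batch creation against the `meters` super table (the table names and tag values are illustrative):

```sql
CREATE TABLE IF NOT EXISTS d2001 USING meters TAGS ('California.SanFrancisco', 2)
             IF NOT EXISTS d2002 USING meters TAGS ('California.LosAngeles', 3);
```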
@ -71,7 +71,7 @@ SHOW TABLES [LIKE tb_name_wildcard];
```
SHOW CREATE TABLE tb_name;
```

This is useful when migrating data from one TDengine cluster to another because it can be used to create exactly the same tables in the target database.

## Show Table Definition
@ -90,7 +90,7 @@ ALTER TABLE tb_name ADD COLUMN field_name data_type;
:::info

1. The maximum number of columns is 4096, and the minimum number of columns is 2.
2. The maximum length of a column name is 64 bytes.

:::
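For example, a sketch of adding a column, assuming `t1` is a normal table (not created from a super table):

```sql
ALTER TABLE t1 ADD COLUMN remark BINARY(20);
```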
@ -101,7 +101,7 @@ ALTER TABLE tb_name DROP COLUMN field_name;
```

:::note
If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.

:::
@ -111,10 +111,10 @@ If a table is created using a super table as template, the table definition can
```
ALTER TABLE tb_name MODIFY COLUMN field_name data_type(length);
```

If the type of a column is of variable length, like BINARY or NCHAR, this command can be used to change (or more precisely, increase) the length of the column.

:::note
If a table is created using a super table as template, the table definition can only be changed on the corresponding super table, and the change will be automatically applied to all the subtables created using this super table as template. For tables created in the normal way, the table definition can be changed directly on the table.

:::
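A sketch of increasing a variable-length column, continuing the hypothetical `remark` column added earlier:

```sql
ALTER TABLE t1 MODIFY COLUMN remark BINARY(50);
```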
@ -15,14 +15,14 @@ Keyword `STable`, abbreviated for super table, is supported since version 2.0.15
```
CREATE STable [IF NOT EXISTS] stb_name (timestamp_field_name TIMESTAMP, field1_name data_type1 [, field2_name data_type2 ...]) TAGS (tag1_name tag_type1, tag2_name tag_type2 [, tag3_name tag_type3]);
```

The SQL statement for creating a STable is similar to that for creating a table, but a special column set named `TAGS` must be specified with the names and types of the tags.

:::info

1. The tag types specified in TAGS should NOT be timestamp. Since 2.1.3.0, the timestamp type can be used in a TAGS column, but its value must be fixed and arithmetic operations can't be applied to it.
2. The tag names specified in TAGS should NOT be the same as those of other columns.
3. The tag names specified in TAGS should NOT be the same as any reserved keywords. (Please refer to [keywords](/taos-sql/keywords/).)
4. The maximum number of tags specified in TAGS is 128, there must be at least one tag, and the total length of all tag columns should NOT exceed 16KB.

:::
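For example, the `meters` super table used throughout this document can be created like this (a sketch mirroring a later example in this document):

```sql
CREATE STable IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(30), groupId INT);
```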
@ -32,7 +32,7 @@ The SQL statement of creating STable is similar to that of creating table, but a
```
DROP STable [IF EXISTS] stb_name;
```

All the subtables created using the dropped STable will be deleted automatically.

## Show All STables
@ -40,7 +40,7 @@ All the sub-tables created using the deleted STable will be deleted automaticall
```
SHOW STableS [LIKE tb_name_wildcard];
```

This command can be used to display information about all STables in the current database, including the name, creation time, number of columns, number of tags, and number of tables created using each STable.

## Show The Create Statement of A STable
@ -48,7 +48,7 @@ This command can be used to display the information of all STables in the curren
```
SHOW CREATE STable stb_name;
```

This command is useful when migrating data from one TDengine cluster to another because it can be used to create an exactly identical STable in the target database.

## Get STable Definition
@ -94,7 +94,7 @@ This command is used to add a new tag for a STable and specify the tag type.
```
ALTER STable stb_name DROP TAG tag_name;
```

Once a tag is removed from a super table, it will be removed automatically from all the subtables created using the super table as template.

### Change A Tag
@ -102,7 +102,7 @@ The tag will be removed automatically from all the sub tables crated using the s
```
ALTER STable stb_name CHANGE TAG old_tag_name new_tag_name;
```

Once a tag name is changed for a super table, the tag name will be changed automatically for all the subtables created using the super table as template.

### Change Tag Length
@ -113,6 +113,6 @@ ALTER STable stb_name MODIFY TAG tag_name data_type(length);
This command can be used to change (or more specifically, increase) the length of a tag of variable length types, like BINARY or NCHAR.

:::note
Changing tag values can be applied only to subtables. All other tag operations, like adding or removing a tag, can be applied only to a STable. If a new tag is added to a STable, the tag will be added with a NULL value for all its subtables.

:::
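A sketch of changing a tag value on a subtable, assuming the subtable `d1001` of `meters` used elsewhere in this document:

```sql
ALTER TABLE d1001 SET TAG location = 'California.SanDiego';
```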
@ -19,15 +19,15 @@ INSERT INTO
## Insert Single or Multiple Rows

Single or multiple rows specified with VALUES can be inserted into a specific table. For example:

A single row is inserted using the statement below.

```sql
INSERT INTO d1001 VALUES (NOW, 10.2, 219, 0.32);
```

Two rows are inserted using the statement below.

```sql
INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (1626164208000, 10.15, 217, 0.33);
```
@ -36,7 +36,7 @@ INSERT INTO d1001 VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32) (162616420
:::note

1. In the second example above, different formats are used in the two rows to be inserted. In the first row, the timestamp format is a date-and-time string, which is interpreted from the string value only. In the second row, the timestamp format is a long integer, which will be interpreted based on the database time precision.
2. When trying to insert multiple rows in a single statement, only the timestamp of one row can be set as NOW; otherwise there will be duplicate timestamps among the rows, and the result may not be as expected because NOW will be interpreted as the time at which the statement is executed.
3. The oldest timestamp that is allowed is the current time minus the KEEP parameter.
4. The newest timestamp that is allowed is the current time plus the DAYS parameter.
@ -51,13 +51,13 @@ INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27,
```

:::info
If no columns are explicitly specified, all the columns must be provided with values; this is called "all column mode". The insert performance of all column mode is much better than specifying a subset of columns, so it's encouraged to use "all column mode" while providing NULL values explicitly for the columns for which no actual value is available.

:::

## Insert Into Multiple Tables

One or multiple rows can be inserted into multiple tables in a single SQL statement, with or without specifying specific columns.

```sql
INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
```
@ -66,19 +66,19 @@ INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-
## Automatically Create Table When Inserting

If it's unknown whether the table already exists, the table can be created automatically while inserting, using the SQL statement below. To use this functionality, a STable must be used as template and tag values must be provided.

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:32.272', 10.2, 219, 0.32);
```

It's not necessary to provide values for all tags when creating tables automatically; the tags without values provided will be set to NULL.

```sql
INSERT INTO d21001 USING meters (groupId) TAGS (2) VALUES ('2021-07-13 14:06:33.196', 10.15, 217, 0.33);
```

Multiple rows can also be inserted into the same table in a single SQL statement.

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
@ -87,19 +87,19 @@ INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) VALUES ('202
```

:::info
Prior to version 2.0.20.5, when using `INSERT` to create tables automatically and specifying the columns, the column names had to follow the table name immediately. From version 2.0.20.5, the column names can either follow the table name immediately or be put between `TAGS` and `VALUES`. In the same SQL statement, however, these two ways of specifying column names can't be mixed.
:::

## Insert Rows From A File

Besides using `VALUES` to insert one or multiple rows, the data to be inserted can also be prepared in a CSV file with comma as the separator and each field value quoted by single quotes. Table definition is not required in the CSV file. For example, if the file "/tmp/csvfile.csv" contains the below data:

```
'2021-07-13 14:07:34.630', '10.2', '219', '0.32'
'2021-07-13 14:07:35.779', '10.15', '217', '0.33'
```

Then the data in this file can be inserted by the SQL statement below:

```sql
INSERT INTO d1001 FILE '/tmp/csvfile.csv';
```
@ -107,13 +107,13 @@ INSERT INTO d1001 FILE '/tmp/csvfile.csv';
## Create Tables Automatically and Insert Rows From File

From version 2.1.5.0, tables can be created automatically using a super table as template when inserting data from a CSV file, like below:

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile.csv';
```

Multiple tables can be created automatically and inserted into in a single SQL statement, like below:

```sql
INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/csvfile_21001.csv'
```
@ -122,15 +122,15 @@ INSERT INTO d21001 USING meters TAGS ('California.SanFrancisco', 2) FILE '/tmp/c
## More About Insert

For a SQL statement like `insert`, a stream parsing strategy is applied. That means that before an error is found and the execution is aborted, the part prior to the error point has already been executed. Below is an experiment to help understand the behavior.

First, a super table is created.

```sql
CREATE TABLE meters(ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS(location BINARY(30), groupId INT);
```

It can be proven with `SHOW STableS` that the super table has been created, while `SHOW TABLES` shows that no table exists yet.

```
taos> SHOW STableS;
@ -161,4 +161,4 @@ taos> SHOW TABLES;
Query OK, 1 row(s) in set (0.001091s)
```

From the above experiment, we can see that even though the value to be inserted is invalid, the table is still created.
@ -96,7 +96,7 @@ Query OK, 1 row(s) in set (0.000849s)
## Tags

Starting from version 2.0.14, tag columns can be selected together with data columns when querying subtables. Please note, however, that the wildcard \* doesn't represent any tag column, which means tag columns must be specified explicitly, as in the example below.

```
taos> SELECT location, groupid, current FROM d1001 LIMIT 2;
```
@ -109,7 +109,7 @@ Query OK, 2 row(s) in set (0.003112s)
## Get distinct values

The `DISTINCT` keyword can be used to get all the unique values of tag columns from a super table; it can also be used to get all the unique values of data columns from a table or subtable.

```sql
SELECT DISTINCT tag_name [, tag_name ...] FROM stb_name;
```
@ -120,7 +120,7 @@ SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
1. The configuration parameter `maxNumOfDistinctRes` in `taos.cfg` is used to control the number of rows to output. The minimum configurable value is 100,000, the maximum configurable value is 100,000,000, and the default value is 1,000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
2. It can't be guaranteed that the results selected using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique, because of the precision nature of floating point numbers.
3. `DISTINCT` can't be used in the sub-query of a nested query statement, and can't be used together with aggregate functions, `GROUP BY` or `JOIN` in the same SQL statement.

:::
@ -197,7 +197,7 @@ taos> SELECT SERVER_VERSION();
Query OK, 1 row(s) in set (0.000077s)
```

The statement below is used to check the server status. An integer, like `1`, is returned if the server status is OK; otherwise an error code is returned. This is compatible with the status check for TDengine from connection pools or third-party tools, and it avoids the problem of losing the connection from a connection pool when the wrong heartbeat checking SQL statement is used.

```
taos> SELECT SERVER_STATUS();
```
@ -248,12 +248,12 @@ summary:
## Special Keywords in TAOS SQL

- `TBNAME`: it is treated as a special tag when selecting on a super table, representing the names of the subtables in that super table.
- `_c0`: represents the first column of a table or super table.

## Tips

To get all the subtables and corresponding tag values from a super table:

```SQL
SELECT TBNAME, location FROM meters;
```
@ -271,8 +271,8 @@ Only filter on `TAGS` are allowed in the `where` clause for above two query stat
```
taos> SELECT TBNAME, location FROM meters;
             tbname               |            location            |
==================================================================
 d1004                            | California.LosAngeles          |
 d1003                            | California.LosAngeles          |
 d1002                            | California.SanFrancisco        |
 d1001                            | California.SanFrancisco        |
Query OK, 4 row(s) in set (0.000881s)
@ -285,10 +285,10 @@ Query OK, 1 row(s) in set (0.001091s)
```

- The wildcard \* can be used to get all columns, or specific column names can be specified. Arithmetic operations can be performed on columns of numeric types, and columns can be renamed in the result set.
- Arithmetic operations on columns can't be used in the where clause. For example, `where a*2>6;` is not allowed, but `where a>6/2;` can be used instead for the same purpose.
- Arithmetic operations on columns can't be used as the target of a select statement. For example, `select min(2*a) from t;` is not allowed, but `select 2*min(a) from t;` can be used instead.
- Logical operations can be used in the `WHERE` clause to filter numeric values; wildcards can be used to filter string values.
- Result sets are arranged in ascending order of the first column, i.e. timestamp, but output in descending order of timestamp can be requested. If `order by` is used on other columns, the result may not be as expected. By the way, \_c0 is used to represent the first column, i.e. timestamp.
- The `LIMIT` parameter is used to control the number of rows to output, and the `OFFSET` parameter is used to specify from which row to output. `LIMIT` and `OFFSET` are executed after `ORDER BY` in the query execution. A simple tip is that `LIMIT 5 OFFSET 2` can be abbreviated as `LIMIT 2, 5`; see the sketch after this list.
- What is controlled by `LIMIT` is the number of rows in each group when `GROUP BY` is used.
- The `SLIMIT` parameter is used to control the number of groups when `GROUP BY` is used. Similar to `LIMIT`, `SLIMIT 5 OFFSET 2` can be abbreviated as `SLIMIT 2, 5`.
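For example, the two queries below are equivalent sketches against the table `t1` used earlier in this document:

```sql
SELECT * FROM t1 LIMIT 5 OFFSET 2;
SELECT * FROM t1 LIMIT 2, 5;
```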
@ -296,7 +296,7 @@ Query OK, 1 row(s) in set (0.001091s)
## Where

The logical operations in the table below can be used in the `where` clause to filter the resulting rows.

| **Operation** | **Note** | **Applicable Data Types** |
| ------------- | ------------------------ | ----------------------------------------- |
@ -314,7 +314,7 @@ Logical operations in below table can be used in `where` clause to filter the re
**Explanations**:

- Operator `<\>` is equal to `!=`; please note that this operator can't be used on the first column of any table, i.e. the timestamp column.
- Operator `like` is used together with wildcards to match strings.
  - '%' matches 0 or any number of characters; '\_' matches any single ASCII character.
  - `\_` is used to match the \_ in a string.
@ -323,8 +323,8 @@ Logical operations in below table can be used in `where` clause to filter the re
- For the timestamp column, only one condition can be used; for other columns or tags, the `OR` keyword can be used to combine multiple logical operators. For example, `((value > 20 AND value < 30) OR (value < 12))`.
- From version 2.3.0.0, multiple conditions can be used on the timestamp column, but the result set can only contain a single time range.
- From version 2.0.17.0, the operator `BETWEEN AND` can be used in the where clause; for example, `WHERE col2 BETWEEN 1.5 AND 3.25` means the filter condition is equal to "1.5 ≤ col2 ≤ 3.25".
- From version 2.1.4.0, the operator `IN` can be used in the where clause. For example, `WHERE city IN ('California.SanFrancisco', 'California.SanDiego')`. For the bool type, both `{true, false}` and `{0, 1}` are allowed, but integers other than 0 or 1 are not. FLOAT and DOUBLE types are impacted by floating point precision; only values that match the condition within the tolerance will be selected. A non-primary-key column of timestamp type can be used with `IN`.
- From version 2.3.0.0, regular expressions are supported in the where clause with the keyword `match` or `nmatch`; the regular expression is case insensitive.

## Regular Expression
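A minimal sketch of the `match` operator against a tag, assuming the `meters` STable with the `location` tag used elsewhere in this document:

```sql
SELECT * FROM meters WHERE location MATCH '.*SanFrancisco$';
```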
@ -342,11 +342,11 @@ The regular expression being used must be compliant with POSIX specification, pl
Regular expressions can be used against only table names, i.e. `tbname`, and tags of binary/nchar types, but can't be used against data columns.

The maximum length of a regular expression string is 128 bytes. The configuration parameter `maxRegexStringLen` can be used to set the maximum allowed regular expression length. It's a configuration parameter on the client side, and it takes effect after restarting the client.

## JOIN

From version 2.2.0.0, inner join is fully supported in TDengine. More specifically, the inner join between table and table, between STable and STable, and between sub query and sub query are supported.

Only the primary key, i.e. timestamp, can be used in the join condition between table and table. For example:
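A minimal sketch of such a join, assuming the subtables `d1001` and `d1002` used elsewhere in this document:

```sql
SELECT d1001.ts, d1001.current, d1002.current FROM d1001, d1002 WHERE d1001.ts = d1002.ts;
```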
@ -369,7 +369,7 @@ Similary, join operation can be performed on the result set of multiple sub quer
:::note
Restrictions on the join operation:

- The number of tables or STables in a single join operation can't exceed 10.
- `FILL` is not allowed in a query statement that includes a JOIN operation.
- Arithmetic operations are not allowed on the result set of a join operation.
- `GROUP BY` is not allowed on a subset of the tables that participate in a join operation.

:::
@ -382,7 +382,7 @@ Restrictions on join operation:
A nested query is also called a sub query; it means that in a single SQL statement the result of an inner query can be used as the data source of the outer query.

From 2.2.0.0, an unassociated sub query can be used in the `FROM` clause. Unassociated means that the sub query doesn't use the parameters of the parent query. More specifically, in the `tb_name_list` of a `SELECT` statement, an independent SELECT statement can be used. So a complete nested query looks like:

```SQL
SELECT ... FROM (SELECT ... FROM ...) ...;
```
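A concrete sketch, assuming the `meters` STable used elsewhere in this document and subject to the nested query restrictions described in this section:

```sql
-- The inner query selects raw rows; the outer query aggregates them.
SELECT AVG(current) FROM (SELECT current FROM meters WHERE groupId = 2);
```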
@ -414,7 +414,7 @@ UNION ALL SELECT ...
[UNION ALL SELECT ...]
```

The `UNION ALL` operator can be used to combine the result sets of multiple select statements, as long as those result sets have exactly the same columns. `UNION ALL` doesn't remove redundant rows from the combined result sets. In a single SQL statement, at most 100 `UNION ALL` operators can be used.

### Examples
@ -4,7 +4,7 @@ title: Functions
## Aggregate Functions

Aggregate queries are supported in TDengine by the following aggregate functions and selection functions.

### COUNT
@ -12,11 +12,11 @@ Aggregate query is supported in TDengine by following aggregate functions and se
```
SELECT COUNT([*|field_name]) FROM tb_name [WHERE clause];
```

**Description**: Get the number of rows or the number of non-null values in a table or a super table.

**Return value type**: Long integer INT64

**Applicable column types**: All

**Applicable table types**: table, super table, subtable
@ -47,13 +47,13 @@ Query OK, 1 row(s) in set (0.001075s)
```
SELECT AVG(field_name) FROM tb_name [WHERE clause];
```

**Description**: Get the average value of a column in a table or STable.

**Return value type**: Double precision floating point number

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable

**Examples**:
@ -77,13 +77,13 @@ Query OK, 1 row(s) in set (0.000943s)
```
SELECT TWA(field_name) FROM tb_name WHERE clause;
```

**Description**: Time weighted average of a specific column within a time range.

**Return value type**: Double precision floating point number

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable

**More explanations**:
@ -95,13 +95,13 @@ SELECT TWA(field_name) FROM tb_name WHERE clause;
```
SELECT IRATE(field_name) FROM tb_name WHERE clause;
```

**Description**: Instantaneous rate of a specific column. The last two samples in the specified time range are used to calculate the instantaneous rate. If the last sample value is smaller, then only the last sample value is used instead of the difference between the last two sample values.

**Return value type**: Double precision floating point number

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable

**More explanations**:
@ -113,13 +113,13 @@ SELECT IRATE(field_name) FROM tb_name WHERE clause;
```
SELECT SUM(field_name) FROM tb_name [WHERE clause];
```

**Description**: The sum of a specific column in a table or STable.

**Return value type**: Double precision floating point number or long integer

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable

**Examples**:
@ -143,13 +143,13 @@ Query OK, 1 row(s) in set (0.000980s)
```
SELECT STDDEV(field_name) FROM tb_name [WHERE clause];
```

**Description**: Standard deviation of a specific column in a table or STable.

**Return value type**: Double precision floating point number

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable (starting from version 2.0.15.1)

**Examples**:
@ -261,7 +261,7 @@ Query OK, 1 row(s) in set (0.008388s)
## Selection Functions

When any selection function is used, the timestamp column or tag columns, including `tbname`, can be specified to show which rows the selected values come from.

### MIN
@ -269,13 +269,13 @@ When any selective function is used, timestamp column or tag columns including `
```
SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
```

**Description**: The minimum value of a specific column in a table or STable.

**Return value type**: Same as the data type of the column being operated upon

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable

**Examples**:
@ -299,13 +299,13 @@ Query OK, 1 row(s) in set (0.000950s)
```
SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
```

**Description**: The maximum value of a specific column in a table or STable.

**Return value type**: Same as the data type of the column being operated upon

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable

**Examples**:
@ -329,13 +329,13 @@ Query OK, 1 row(s) in set (0.000987s)
```
SELECT FIRST(field_name) FROM { tb_name | stb_name } [WHERE clause];
```

**Description**: The first non-null value of a specific column in a table or STable.

**Return value type**: Same as the column being operated upon

**Applicable column types**: Any data type

**Applicable table types**: table, STable

**More explanations**:
@ -365,13 +365,13 @@ Query OK, 1 row(s) in set (0.001023s)
```
SELECT LAST(field_name) FROM { tb_name | stb_name } [WHERE clause];
```

**Description**: The last non-NULL value of a specific column in a table or STable.

**Return value type**: Same as the column being operated upon

**Applicable column types**: Any data type

**Applicable table types**: table, STable

**More explanations**:
@ -403,11 +403,11 @@ SELECT TOP(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
**Description**: The greatest _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column, and counting all of them would exceed the upper limit _k_, then a subset of them is returned randomly.

**Return value type**: Same as the column being operated upon

**Applicable column types**: Data types except for timestamp, binary, nchar and bool

**Applicable table types**: table, STable

**More explanations**:
@ -440,9 +440,9 @@ Query OK, 2 row(s) in set (0.000810s)
```
SELECT BOTTOM(field_name, K) FROM { tb_name | stb_name } [WHERE clause];
```

**Description**: The least _k_ values of a specific column in a table or STable. If a value has multiple occurrences in the column, and counting all of them would exceed the upper limit _k_, then a subset of them is returned randomly.

**Return value type**: Same as the column being operated upon

**Applicable column types**: Data types except for timestamp, binary, nchar and bool
@ -584,7 +584,7 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [
**Description**: The value that matches the specified timestamp range is returned, if existing; otherwise an interpolated value is returned.

**Return value type**: Same as the column being operated upon

**Applicable column types**: Numeric data types
@ -608,7 +608,7 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } [WHERE where_condition] [
taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:40:00','2017-7-14 18:40:00') FILL(LINEAR);
```

- Get the original data every 5 seconds, without interpolation, between "2017-07-14 18:00:00" and "2017-07-14 19:00:00":

```
taos> SELECT INTERP(current) FROM t1 RANGE('2017-7-14 18:00:00','2017-7-14 19:00:00') EVERY(5s);
@ -662,7 +662,7 @@ SELECT INTERP(field_name) FROM { tb_name | stb_name } WHERE ts='timestamp' [FILL
Query OK, 1 row(s) in set (0.002652s)
```

If there is no data corresponding to the specified timestamp, an interpolated value is returned if an interpolation policy is specified by the `FILL` parameter; otherwise nothing is returned.

```
taos> SELECT INTERP(*) FROM meters WHERE tbname IN ('d636') AND ts='2017-7-14 18:40:00.005';
```
@ -819,7 +819,7 @@ SELECT DERIVATIVE(field_name, time_interval, ignore_negative) FROM tb_name [WHER
**More explanations**:

- It is available from version 2.1.3.0. The number of result rows is the total number of rows in the time range minus one; there is no output for the first row.
- It can be used together with `GROUP BY tbname` against a STable.

**Examples**:
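A sketch of a typical invocation, assuming the `meters` STable and its `current` column used elsewhere in this document:

```sql
-- Per-second rate of change of 'current', ignoring negative deltas,
-- computed per subtable of the STable.
SELECT DERIVATIVE(current, 1s, 1) FROM meters GROUP BY tbname;
```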
@ -882,7 +882,7 @@ SELECT CEIL(field_name) FROM { tb_name | stb_name } [WHERE clause];
**Applicable table types**: table, STable

**Applicable nested query**: Inner query and outer query

**More explanations**:
@ -8,11 +8,11 @@ Window related clauses are used to divide the data set to be queried into subset
## Time Window

The `INTERVAL` clause is used to generate time windows of the same time interval, and `SLIDING` is used to specify the time step by which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e] and [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step by which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.

`INTERVAL` and `SLIDING` should be used with aggregate functions and selection functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.

```
SELECT * FROM temp_tb_1 INTERVAL(1m);
```
@ -24,11 +24,11 @@ The time step specified by `SLIDING` can't exceed the time interval specified by
```
SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(2m);
```

When the time length specified by `SLIDING` is the same as that specified by `INTERVAL`, the sliding window is actually a flip window. The minimum time range specified by `INTERVAL` is 10 milliseconds (10a) prior to version 2.1.5.0. From version 2.1.5.0, the minimum time range of `INTERVAL` can be 1 microsecond (1u). However, if the DB precision is millisecond, the minimum time range is 1 millisecond (1a). Please note that the `timezone` parameter should be configured to the same value in the `taos.cfg` configuration file on both the client side and the server side.
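A legal counterpart of the illegal statement above, as a sketch in which the sliding step does not exceed the interval:

```sql
SELECT COUNT(*) FROM temp_tb_1 INTERVAL(1m) SLIDING(30s);
```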
## Status Window

In case of using integer, bool, or string to represent the status of a device at a moment, continuous rows with the same status belong to the same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status: [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. The status window is not applicable to STables for now.
@ -44,7 +44,7 @@ SELECT COUNT(*), FIRST(ts), status FROM temp_tb_1 STATE_WINDOW(status);
|
|||
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
|
||||
```
|
||||
|
||||
The primary key, i.e. timestamp, is used to determine which session window the row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to same session window; otherwise they belong to two different time windows. As shown in the figure below, if the limit of time interval for session window is specified as 12 seconds, then the 6 rows in the figure constitutes 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
|
||||
The primary key, i.e. timestamp, is used to determine which session window the row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different time windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 time windows, [2019-04-28 14:22:10,2019-04-28 14:22:30] and [2019-04-28 14:23:10,2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
|
||||
|
||||

|
||||
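To make `tol_val` concrete, a hedged sketch matching the 12-second scenario above (the table and aggregate functions follow the earlier snippets; the literal `12s` is illustrative):

```sql
-- Rows whose timestamps are within 12 seconds of the previous row fall
-- into the same session window; a larger gap starts a new window.
SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, 12s);
```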
|
||||
|
@ -54,7 +54,7 @@ If the time interval between two continuous rows are within the time interval sp
|
|||
|
||||
### Syntax
|
||||
|
||||
The full syntax of aggregate by window is as following:
|
||||
The full syntax of aggregate by window is as follows:
|
||||
|
||||
```sql
|
||||
SELECT function_list FROM tb_name
|
||||
|
@ -73,11 +73,11 @@ SELECT function_list FROM stb_name
|
|||
|
||||
### Restrictions
|
||||
|
||||
- Aggregate functions and selection functions can be used in `function_list`, with each function having only one output, for example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple output can't be used, for example DIFF or arithmetic operations.
|
||||
- Aggregate functions and select functions can be used in `function_list`, with each function having only one output, for example COUNT, AVG, SUM, STDDEV, LEASTSQUARES, PERCENTILE, MIN, MAX, FIRST, LAST. Functions having multiple output can't be used, for example DIFF or arithmetic operations.
|
||||
- `LAST_ROW` can't be used together with window aggregate.
|
||||
- Scalar functions, like CEIL/FLOOR, can't be used with window aggregate.
|
||||
- `WHERE` clause can be used to specify the starting and ending time and other filter conditions
|
||||
- `FILL` clause is used to specify how to fill when there is data missing in any window, including: \
|
||||
- `FILL` clause is used to specify how to fill when there is data missing in any window, including:
|
||||
1. NONE: No fill (the default fill mode)
|
||||
2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)`
|
||||
3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
|
||||
|
@ -88,21 +88,22 @@ SELECT function_list FROM stb_name
|
|||
:::info
|
||||
|
||||
1. Huge volume of interpolation output may be returned using `FILL`, so it's recommended to specify the time range when using `FILL`. The maximum number of interpolation values that can be returned in a single query is 10,000,000.
|
||||
2. The result set is in the ascending order of timestamp in aggregate by time window aggregate.
|
||||
2. The result set is in ascending order of timestamp in aggregate by time window aggregate.
|
||||
3. If aggregate by window is used on STable, the aggregate function is performed on all the rows matching the filter conditions. If `GROUP BY` is not used in the query, the result set will be returned in ascending order of timestamp; otherwise the result set is not exactly in the order of ascending timestamp in each group.
|
||||
:::
|
||||
|
||||
Aggregate by time window is also used in continuous query; please refer to [Continuous Query](/develop/continuous-query).
|
||||
|
||||
## Examples
|
||||
|
||||
The table of intelligent meters can be created like below SQL statement:
|
||||
The table of intelligent meters can be created by the SQL statement below:
|
||||
|
||||
```sql
|
||||
CREATE TABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
|
||||
```
|
||||
|
||||
The average current, maximum current and median of current in every 10 minutes of the past 24 hours can be calculated using below SQL statement, with missing value filled with the previous non-NULL value.
|
||||
The average current, maximum current and median of current in every 10 minutes for the past 24 hours can be calculated using the below SQL statement, with missing values filled with the previous non-NULL values.
|
||||
|
||||
```
|
||||
SELECT AVG(current), MAX(current), APERCENTILE(current, 50) FROM meters
|
||||
|
|
|
@ -5,8 +5,8 @@ title: Limits & Restrictions
|
|||
## Naming Rules
|
||||
|
||||
1. Only English characters, digits and underscore are allowed
|
||||
2. Can't be started with digits
|
||||
3. Case Insensitive without escape character "\`"
|
||||
2. Can't start with a digit
|
||||
3. Case insensitive without escape character "\`"
|
||||
4. Identifier with escape character "\`"
|
||||
To support more flexible table or column names, a new escape character "\`" is introduced. For more details please refer to [escape](/taos-sql/escape).
|
||||
|
||||
|
@ -18,7 +18,7 @@ The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.
|
|||
|
||||
- Maximum length of database name is 32 bytes
|
||||
- Maximum length of table name is 192 bytes, excluding the database name prefix and the separator
|
||||
- Maximum length of each data row is 48K bytes from version 2.1.7.0 , before which the limit is 16K bytes. Please be noted that the upper limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
|
||||
- Maximum length of each data row is 48K bytes from version 2.1.7.0, before which the limit is 16K bytes. Please note that the upper limit includes the extra 2 bytes consumed by each column of BINARY/NCHAR type.
|
||||
- Maximum length of column name is 64.
|
||||
- Maximum number of columns is 4096. There must be at least 2 columns, and the first column must be timestamp.
|
||||
- Maximum length of tag name is 64.
|
||||
|
@ -26,7 +26,7 @@ The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.
|
|||
- Maximum length of a single SQL statement is 1048576 bytes, i.e. 1 MB. It can be configured with the parameter `maxSQLLength` on the client side, and the applicable range is [65480, 1048576].
|
||||
- At most 4096 columns (or 1024 prior to 2.1.7.0) can be returned by `SELECT`; functions in the query statement may constitute columns. An error will be returned if the limit is exceeded.
|
||||
- The maximum numbers of databases, STables and tables depend only on the system resources.
|
||||
- Maximum of database name is 32 bytes, can't include "." and special characters.
|
||||
- Maximum of database name is 32 bytes, and it can't include "." or special characters.
|
||||
- Maximum replica number of database is 3
|
||||
- Maximum length of user name is 23 bytes
|
||||
- Maximum length of password is 15 bytes
|
||||
|
@ -37,7 +37,7 @@ The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.
|
|||
|
||||
## Restrictions of `GROUP BY`
|
||||
|
||||
`GROUP BY` can be performed on tags and `TBNAME`. It can be performed on data columns too, with one restriction that only one column and the number of unique values on that column is lower than 100,000. Please be noted that `GROUP BY` can't be performed on float or double type.
|
||||
`GROUP BY` can be performed on tags and `TBNAME`. It can also be performed on data columns, with the restriction that only one column is allowed and the number of unique values on that column must be lower than 100,000. Please note that `GROUP BY` can't be performed on float or double types.
|
||||
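For example, a hedged sketch assuming the `meters` STable (with a `location` tag) used in the examples elsewhere in this document; grouping by a tag and by `TBNAME` are both allowed:

```sql
SELECT COUNT(*) FROM meters GROUP BY location;
SELECT COUNT(*) FROM meters GROUP BY TBNAME;
```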
|
||||
## Restrictions of `IS NOT NULL`
|
||||
|
||||
|
@ -45,7 +45,7 @@ The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.
|
|||
|
||||
## Restrictions of `ORDER BY`
|
||||
|
||||
- Only one `order by` is allowed for normal table and sub table.
|
||||
- Only one `order by` is allowed for normal table and subtable.
|
||||
- At most two `order by` are allowed for STable, and the second one must be `ts`.
|
||||
- `order by tag` must be used with `group by tag` on the same tag; this rule is also applicable to `tbname` (see the sketch after this list).
|
||||
- `order by column` must be used with `group by column` or `top/bottom` on the same column. This rule is applicable to table and STable.
|
||||
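A hedged sketch of the `order by tag` rule above, again assuming the `meters` STable with a `location` tag:

```sql
-- Legal: the tag in ORDER BY is also the tag in GROUP BY.
SELECT AVG(current) FROM meters GROUP BY location ORDER BY location;
```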
|
@ -56,11 +56,11 @@ The legal character set is `[a-zA-Z0-9!?$%^&*()_–+={[}]:;@~#|<,>.?/]`.
|
|||
|
||||
### Name Restrictions of Table/Column
|
||||
|
||||
The name of a table or column can only be composed of ASCII characters, digits and underscore, while digit can't be used as the beginning. The maximum length is 192 bytes. Names are case insensitive. The name mentioned in this rule doesn't include the database name prefix and the separator.
|
||||
The name of a table or column can only be composed of ASCII characters, digits and underscore, while it can't start with a digit. The maximum length is 192 bytes. Names are case insensitive. The name mentioned in this rule doesn't include the database name prefix and the separator.
|
||||
|
||||
### Name Restrictions After Escaping
|
||||
|
||||
To support more flexible table or column names, new escape character "`" is introduced in TDengine to avoid the conflict between table name and keywords and break the above restrictions for table name. The escape character is not counted in the length of table name.
|
||||
To support more flexible table or column names, new escape character "\`" is introduced in TDengine to avoid the conflict between table name and keywords and break the above restrictions for table names. The escape character is not counted in the length of table name.
|
||||
|
||||
With escaping, the string inside escape characters is case sensitive, i.e. it will not be converted to lower case internally.
|
||||
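A minimal hedged sketch (hypothetical names) of how escaping preserves case:

```sql
-- Escaped, `TBL` and `tbl` are kept case sensitive and therefore
-- name two different tables; unescaped they would both become tbl.
CREATE TABLE `TBL` (ts TIMESTAMP, val INT);
CREATE TABLE `tbl` (ts TIMESTAMP, val INT);
```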
|
||||
|
|
|
@ -52,13 +52,13 @@ title: JSON Type
|
|||
|
||||
4. Tag Operations
|
||||
|
||||
The value of JSON tag can be altered. Please be noted that the full JSON will be override when doing this.
|
||||
The value of a JSON tag can be altered. Please note that the full JSON value will be overridden when doing this.
|
||||
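A hedged sketch with hypothetical table and tag names; the new JSON value replaces the old one entirely rather than being merged:

```sql
ALTER TABLE jtable SET TAG jtag='{"level":"high","city":"beijing"}';
```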
|
||||
The name of a JSON tag can be altered. A tag of JSON type can't be added or removed. The column length of a JSON tag can't be changed.
|
||||
|
||||
## Other Restrictions
|
||||
|
||||
- JSON type can only be used for tag. There can be only one tag of JSON type, and it's exclusive to any other types of tag.
|
||||
- JSON type can only be used for a tag. There can be only one tag of JSON type, and it's mutually exclusive with tags of any other type.
|
||||
|
||||
- The maximum length of keys in JSON is 256 bytes, and keys must be printable ASCII characters. The maximum total length of a JSON value is 4,096 bytes.
|
||||
|
||||
|
@ -74,7 +74,7 @@ title: JSON Type
|
|||
|
||||
- If a JSON tag is the result of an inner query, it can't be parsed and queried in the outer query.
|
||||
|
||||
For example, below SQL statements are not supported.
|
||||
For example, the below SQL statements are not supported.
|
||||
|
||||
```sql
|
||||
select jtag->'key' from (select jtag from STable);
|
||||
|
|
|
@ -6,7 +6,7 @@ description: Install, Uninstall, Start, Stop and Upgrade
|
|||
import Tabs from "@theme/Tabs";
|
||||
import TabItem from "@theme/TabItem";
|
||||
|
||||
TDengine community version provides dev and rpm package for users to choose based on the system environment. deb supports Debian, Ubuntu and systems derived from them. rpm supports CentOS, RHEL, SUSE and systems derived from them. Furthermore, tar.gz package is provided for enterprise customers.
|
||||
TDengine community version provides deb and rpm packages for users to choose based on the system environment. deb supports Debian, Ubuntu and systems derived from them. rpm supports CentOS, RHEL, SUSE and systems derived from them. Furthermore, a tar.gz package is provided for enterprise customers.
|
||||
|
||||
## Install
|
||||
|
||||
|
@ -14,7 +14,7 @@ TDengine community version provides dev and rpm package for users to choose base
|
|||
<TabItem label="Install Deb" value="debinst">
|
||||
|
||||
1. Download the deb package from the official website, for example TDengine-server-2.4.0.7-Linux-x64.deb
|
||||
2. In the directory where the package is located, execute below command
|
||||
2. In the directory where the package is located, execute the command below
|
||||
|
||||
```bash
|
||||
$ sudo dpkg -i TDengine-server-2.4.0.7-Linux-x64.deb
|
||||
|
@ -46,7 +46,7 @@ TDengine is installed successfully!
|
|||
<TabItem label="Install RPM" value="rpminst">
|
||||
|
||||
1. Download the rpm package from the official website, for example TDengine-server-2.4.0.7-Linux-x64.rpm;
|
||||
2. In the directory where the package is located, execute below command
|
||||
2. In the directory where the package is located, execute the command below
|
||||
|
||||
```
|
||||
$ sudo rpm -ivh TDengine-server-2.4.0.7-Linux-x64.rpm
|
||||
|
@ -77,7 +77,7 @@ TDengine is installed successfully!
|
|||
<TabItem label="Install tar.gz" value="tarinst">
|
||||
|
||||
1. Download the tar.gz package, for example TDengine-server-2.4.0.7-Linux-x64.tar.gz;
|
||||
2、In the directory where the package is located, firstly decompress the file, then switch to the sub-directory generated in decompressing, i.e. "TDengine-enterprise-server-2.4.0.7/" in this example, and execute the `install.sh` script.
|
||||
2. In the directory where the package is located, first decompress the file, then switch to the sub-directory generated in decompressing, i.e. "TDengine-enterprise-server-2.4.0.7/" in this example, and execute the `install.sh` script.
|
||||
|
||||
```bash
|
||||
$ tar xvzf TDengine-enterprise-server-2.4.0.7-Linux-x64.tar.gz
|
||||
|
@ -132,7 +132,7 @@ Some configuration will be prompted for users to provide when install.sh is exec
|
|||
</Tabs>
|
||||
|
||||
:::note
|
||||
When installing on the first node in the cluster, when "Enter FQDN:" is prompted, nothing needs to be provided. When installing on following nodes, when "Enter FQDN:" is prompted, the end point of the first dnode in the cluster can be input if it has been already up; or just ignore it and configure later after installation is done.
|
||||
When installing on the first node in the cluster, when "Enter FQDN:" is prompted, nothing needs to be provided. When installing on following nodes, when "Enter FQDN:" is prompted, the end point of the first dnode in the cluster can be input if it is already up; or just ignore it and configure later after installation is done.
|
||||
|
||||
:::
|
||||
|
||||
|
@ -181,14 +181,14 @@ taosKeeper is removed successfully!
|
|||
|
||||
:::note
|
||||
|
||||
- It's strongly suggested not to use multiple kinds of installation packages on single host TDengine
|
||||
- After deb package is installed, if the installation directory is removed manually so that uninstall or reinstall can't succeed, it can be resolved by cleaning up TDengine package information as below command and then reinstalling.
|
||||
- It's strongly suggested not to use multiple kinds of TDengine installation packages on a single host
|
||||
- After deb package is installed, if the installation directory is removed manually so that uninstall or reinstall can't succeed, it can be resolved by cleaning up TDengine package information as in the command below and then reinstalling.
|
||||
|
||||
```bash
|
||||
$ sudo rm -f /var/lib/dpkg/info/tdengine*
|
||||
```
|
||||
|
||||
- After rpm package is installed, if the installation directory is removed manually so that uninstall or reinstall can't succeed, it can be resolved by cleaning up TDengine package information as below command and then reinstalling.
|
||||
- After rpm package is installed, if the installation directory is removed manually so that uninstall or reinstall can't succeed, it can be resolved by cleaning up TDengine package information as in the command below and then reinstalling.
|
||||
|
||||
```bash
|
||||
$ sudo rpm -e --noscripts tdengine
|
||||
|
@ -228,14 +228,14 @@ During the installation process:
|
|||
|
||||
:::note
|
||||
|
||||
- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution because data can't be recovered once
|
||||
- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution, because data can't be recovered once deleted
|
||||
- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as a configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
|
||||
|
||||
## Start and Stop
|
||||
|
||||
Linux system services `systemd`, `systemctl` or `service` is used to start, stop and restart TDengine. The server process of TDengine is `taosd`, which is started automatically after the Linux system is started. System operator can use `systemd`, `systemctl` or `service` to start, stop or restart TDengine server.
|
||||
Linux system services `systemd`, `systemctl` or `service` are used to start, stop and restart TDengine. The server process of TDengine is `taosd`, which is started automatically after the Linux system is started. System operators can use `systemd`, `systemctl` or `service` to start, stop or restart TDengine server.
|
||||
|
||||
For example, if using `systemctl` , the commands to start, stop, restart and check TDengine server are as below:
|
||||
For example, if using `systemctl`, the commands to start, stop, restart and check TDengine server are as below:
|
||||
|
||||
- Start server: `systemctl start taosd`
|
||||
|
||||
|
@ -263,12 +263,12 @@ Active: inactive (dead)
|
|||
|
||||
There are two aspects to the upgrade operation: upgrading the installation package and upgrading a running server.
|
||||
|
||||
Upgrading package should follow the steps mentioned previously to firstly uninstall old version then install new version.
|
||||
Upgrading package should follow the steps mentioned previously to first uninstall the old version then install the new version.
|
||||
|
||||
Upgrading a running server is much more complex. Firstly please check the version number of old version and new version. The version number of TDengine consists of 4 sections, only the first 3 section match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
|
||||
Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections; only if the first 3 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
|
||||
|
||||
- Stop inserting data
|
||||
- Make sure all data persisted into disk
|
||||
- Make sure all data are persisted into disk
|
||||
- Stop the cluster of TDengine
|
||||
- Uninstall old version and install new version
|
||||
- Start the cluster of TDengine
|
||||
|
@ -277,6 +277,7 @@ Upgrading a running server is much more complex. Firstly please check the versio
|
|||
- Restore business data
|
||||
|
||||
:::warning
|
||||
|
||||
TDengine doesn't guarantee any lower version is compatible with the data generated by a higher version, so it's never recommended to downgrade the version.
|
||||
|
||||
:::
|
||||
|
|
|
@ -6,7 +6,7 @@ The computing and storage resources need to be planned if using TDengine to buil
|
|||
|
||||
## Memory Requirement of Server Side
|
||||
|
||||
The number of vgroups created for each database is same as the number of CPU cores by default and can be configured by parameter `maxVgroupsPerDb`, each vnode in a vgroup stores one replica. Each vnode consumes fixed size of memory, i.e. `blocks` \* `cache`. Besides, some memory is required for tag values associated with each table. A fixed amount of memory is required for each cluster. So, the memory required for each DB can be calculated using below formula:
|
||||
The number of vgroups created for each database is the same as the number of CPU cores by default and can be configured by parameter `maxVgroupsPerDb`, each vnode in a vgroup stores one replica. Each vnode consumes a fixed size of memory, i.e. `blocks` \* `cache`. Besides, some memory is required for tag values associated with each table. A fixed amount of memory is required for each cluster. So, the memory required for each DB can be calculated using the formula below:
|
||||
|
||||
```
|
||||
Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
|
||||
|
@ -14,7 +14,7 @@ Database Memory Size = maxVgroupsPerDb * replica * (blocks * cache + 10MB) + num
|
|||
|
||||
For example, assuming the default value of `maxVgroupsPerDb` is 64, the default value of `cache` is 16M, the default value of `blocks` is 6, there are 100,000 tables in a DB, the replica number is 1, and the total length of tag values is 256 bytes, the total memory required for this DB is: 64 \* 1 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 6859M.
|
||||
|
||||
In real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`.
|
||||
In the real operation of TDengine, we are more concerned about the memory used by each TDengine server process `taosd`.
|
||||
|
||||
```
|
||||
taosd_memory = vnode_memory + mnode_memory + query_memory
|
||||
|
@ -25,26 +25,26 @@ In the above formula:
|
|||
1. "vnode_memory" of a `taosd` process is the memory used by all vnodes hosted by this `taosd` process. It can be roughly calculated by firstly adding up the total memory of all DBs whose memory usage can be derived according to the formula mentioned previously then dividing by number of dnodes and multiplying the number of replicas.
|
||||
|
||||
```
|
||||
vnode_memory = sum(Database memory) / number_of_dnodes \* replica
|
||||
vnode_memory = sum(Database memory) / number_of_dnodes * replica
|
||||
```
|
||||
|
||||
2. "mnode_memory" of a `taosd` process is the memory consumed by a mnode. If there is one (and only one) mnode hosted in a `taosd` process, the memory consumed by "mnode" is "0.2KB \* the total number of tables in the cluster".
|
||||
|
||||
3. "query_memory" is the memory used when processing query requests. Each ongoing query consumes at least "0.2 KB \* total number of involved tables".
|
||||
|
||||
Please be noted that the above formulas can only be used to estimate the minimum memory requirement, instead of maximum memory usage. In a real production environment, it's better to preserve some redundance beyond the estimated minimum memory requirement. If memory is abundant, it's suggested to increase the value of parameter `blocks` to speed up data insertion and data query.
|
||||
Please note that the above formulas can only be used to estimate the minimum memory requirement, instead of maximum memory usage. In a real production environment, it's better to reserve some redundancy beyond the estimated minimum memory requirement. If memory is abundant, it's suggested to increase the value of parameter `blocks` to speed up data insertion and data query.
|
||||
|
||||
## Memory Requirement of Client Side
|
||||
|
||||
The client programs use TDengine client driver `taosc` to connect to the server side, there is also memory requirement for a client program.
|
||||
For the client programs using TDengine client driver `taosc` to connect to the server side, there is a memory requirement as well.
|
||||
|
||||
The memory consumed by a client program is mainly about the SQL statements for data insertion, caching of table metadata, and some internal use. Assuming maximum number of tables is N (the memory consumed by the metadata of each table is 256 bytes), maximum number of threads for parallel insertion is T, maximum length of a SQL statement is S (normally 1 MB), the memory required by a client program can be estimated using below formula:
|
||||
The memory consumed by a client program is mainly about the SQL statements for data insertion, caching of table metadata, and some internal use. Assuming maximum number of tables is N (the memory consumed by the metadata of each table is 256 bytes), maximum number of threads for parallel insertion is T, maximum length of a SQL statement is S (normally 1 MB), the memory required by a client program can be estimated using the below formula:
|
||||
|
||||
```
|
||||
M = (T * S * 3 + (N / 4096) + 100)
|
||||
```
|
||||
|
||||
For example, if the number of parallel data insertion threads is 100, total number of tables is 10,000,000, then minimum memory requirement of a client program is:
|
||||
For example, if the number of parallel data insertion threads is 100, total number of tables is 10,000,000, then the minimum memory requirement of a client program is:
|
||||
|
||||
```
|
||||
100 * 3 + (10000000 / 4096) + 100 = 2741 (MBytes)
|
||||
|
@ -56,10 +56,10 @@ So, at least 3GB needs to be reserved for such a client.
|
|||
|
||||
The CPU resources required depend on two aspects:
|
||||
|
||||
- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The computing resource consumed between inserting 1 row one time and inserting 10 rows one time is very small. So, the more the rows to insert one time, the higher the efficiency. Inserting in bach also exposes requirement for the client side which needs to cache rows and insert in batch once the cached rows reaches a threshold.
|
||||
- **Data Insertion** Each dnode of TDengine can process at least 10,000 insertion requests in one second, while each insertion request can have multiple rows. The difference in computing resource consumed between inserting 1 row at a time and inserting 10 rows at a time is very small. So, the more rows inserted at one time, the higher the efficiency. Inserting in batch also exposes requirements for the client side, which needs to cache rows and insert in batch once the cached rows reach a threshold.
|
||||
- **Data Query** High efficiency query is provided in TDengine, but it's hard to estimate the CPU resource required because the queries used in different use cases and the frequency of queries vary significantly. It can only be verified with the query statements, query frequency, data size to be queried, etc. provided by the user.
|
||||
|
||||
In short words, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. In real operation, it's suggested to control CPU usage below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources.
|
||||
In short, the CPU resource required for data insertion can be estimated but it's hard to do so for query use cases. In real operation, it's suggested to control CPU usage below 50%. If this threshold is exceeded, it's a reminder for system operator to add more nodes in the cluster to expand resources.
|
||||
|
||||
## Disk Requirement
|
||||
|
||||
|
@ -69,14 +69,14 @@ The compression ratio in TDengine is much higher than that in RDBMS. In most cas
|
|||
Raw DataSize = numOfTables * rowSizePerTable * rowsPerTable
|
||||
```
|
||||
|
||||
For example, there are 10,000,000 meters, while each meter collects data every 15 minutes and the data size of each collection si 128 bytes, so the raw data size of one year is: 10000000 \* 128 \* 24 \* 60 / 15 \* 365 = 44.8512(TB). Assuming compression ratio is 5, the actual disk size is: 44.851 / 5 = 8.97024(TB).
|
||||
For example, there are 10,000,000 meters, while each meter collects data every 15 minutes and the data size of each collection is 128 bytes, so the raw data size of one year is: 10000000 \* 128 \* 24 \* 60 / 15 \* 365 = 44.8512(TB). Assuming compression ratio is 5, the actual disk size is: 44.851 / 5 = 8.97024(TB).
|
||||
|
||||
Parameter `keep` can be used to set how long the data will be kept on disk. To further reduce storage cost, multiple storage levels can be enabled in TDengine, with the coldest data stored on the cheapest storage device, and this is transparent to application programs.
|
||||
|
||||
To increase the performance, multiple disks can be setup for parallel data reading or data inserting. Please be noted that expensive disk array is not necessary because replications are used in TDengine to provide high availability.
|
||||
To increase the performance, multiple disks can be setup for parallel data reading or data inserting. Please note that an expensive disk array is not necessary because replications are used in TDengine to provide high availability.
|
||||
|
||||
## Number of Hosts
|
||||
|
||||
A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulas mentioned previously. Then, according to the system resources that a single host can provide, assuming all hosts are same in resources, the number of hosts can be derived easily.
|
||||
A host can be either physical or virtual. The total memory, total CPU, total disk required can be estimated according to the formulas mentioned previously. Then, according to the system resources that a single host can provide, assuming all hosts have the same resources, the number of hosts can be derived easily.
|
||||
|
||||
**Quick Estimation for CPU, Memory and Disk** Please refer to [Resource Estimate](https://www.taosdata.com/config/config.html).
|
||||
|
|
|
@ -7,12 +7,15 @@ title: Fault Tolerance & Disaster Recovery
|
|||
|
||||
TDengine uses **WAL**, i.e. Write Ahead Log, to achieve fault tolerance and high reliability.
|
||||
|
||||
When a data block is received by TDengine, the original data block is firstly written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally due to any reason and then restarted.
|
||||
When a data block is received by TDengine, the original data block is first written into WAL. The log in WAL will be deleted only after the data has been written into data files in the database. Data can be recovered from WAL in case the server is stopped abnormally due to any reason and then restarted.
|
||||
|
||||
There are 2 configuration parameters related to WAL:
|
||||
|
||||
- walLevel:0:wal is disabled; 1:wal is enabled without fsync; 2:wal is enabled with fsync.
|
||||
- fsync:only valid when walLevel is set to 2, it specified the interval of invoking fsync. If set to 0, it means fsync is invoked immediately once WAL is written.
|
||||
- walLevel:
|
||||
  - 0: WAL is disabled;
|
||||
  - 1: WAL is enabled without fsync;
|
||||
  - 2: WAL is enabled with fsync.
|
||||
- fsync: only valid when walLevel is set to 2; it specifies the interval of invoking fsync. If set to 0, fsync is invoked immediately once WAL is written.
|
||||
|
||||
To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync needs to be set to 1. The penalty is that the data ingestion performance degrades. However, if the number of concurrent data insertion threads on the client side is big enough, for example 50, the data ingestion performance would still be good enough; our verification shows that the drop is only 30% compared to fsync set to 3,000 milliseconds.
|
||||
|
||||
|
@ -20,10 +23,10 @@ To achieve absolutely no data loss, walLevel needs to be set to 2 and fsync need
|
|||
|
||||
TDengine uses replications to provide high availability and disaster recovery capability.
|
||||
|
||||
TDengine cluster is managed by mnode. To make sure the high availability of mnode, multiple replicas can be configured by system parameter `numOfMnodes`. The data replication between mnode replicas is in synchronous way to guarantee the metadata consistency.
|
||||
TDengine cluster is managed by mnode. To make sure the high availability of mnode, multiple replicas can be configured by the system parameter `numOfMnodes`. The data replication between mnode replicas is performed in a synchronous way to guarantee the metadata consistency.
|
||||
|
||||
The number of replicas for time series data in TDengine is associated with each database, there can be a lot of databases in a cluster while each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
|
||||
The number of replicas for the time series data in TDengine is associated with each database, there can be a lot of databases in a cluster while each database can be configured with a different number of replicas. When creating a database, parameter `replica` is used to configure the number of replications. To achieve high availability, `replica` needs to be higher than 1.
|
||||
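For example, a hedged sketch of creating a database with 3 replicas (the database name `power` is hypothetical):

```sql
CREATE DATABASE power REPLICA 3;
```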
|
||||
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create table.
|
||||
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas for any database, otherwise it would fail when trying to create a table.
|
||||
|
||||
As long as the dnodes of a TDengine cluster are deployed on different physical machines and replica number is set to bigger than 1, high availability can be achieved without any other assistance. If dnodes of TDengine cluster are deployed in geographically different data centers, disaster recovery can be achieved too.
|
||||
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is set to bigger than 1, high availability can be achieved without any other assistance. If dnodes of TDengine cluster are deployed in geographically different data centers, disaster recovery can be achieved too.
|
||||
|
|
|
@ -2,7 +2,7 @@
|
|||
title: User Management
|
||||
---
|
||||
|
||||
System operator can use TDengine CLI `taos` to create or remove user or change password. The SQL command is as low:
|
||||
A system operator can use TDengine CLI `taos` to create or remove users or change passwords. The SQL commands are documented below:
|
||||
|
||||
## Create User
|
||||
|
||||
|
@ -10,7 +10,7 @@ System operator can use TDengine CLI `taos` to create or remove user or change p
|
|||
CREATE USER <user_name> PASS <'password'>;
|
||||
```
|
||||
|
||||
When creating a user and specifying the user name and password, password needs to be quoted using single quotes.
|
||||
When creating a user and specifying the user name and password, the password needs to be quoted using single quotes.
|
||||
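For example (the user name and password are hypothetical):

```sql
CREATE USER alice PASS 'Ab1_2345';
```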
|
||||
## Drop User
|
||||
|
||||
|
@ -18,7 +18,7 @@ When creating a user and specifying the user name and password, password needs t
|
|||
DROP USER <user_name>;
|
||||
```
|
||||
|
||||
Drop a user can only be performed by root.
|
||||
Dropping a user can only be performed by root.
|
||||
|
||||
## Change Password
|
||||
|
||||
|
@ -26,7 +26,7 @@ Drop a user can only be performed by root.
|
|||
ALTER USER <user_name> PASS <'password'>;
|
||||
```
|
||||
|
||||
To keep the case of the password when changing password, password needs to be quoted using single quotes.
|
||||
To keep the case of the password when changing password, the password needs to be quoted using single quotes.
|
||||
|
||||
## Change Privilege
|
||||
|
||||
|
@ -36,7 +36,7 @@ ALTER USER <user_name> PRIVILEGE <write|read>;
|
|||
|
||||
The privilege can be set to either `read` or `write`, without single quotes.
|
||||
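For example (the user name is hypothetical):

```sql
ALTER USER alice PRIVILEGE read;
```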
|
||||
Note:there is another privilege `super`, which not allowed to be authorized to any user.
|
||||
Note: there is another privilege, `super`, which is not allowed to be granted to any user.
|
||||
|
||||
## Show Users
|
||||
|
||||
|
@ -45,6 +45,6 @@ SHOW USERS;
|
|||
```
|
||||
|
||||
:::note
|
||||
In SQL syntax, `< >` means the part that needs to be input by user, excluding the `< >` itself.
|
||||
In SQL syntax, `< >` means the part that needs to be input by the user, excluding the `< >` itself.
|
||||
|
||||
:::
|
||||
|
|
|
@ -2,26 +2,26 @@
|
|||
title: Data Import
|
||||
---
|
||||
|
||||
There are multiple ways of importing data provided byTDengine: import with script, import from data file, import using `taosdump`.
|
||||
There are multiple ways of importing data provided by TDengine: import with script, import from data file, import using `taosdump`.
|
||||
|
||||
## Import Using Script
|
||||
|
||||
TDengine CLI `taos` supports `source <filename>` command for executing the SQL statements in the file in batch. The SQL statements for creating databases, creating tables, and inserting rows can be written in single file with one statement on each line, then the file can be executed using `source` command in TDengine CLI `taos` to execute the SQL statements in order and in batch. In the script file, any line beginning with "#" is treated as comments and ignored silently.
|
||||
TDengine CLI `taos` supports `source <filename>` command for executing the SQL statements in the file in batch. The SQL statements for creating databases, creating tables, and inserting rows can be written in a single file with one statement on each line, then the file can be executed using the `source` command in TDengine CLI `taos` to execute the SQL statements in order and in batch. In the script file, any line beginning with "#" is treated as comments and ignored silently.
|
||||
|
||||
## Import from Data File
|
||||
|
||||
In TDengine CLI, data can be imported from a CSV file into an existing table. The data in single CSV must belong to same table and must be consistent with the schema of that table. The SQL statement is as below:
|
||||
In TDengine CLI, data can be imported from a CSV file into an existing table. The data in a single CSV must belong to the same table and must be consistent with the schema of that table. The SQL statement is as below:
|
||||
|
||||
```sql
|
||||
insert into tb1 file 'path/data.csv';
|
||||
```
|
||||
|
||||
:::note
|
||||
If there is description in the first line of a CSV file, please remove it before importing. If there is no value for a column, please use `NULL` without quotes.
|
||||
If there is a description in the first line of the CSV file, please remove it before importing. If there is no value for a column, please use `NULL` without quotes.
|
||||
|
||||
:::
|
||||
|
||||
For example, there is a sub table d1001 whose schema is as below:
|
||||
For example, there is a subtable d1001 whose schema is as below:
|
||||
|
||||
```sql
|
||||
taos> DESCRIBE d1001
|
||||
|
@ -49,7 +49,7 @@ The format of the CSV file to be imported, data.csv, is as below:
|
|||
'2018-10-12 06:38:05.000',18.30000,219,0.31000
|
||||
```
|
||||
|
||||
Then, below SQL statement can be used to import data from file "data.csv", assuming the file is located under the home directory of current Linux user.
|
||||
Then, the below SQL statement can be used to import data from file "data.csv", assuming the file is located under the home directory of the current Linux user.
|
||||
|
||||
```sql
|
||||
taos> insert into d1001 file '~/data.csv';
|
||||
|
@ -58,4 +58,4 @@ Query OK, 9 row(s) affected (0.004763s)
|
|||
|
||||
## Import using taosdump
|
||||
|
||||
A convenient tool for importing and exporting data is provided by TDengine, `taosdump`, which can used to export data from one TDengine cluster and import into another one. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
|
||||
A convenient tool for importing and exporting data is provided by TDengine, `taosdump`, which can be used to export data from one TDengine cluster and import into another one. For the details of using `taosdump` please refer to [Tool for exporting and importing data: taosdump](/reference/taosdump).
|
||||
|
|
|
@ -3,7 +3,7 @@ sidebar_label: Connections & Tasks
|
|||
title: Manage Connections and Query Tasks
|
||||
---
|
||||
|
||||
System operator can use TDengine CLI to show the connections, ongoing queries, stream computing, and can close connection or stop ongoing query task or stream computing.
|
||||
A system operator can use TDengine CLI to show the connections, ongoing queries and stream computing, and can close a connection or stop an ongoing query task or stream computing.
|
||||
|
||||
## Show Connections
|
||||
|
||||
|
@ -51,4 +51,4 @@ The first column of the output is stream ID, which is composed of the connection
|
|||
KILL STREAM <stream-id>;
|
||||
```
|
||||
|
||||
The the above SQL command, `stream-id` is from the first column of the output of `SHOW STREAMS`.
|
||||
In the above SQL command, `stream-id` is taken from the first column of the output of `SHOW STREAMS`.
|
||||
|
|
|
@ -2,19 +2,19 @@
|
|||
title: TDengine Monitoring
|
||||
---
|
||||
|
||||
After TDengine is started, a database named `log` for monitoring is created automatically. The information about CPU, memory, disk, bandwidth, number of requests, disk I/O speed, slow query is written into `log` database on the basis of a predefined interval. Besides, some important system operations, like logon, create user, drop database, and alerts and warnings generated in TDengine are written into `log` database too. System operator can view the data in `log` database from TDengine CLI or from a web console.
|
||||
After TDengine is started, a database named `log` for monitoring is created automatically. The information about CPU, memory, disk, bandwidth, number of requests, disk I/O speed and slow queries is written into the `log` database at a predefined interval. Additionally, some important system operations, like logon, create user, drop database, and alerts and warnings generated in TDengine are written into the `log` database too. A system operator can view the data in the `log` database from TDengine CLI or from a web console.
|
||||
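For example, a minimal hedged sketch of inspecting the monitoring data from TDengine CLI (the exact table names inside `log` vary between versions):

```sql
USE log;
SHOW TABLES;
```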
|
||||
Collection of the monitoring information is enabled by default, but can be disabled by parameter `monitor` in configuration file.
|
||||
The collection of the monitoring information is enabled by default, but can be disabled by parameter `monitor` in the configuration file.
|
||||
|
||||
## TDinsight
|
||||
|
||||
TDinsight is a total solution which uses the monitor database `log` mentioned previously and Grafana to monitor a TDengine cluster.
|
||||
TDinsight is a complete solution which uses the monitor database `log` mentioned previously and Grafana to monitor a TDengine cluster.
|
||||
|
||||
From version 2.3.3.0, more monitoring data has been added in the `log` database. Please refer to [TDinsight Grafana Dashboard](https://grafana.com/grafana/dashboards/15167) to learn more details about using TDinsight to monitor TDengine.
|
||||
|
||||
A script `TDinsight.sh` is provided to deploy TDinsight in automatic way.
|
||||
A script `TDinsight.sh` is provided to deploy TDinsight automatically.
|
||||
|
||||
Download `TDinsight.sh` with below command:
|
||||
Download `TDinsight.sh` with the below command:
|
||||
|
||||
```bash
|
||||
wget https://github.com/taosdata/grafanaplugin/raw/master/dashboards/TDinsight.sh
|
||||
|
@ -38,7 +38,7 @@ There are two ways to setup Grafana alert notification.
|
|||
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -E <notifier uid>
|
||||
```
|
||||
|
||||
- The AliCloud SMS alert built in TDengine data source plugin can be enabled with parameter `-s`, the parameters of this way are as follows:
|
||||
- The AliCloud SMS alert built in TDengine data source plugin can be enabled with parameter `-s`, the parameters of enabling this plugin are listed below:
|
||||
|
||||
- `-I`: AliCloud SMS Key ID
|
||||
- `-K`: AliCloud SMS Key Secret
|
||||
|
@ -47,7 +47,7 @@ There are two ways to setup Grafana alert notification.
|
|||
- `-T`: Input parameters in JSON format for the SMS notification template, for example `{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}`
|
||||
- `-B`: List of mobile numbers to be notified
|
||||
|
||||
Below is an example of the full command using this way.
|
||||
Below is an example of the full command using the AliCloud SMS alert.
|
||||
|
||||
```bash
|
||||
sudo ./TDinsight.sh -a http://localhost:6041 -u root -p taosdata -s \
|
||||
|
@ -55,6 +55,6 @@ There are two ways to setup Grafana alert notification.
|
|||
-T '{"alarm_level":"%s","time":"%s","name":"%s","content":"%s"}'
|
||||
```
|
||||
|
||||
Launch `TDinsight.sh` as above command and restart Grafana, then open Dashboard `http://localhost:3000/d/tdinsight`.
|
||||
Launch `TDinsight.sh` with the command above and restart Grafana, then open Dashboard `http://localhost:3000/d/tdinsight`.
|
||||
|
||||
For more use cases and restrictions please refer to [TDinsight](/reference/tdinsight/).
|
||||
|
|
|
@ -10,13 +10,13 @@ The diagnostic for network connection can be executed between Linux and Linux or
|
|||
|
||||
Diagnostic steps:
|
||||
|
||||
1. If the port range to be diagnosed are being occupied by a `taosd` server process, please firstly stop `taosd.
|
||||
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server.
|
||||
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send testing package to the specified server and port.
|
||||
1. If the port range to be diagnosed is being occupied by a `taosd` server process, please first stop `taosd`.
|
||||
2. On the server side, execute command `taos -n server -P <port> -l <pktlen>` to monitor the port range starting from the port specified by `-P` parameter with the role of "server".
|
||||
3. On the client side, execute command `taos -n client -h <fqdn of server> -P <port> -l <pktlen>` to send a testing package to the specified server and port.
|
||||
|
||||
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and default value is 1,000. Please be noted that the package length must be same in the above 2 commands executed on server side and client side respectively.
|
||||
-l <pktlen\>: The size of the testing package, in bytes. The value range is [11, 64,000] and the default value is 1,000. Please note that the package length must be the same in the above 2 commands executed on the server side and client side respectively.
|
||||
|
||||
Output of the server side is as below for example:
|
||||
Output of the server side for the example is below:
|
||||
|
||||
```bash
|
||||
# taos -n server -P 6000
|
||||
|
@ -47,7 +47,7 @@ Output of the server side is as below for example:
|
|||
12/21 14:50:22.721261 0x7f53427ec700 UTL UDP: send:1000 bytes to 172.27.0.8 at 6011
|
||||
```
|
||||
|
||||
Output of the client side is as below for example:
|
||||
Output of the client side for the example is below:
|
||||
|
||||
```bash
|
||||
# taos -n client -h 172.27.0.7 -P 6000
|
||||
|
@ -65,13 +65,13 @@ Output of the client side is as below for example:
|
|||
12/21 14:50:22.721274 0x7fc95d859200 UTL successed to test UDP port:6011
|
||||
```
|
||||
|
||||
The output needs to be checked carefully for the system operator to find out root cause and solve the problem.
|
||||
The output needs to be checked carefully for the system operator to find out the root cause and solve the problem.
|
||||
|
||||
## Startup Status and RPC Diagnostic
|
||||
|
||||
`taos -n startup -h <fqdn of server>` can be used to check the startup status of a `taosd` process. This is a common task for a system operator to determine whether `taosd` has been started successfully, especially in the case of a cluster.
|
||||
|
||||
`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or work abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's network problem or `taosd` is abnormal.
|
||||
`taos -n rpc -h <fqdn of server>` can be used to check whether the port of a started `taosd` can be accessed or not. If `taosd` process doesn't respond or is working abnormally, this command can be used to initiate a rpc communication with the specified fqdn to determine whether it's a network problem or `taosd` is abnormal.
|
||||
|
||||
## Sync and Arbitrator Diagnostic
|
||||
|
||||
|
@ -80,7 +80,7 @@ taos -n sync -P 6040 -h <fqdn of server>
|
|||
taos -n sync -P 6042 -h <fqdn of server>
|
||||
```
|
||||
|
||||
The above commands can be executed on Linux Shell to check whether the port for sync works well and whether the sync module of the server side works well. Besides, `-P 6042` is used to check whether the arbitrator is configured properly and works well.
|
||||
The above commands can be executed on Linux Shell to check whether the port for sync is working well and whether the sync module on the server side is working well. Additionally, `-P 6042` is used to check whether the arbitrator is configured properly and is working well.
|
||||
|
||||
## Network Speed Diagnostic
|
||||
|
||||
|
@ -88,12 +88,12 @@ The above commands can be executed on Linux Shell to check whether the port for
|
|||
|
||||
From version 2.2.0.0, the above command can be executed on Linux Shell to test the network speed. It sends uncompressed packages to a running `taosd` server process or a simulated server process started by `taos -n server`. The parameters that can be used when testing network speed are as below:
|
||||
|
||||
-n:When set to "speed", it means testing network speed
|
||||
-h:The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used
|
||||
-P:The port of the server process to connect to, the default value is 6030
|
||||
-N:The number of packages that will be sent in the test, range is [1,10000], default value is 100
|
||||
-l:The size of each package in bytes, range is [1024, 1024 \* 1024 \* 1024], default value is 1024
|
||||
-S:The type of network packages to send, can be either TCP or UDP, default value is
|
||||
-n: When set to "speed", it means testing network speed.
|
||||
-h: The FQDN or IP of the server process to be connected to; if not set, the FQDN configured in `taos.cfg` is used.
|
||||
-P: The port of the server process to connect to, the default value is 6030.
|
||||
-N: The number of packages that will be sent in the test, range is [1,10000], default value is 100.
|
||||
-l: The size of each package in bytes, range is [1024, 1024 \* 1024 \* 1024], default value is 1024.
|
||||
-S: The type of network packages to send, can be either TCP or UDP, default value is TCP.
|
||||
|
||||
## FQDN Resolution Diagnostic
|
||||
|
||||
|
@ -101,22 +101,22 @@ From version 2.2.0.0, the above command can be executed on Linux Shell to test t
|
|||
|
||||
From version 2.2.0.0, the above command can be executed on Linux Shell to test the resolution speed of FQDN. It can be used to try to resolve an FQDN to an IP address and record the time spent in this process. The parameters that can be used for this purpose are as below:
|
||||
|
||||
-n:When set to "fqdn", it means testing the speed of resolving FQDN
|
||||
-h:The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default.
|
||||
-n: When set to "fqdn", it means testing the speed of resolving FQDN.
|
||||
-h: The FQDN to be resolved. If not set, the `FQDN` parameter in `taos.cfg` is used by default.
|
||||
|
||||
## Server Log
|
||||
|
||||
The parameter `debugFlag` is used to control the log level of `taosd` server process. The default value is 131, for debug purpose it needs to be escalated to 135 or 143.
|
||||
The parameter `debugFlag` is used to control the log level of the `taosd` server process. The default value is 131, for debug purpose it needs to be escalated to 135 or 143.
|
||||
|
||||
Once this parameter is set to 135 or 143, the log file grows very quickly especially when there is huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily, so on server side important information is stored at different place from other logs.
|
||||
Once this parameter is set to 135 or 143, the log file grows very quickly, especially when there is a huge volume of data insertion and data query requests. If all the logs are stored together, some important information may be missed very easily, so on the server side important information is stored in a different place from other logs.
|
||||
|
||||
- The log at level of INFO, WARNING and ERROR is stored in `taosinfo` so that it is easy to find important information
|
||||
- The log at level of DEBUG (135) and TRACE (143) and other information not handled by `taosinfo` are stored in `taosdlog`
|
||||
|
||||
## Client Log
|
||||
|
||||
An independent log file, named as "taoslog+<seq num\>" is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only log at level of INFO/ERROR/WARNING is recorded, it and needs to be changed to 135 or 143 so that log at DEBUG or TRACE level can be recorded for debugging purpose.
|
||||
An independent log file, named "taoslog+<seq num\>", is generated for each client program, i.e. a client process. The default value of `debugFlag` is also 131 and only logs at the level of INFO/ERROR/WARNING are recorded; for debugging purposes it needs to be changed to 135 or 143 so that logs at DEBUG or TRACE level can be recorded.
|
||||
|
||||
The maximum length of a single log file is controlled by parameter `numOfLogLines` and only 2 log files are kept for each `taosd` server process.
|
||||
|
||||
log file is written in async way to minimize the workload on disk, bu the penalty is that a few log lines may be lost in some extreme conditions.
|
||||
Log files are written in an async way to minimize the workload on disk, but the trade off for performance is that a few log lines may be lost in some extreme conditions.
|
||||
|
|
|
@@ -1,178 +0,0 @@ (file deleted)
/*
 * Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
 *
 * This program is free software: you can use, redistribute, and/or modify
 * it under the terms of the GNU Affero General Public License, version 3
 * or later ("AGPL"), as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>    // needed for time(NULL) below; missing in the original file
#include <unistd.h>
#include "../../../include/client/taos.h"  // include TDengine header file

typedef struct {
  char server_ip[64];
  char db_name[64];
  char tbl_name[64];
} param;

int g_thread_exit_flag = 0;
void* insert_rows(void *sarg);

void streamCallBack(void *param, TAOS_RES *res, TAOS_ROW row)
{
  // in this simple demo, it just prints out the result
  char temp[128];

  TAOS_FIELD *fields = taos_fetch_fields(res);
  int numFields = taos_num_fields(res);

  taos_print_row(temp, row, fields, numFields);

  printf("\n%s\n", temp);
}

int main(int argc, char *argv[])
{
  TAOS *taos;
  char db_name[64];
  char tbl_name[64];
  char sql[1024] = { 0 };

  if (argc != 4) {
    printf("usage: %s server-ip dbname tblname\n", argv[0]);
    exit(0);
  }

  strcpy(db_name, argv[2]);
  strcpy(tbl_name, argv[3]);

  // create a pthread that inserts one row per second to feed the stream calculation
  param *t_param = (param *)malloc(sizeof(param));
  if (NULL == t_param)
  {
    printf("failed to malloc\n");
    exit(1);
  }
  memset(t_param, 0, sizeof(param));
  strcpy(t_param->server_ip, argv[1]);
  strcpy(t_param->db_name, db_name);
  strcpy(t_param->tbl_name, tbl_name);

  pthread_t pid;
  pthread_create(&pid, NULL, (void * (*)(void *))insert_rows, t_param);

  sleep(3); // wait until the database has been created
  // open connection to database
  taos = taos_connect(argv[1], "root", "taosdata", db_name, 0);
  if (taos == NULL) {
    printf("failed to connect to server:%s\n", argv[1]);
    free(t_param);
    exit(1);
  }

  // start the stream calculation
  printf("please input stream SQL:[e.g., select count(*) from tblname interval(5s) sliding(2s);]\n");
  fgets(sql, sizeof(sql), stdin);
  if (sql[0] == 0) {
    printf("input NULL stream SQL, so exit!\n");
    free(t_param);
    exit(1);
  }

  // param is set to NULL in this demo, it shall be set to the pointer to app context
  TAOS_STREAM *pStream = taos_open_stream(taos, sql, streamCallBack, 0, NULL, NULL);
  if (NULL == pStream) {
    printf("failed to create stream\n");
    free(t_param);
    exit(1);
  }

  printf("press any key to exit\n");
  getchar();

  taos_close_stream(pStream);

  g_thread_exit_flag = 1;
  pthread_join(pid, NULL);

  taos_close(taos);
  free(t_param);

  return 0;
}

void* insert_rows(void *sarg)
{
  TAOS *taos;
  char command[1024] = { 0 };
  param *winfo = (param * )sarg;

  if (NULL == winfo){
    printf("para is null!\n");
    exit(1);
  }

  taos = taos_connect(winfo->server_ip, "root", "taosdata", NULL, 0);
  if (taos == NULL) {
    printf("failed to connect to server:%s\n", winfo->server_ip);
    exit(1);
  }

  // drop database
  sprintf(command, "drop database %s;", winfo->db_name);
  if (taos_query(taos, command) != 0) {
    printf("failed to drop database, reason:%s\n", taos_errstr(taos));
    exit(1);
  }

  // create database
  sprintf(command, "create database %s;", winfo->db_name);
  if (taos_query(taos, command) != 0) {
    printf("failed to create database, reason:%s\n", taos_errstr(taos));
    exit(1);
  }

  // use database
  sprintf(command, "use %s;", winfo->db_name);
  if (taos_query(taos, command) != 0) {
    printf("failed to use database, reason:%s\n", taos_errstr(taos));
    exit(1);
  }

  // create table
  sprintf(command, "create table %s (ts timestamp, speed int);", winfo->tbl_name);
  if (taos_query(taos, command) != 0) {
    printf("failed to create table, reason:%s\n", taos_errstr(taos));
    exit(1);
  }

  // insert one row per second until the main thread asks us to stop
  int64_t begin = (int64_t)time(NULL);
  int index = 0;
  while (1) {
    if (g_thread_exit_flag) break;

    index++;
    sprintf(command, "insert into %s values (%ld, %d)", winfo->tbl_name, (begin + index) * 1000, index);
    if (taos_query(taos, command)) {
      printf("failed to insert row [%s], reason:%s\n", command, taos_errstr(taos));
    }
    sleep(1);
  }

  taos_close(taos);
  return 0;
}
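This stream demo relies on the 2.x `taos_open_stream` API, which presumably is why it is removed in this 3.0 merge. Anyone still building it against a 2.x client library would compile it with something like `gcc demo.c -o stream_demo -ltaos -lpthread`; the output name and the assumption that libtaos is on the default library path are illustrative, not part of this commit.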
@@ -34,7 +34,6 @@ extern "C" {
 #define TSDB_INS_TABLE_USER_FUNCTIONS "user_functions"
 #define TSDB_INS_TABLE_USER_INDEXES "user_indexes"
 #define TSDB_INS_TABLE_USER_STABLES "user_stables"
-#define TSDB_INS_TABLE_USER_STREAMS "user_streams"
 #define TSDB_INS_TABLE_USER_TABLES "user_tables"
 #define TSDB_INS_TABLE_USER_TABLE_DISTRIBUTED "user_table_distributed"
 #define TSDB_INS_TABLE_USER_USERS "user_users"
@@ -53,10 +53,9 @@ typedef enum EStreamType {
 } EStreamType;

 typedef struct {
-  uint32_t  numOfTables;
-  SArray*   pGroupList;
+  SArray*   pTableList;
   SHashObj* map;  // speedup acquire the tableQueryInfo by table uid
-} STableGroupInfo;
+} STableListInfo;

 typedef struct SColumnDataAgg {
   int16_t colId;
@@ -232,7 +232,8 @@ void blockDebugShowData(const SArray* dataBlocks);
 int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks, STSchema* pTSchema, int32_t vgId,
                                     tb_uid_t uid, tb_uid_t suid);

-SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pSchema, bool createTb, int64_t suid, int32_t vgId);
+SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pSchema, bool createTb, int64_t suid,
+                            const char* stbFullName, int32_t vgId);

 static FORCE_INLINE int32_t blockGetEncodeSize(const SSDataBlock* pBlock) {
   return blockDataGetSerialMetaSize(pBlock) + blockDataGetSize(pBlock);
@@ -300,9 +300,7 @@ typedef struct SSchema {

 typedef struct {
   int32_t  nCols;
-  int32_t  sver;
-  int32_t  tagVer;
-  int32_t  colVer;
+  int32_t  version;
   SSchema* pSchema;
 } SSchemaWrapper;

@@ -310,9 +308,7 @@ static FORCE_INLINE SSchemaWrapper* tCloneSSchemaWrapper(const SSchemaWrapper* p
   SSchemaWrapper* pSW = (SSchemaWrapper*)taosMemoryMalloc(sizeof(SSchemaWrapper));
   if (pSW == NULL) return pSW;
   pSW->nCols = pSchemaWrapper->nCols;
-  pSW->sver = pSchemaWrapper->sver;
-  pSW->tagVer = pSchemaWrapper->tagVer;
-  pSW->colVer = pSchemaWrapper->colVer;
+  pSW->version = pSchemaWrapper->version;
   pSW->pSchema = (SSchema*)taosMemoryCalloc(pSW->nCols, sizeof(SSchema));
   if (pSW->pSchema == NULL) {
     taosMemoryFree(pSW);

@@ -367,9 +363,7 @@ static FORCE_INLINE int32_t tDecodeSSchema(SDecoder* pDecoder, SSchema* pSchema)
 static FORCE_INLINE int32_t taosEncodeSSchemaWrapper(void** buf, const SSchemaWrapper* pSW) {
   int32_t tlen = 0;
   tlen += taosEncodeVariantI32(buf, pSW->nCols);
-  tlen += taosEncodeVariantI32(buf, pSW->sver);
-  tlen += taosEncodeVariantI32(buf, pSW->tagVer);
-  tlen += taosEncodeVariantI32(buf, pSW->colVer);
+  tlen += taosEncodeVariantI32(buf, pSW->version);
   for (int32_t i = 0; i < pSW->nCols; i++) {
     tlen += taosEncodeSSchema(buf, &pSW->pSchema[i]);
   }

@@ -378,9 +372,7 @@ static FORCE_INLINE int32_t taosEncodeSSchemaWrapper(void** buf, const SSchemaWr

 static FORCE_INLINE void* taosDecodeSSchemaWrapper(const void* buf, SSchemaWrapper* pSW) {
   buf = taosDecodeVariantI32(buf, &pSW->nCols);
-  buf = taosDecodeVariantI32(buf, &pSW->sver);
-  buf = taosDecodeVariantI32(buf, &pSW->tagVer);
-  buf = taosDecodeVariantI32(buf, &pSW->colVer);
+  buf = taosDecodeVariantI32(buf, &pSW->version);
   pSW->pSchema = (SSchema*)taosMemoryCalloc(pSW->nCols, sizeof(SSchema));
   if (pSW->pSchema == NULL) {
     return NULL;

@@ -394,9 +386,7 @@ static FORCE_INLINE void* taosDecodeSSchemaWrapper(const void* buf, SSchemaWrapp

 static FORCE_INLINE int32_t tEncodeSSchemaWrapper(SEncoder* pEncoder, const SSchemaWrapper* pSW) {
   if (tEncodeI32v(pEncoder, pSW->nCols) < 0) return -1;
-  if (tEncodeI32v(pEncoder, pSW->sver) < 0) return -1;
-  if (tEncodeI32v(pEncoder, pSW->tagVer) < 0) return -1;
-  if (tEncodeI32v(pEncoder, pSW->colVer) < 0) return -1;
+  if (tEncodeI32v(pEncoder, pSW->version) < 0) return -1;
   for (int32_t i = 0; i < pSW->nCols; i++) {
     if (tEncodeSSchema(pEncoder, &pSW->pSchema[i]) < 0) return -1;
   }

@@ -406,9 +396,7 @@ static FORCE_INLINE int32_t tEncodeSSchemaWrapper(SEncoder* pEncoder, const SSch

 static FORCE_INLINE int32_t tDecodeSSchemaWrapper(SDecoder* pDecoder, SSchemaWrapper* pSW) {
   if (tDecodeI32v(pDecoder, &pSW->nCols) < 0) return -1;
-  if (tDecodeI32v(pDecoder, &pSW->sver) < 0) return -1;
-  if (tDecodeI32v(pDecoder, &pSW->tagVer) < 0) return -1;
-  if (tDecodeI32v(pDecoder, &pSW->colVer) < 0) return -1;
+  if (tDecodeI32v(pDecoder, &pSW->version) < 0) return -1;

   pSW->pSchema = (SSchema*)taosMemoryCalloc(pSW->nCols, sizeof(SSchema));
   if (pSW->pSchema == NULL) return -1;

@@ -421,9 +409,7 @@ static FORCE_INLINE int32_t tDecodeSSchemaWrapper(SDecoder* pDecoder, SSchemaWra

 static FORCE_INLINE int32_t tDecodeSSchemaWrapperEx(SDecoder* pDecoder, SSchemaWrapper* pSW) {
   if (tDecodeI32v(pDecoder, &pSW->nCols) < 0) return -1;
-  if (tDecodeI32v(pDecoder, &pSW->sver) < 0) return -1;
-  if (tDecodeI32v(pDecoder, &pSW->tagVer) < 0) return -1;
-  if (tDecodeI32v(pDecoder, &pSW->colVer) < 0) return -1;
+  if (tDecodeI32v(pDecoder, &pSW->version) < 0) return -1;

   pSW->pSchema = (SSchema*)tDecoderMalloc(pDecoder, pSW->nCols * sizeof(SSchema));
   if (pSW->pSchema == NULL) return -1;
@@ -469,7 +455,8 @@ int32_t tDeserializeSMDropStbReq(void* buf, int32_t bufLen, SMDropStbReq* pReq);
 typedef struct {
   char    name[TSDB_TABLE_FNAME_LEN];
   int8_t  alterType;
-  int32_t verInBlock;
+  int32_t tagVer;
+  int32_t colVer;
   int32_t numOfFields;
   SArray* pFields;
   int32_t ttl;
@@ -695,6 +682,7 @@ typedef struct {
   int8_t  replications;
   int8_t  strict;
   int8_t  cacheLastRow;
+  int8_t  schemaless;
   int8_t  ignoreExist;
   int32_t numOfRetensions;
   SArray* pRetensions;  // SRetention
@@ -1022,6 +1010,10 @@ typedef struct {
   SReplica replicas[TSDB_MAX_REPLICA];
   int32_t  numOfRetensions;
   SArray*  pRetensions;  // SRetention
+
+  // for tsma
+  int8_t isTsma;
+
 } SCreateVnodeReq;

 int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq);
@@ -1273,7 +1265,6 @@ int32_t tSerializeSCreateDropMQSBNodeReq(void* buf, int32_t bufLen, SMCreateQnod
 int32_t tDeserializeSCreateDropMQSBNodeReq(void* buf, int32_t bufLen, SMCreateQnodeReq* pReq);

 typedef struct {
   int32_t  dnodeId;
   int8_t   replica;
   SReplica replicas[TSDB_MAX_REPLICA];
 } SDCreateMnodeReq, SDAlterMnodeReq;
@@ -1646,8 +1637,8 @@ _err:
   return NULL;
 }

-// this message is sent from mnode to mnode(read thread to write thread), so there is no need for serialization or
-// deserialization
+// this message is sent from mnode to mnode(read thread to write thread),
+// so there is no need for serialization or deserialization
 typedef struct {
   SHashObj* rebSubHash;  // SHashObj<key, SMqRebSubscribe>
 } SMqDoRebalanceMsg;
@@ -1713,7 +1704,7 @@ typedef struct SVCreateStbReq {
   char*          name;
   tb_uid_t       suid;
   int8_t         rollup;
-  SSchemaWrapper schema;
+  SSchemaWrapper schemaRow;
   SSchemaWrapper schemaTag;
   SRSmaParam     pRSmaParam;
 } SVCreateStbReq;

@@ -1745,7 +1736,7 @@ typedef struct SVCreateTbReq {
       uint8_t* pTag;
     } ctb;
     struct {
-      SSchemaWrapper schema;
+      SSchemaWrapper schemaRow;
     } ntb;
   };
 } SVCreateTbReq;
@@ -144,6 +144,7 @@ enum {
   TD_DEF_MSG_TYPE(TDMT_MND_CREATE_TOPIC, "mnode-create-topic", SMCreateTopicReq, SMCreateTopicRsp)
   TD_DEF_MSG_TYPE(TDMT_MND_ALTER_TOPIC, "mnode-alter-topic", NULL, NULL)
   TD_DEF_MSG_TYPE(TDMT_MND_DROP_TOPIC, "mnode-drop-topic", NULL, NULL)
+  TD_DEF_MSG_TYPE(TDMT_MND_DROP_CGROUP, "mnode-drop-cgroup", NULL, NULL)
   TD_DEF_MSG_TYPE(TDMT_MND_SUBSCRIBE, "mnode-subscribe", SCMSubscribeReq, SCMSubscribeRsp)
   TD_DEF_MSG_TYPE(TDMT_MND_MQ_ASK_EP, "mnode-mq-ask-ep", SMqAskEpReq, SMqAskEpRsp)
   TD_DEF_MSG_TYPE(TDMT_MND_MQ_TIMER, "mnode-mq-tmr", SMTimerReq, NULL)
@@ -194,11 +195,8 @@ enum {
   TD_DEF_MSG_TYPE(TDMT_VND_EXPLAIN, "vnode-explain", NULL, NULL)

   TD_DEF_MSG_TYPE(TDMT_VND_SUBSCRIBE, "vnode-subscribe", SMVSubscribeReq, SMVSubscribeRsp)
-  TD_DEF_MSG_TYPE(TDMT_VND_CONSUME, "vnode-consume", SMqCVConsumeReq, SMqCVConsumeRsp)
+  TD_DEF_MSG_TYPE(TDMT_VND_CONSUME, "vnode-consume", SMqPollReq, SMqDataBlkRsp)
   TD_DEF_MSG_TYPE(TDMT_VND_TASK_DEPLOY, "vnode-task-deploy", SStreamTaskDeployReq, SStreamTaskDeployRsp)
-  TD_DEF_MSG_TYPE(TDMT_VND_TASK_PIPE_EXEC, "vnode-task-pipe-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
-  TD_DEF_MSG_TYPE(TDMT_VND_TASK_MERGE_EXEC, "vnode-task-merge-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
-  TD_DEF_MSG_TYPE(TDMT_VND_TASK_WRITE_EXEC, "vnode-task-write-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
   TD_DEF_MSG_TYPE(TDMT_VND_STREAM_TRIGGER, "vnode-stream-trigger", NULL, NULL)

   TD_DEF_MSG_TYPE(TDMT_VND_TASK_RUN, "vnode-stream-task-run", NULL, NULL)
@@ -235,9 +233,13 @@ enum {
   // Requests handled by SNODE
   TD_NEW_MSG_SEG(TDMT_SND_MSG)
   TD_DEF_MSG_TYPE(TDMT_SND_TASK_DEPLOY, "snode-task-deploy", SStreamTaskDeployReq, SStreamTaskDeployRsp)
-  TD_DEF_MSG_TYPE(TDMT_SND_TASK_EXEC, "snode-task-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
-  TD_DEF_MSG_TYPE(TDMT_SND_TASK_PIPE_EXEC, "snode-task-pipe-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
-  TD_DEF_MSG_TYPE(TDMT_SND_TASK_MERGE_EXEC, "snode-task-merge-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
+  //TD_DEF_MSG_TYPE(TDMT_SND_TASK_EXEC, "snode-task-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
+  //TD_DEF_MSG_TYPE(TDMT_SND_TASK_PIPE_EXEC, "snode-task-pipe-exec", SStreamTaskExecReq, SStreamTaskExecRsp)
+  //TD_DEF_MSG_TYPE(TDMT_SND_TASK_MERGE_EXEC, "snode-task-merge-exec", SStreamTaskExecReq, SStreamTaskExecRsp)

+  TD_DEF_MSG_TYPE(TDMT_SND_TASK_RUN, "snode-stream-task-run", NULL, NULL)
+  TD_DEF_MSG_TYPE(TDMT_SND_TASK_DISPATCH, "snode-stream-task-dispatch", NULL, NULL)
+  TD_DEF_MSG_TYPE(TDMT_SND_TASK_RECOVER, "snode-stream-task-recover", NULL, NULL)

   // Requests handled by SCHEDULER
   TD_NEW_MSG_SEG(TDMT_SCH_MSG)
@@ -93,166 +93,168 @@
 #define TK_VGROUPS 75
 #define TK_SINGLE_STABLE 76
 #define TK_RETENTIONS 77
-#define TK_NK_COLON 78
-#define TK_TABLE 79
-#define TK_NK_LP 80
-#define TK_NK_RP 81
-#define TK_STABLE 82
-#define TK_ADD 83
-#define TK_COLUMN 84
-#define TK_MODIFY 85
-#define TK_RENAME 86
-#define TK_TAG 87
-#define TK_SET 88
-#define TK_NK_EQ 89
-#define TK_USING 90
-#define TK_TAGS 91
-#define TK_COMMENT 92
-#define TK_BOOL 93
-#define TK_TINYINT 94
-#define TK_SMALLINT 95
-#define TK_INT 96
-#define TK_INTEGER 97
-#define TK_BIGINT 98
-#define TK_FLOAT 99
-#define TK_DOUBLE 100
-#define TK_BINARY 101
-#define TK_TIMESTAMP 102
-#define TK_NCHAR 103
-#define TK_UNSIGNED 104
-#define TK_JSON 105
-#define TK_VARCHAR 106
-#define TK_MEDIUMBLOB 107
-#define TK_BLOB 108
-#define TK_VARBINARY 109
-#define TK_DECIMAL 110
-#define TK_DELAY 111
-#define TK_FILE_FACTOR 112
-#define TK_NK_FLOAT 113
-#define TK_ROLLUP 114
-#define TK_TTL 115
-#define TK_SMA 116
-#define TK_SHOW 117
-#define TK_DATABASES 118
-#define TK_TABLES 119
-#define TK_STABLES 120
-#define TK_MNODES 121
-#define TK_MODULES 122
-#define TK_QNODES 123
-#define TK_FUNCTIONS 124
-#define TK_INDEXES 125
-#define TK_ACCOUNTS 126
-#define TK_APPS 127
-#define TK_CONNECTIONS 128
-#define TK_LICENCE 129
-#define TK_GRANTS 130
-#define TK_QUERIES 131
-#define TK_SCORES 132
-#define TK_TOPICS 133
-#define TK_VARIABLES 134
-#define TK_BNODES 135
-#define TK_SNODES 136
-#define TK_CLUSTER 137
-#define TK_TRANSACTIONS 138
-#define TK_LIKE 139
-#define TK_INDEX 140
-#define TK_FULLTEXT 141
-#define TK_FUNCTION 142
-#define TK_INTERVAL 143
-#define TK_TOPIC 144
-#define TK_AS 145
-#define TK_WITH 146
-#define TK_SCHEMA 147
-#define TK_DESC 148
-#define TK_DESCRIBE 149
-#define TK_RESET 150
-#define TK_QUERY 151
-#define TK_CACHE 152
-#define TK_EXPLAIN 153
-#define TK_ANALYZE 154
-#define TK_VERBOSE 155
-#define TK_NK_BOOL 156
-#define TK_RATIO 157
-#define TK_COMPACT 158
-#define TK_VNODES 159
-#define TK_IN 160
-#define TK_OUTPUTTYPE 161
-#define TK_AGGREGATE 162
-#define TK_BUFSIZE 163
-#define TK_STREAM 164
-#define TK_INTO 165
-#define TK_TRIGGER 166
-#define TK_AT_ONCE 167
-#define TK_WINDOW_CLOSE 168
-#define TK_WATERMARK 169
-#define TK_KILL 170
-#define TK_CONNECTION 171
-#define TK_TRANSACTION 172
-#define TK_MERGE 173
-#define TK_VGROUP 174
-#define TK_REDISTRIBUTE 175
-#define TK_SPLIT 176
-#define TK_SYNCDB 177
-#define TK_NULL 178
-#define TK_NK_QUESTION 179
-#define TK_NK_ARROW 180
-#define TK_ROWTS 181
-#define TK_TBNAME 182
-#define TK_QSTARTTS 183
-#define TK_QENDTS 184
-#define TK_WSTARTTS 185
-#define TK_WENDTS 186
-#define TK_WDURATION 187
-#define TK_CAST 188
-#define TK_NOW 189
-#define TK_TODAY 190
-#define TK_TIMEZONE 191
-#define TK_COUNT 192
-#define TK_FIRST 193
-#define TK_LAST 194
-#define TK_LAST_ROW 195
-#define TK_BETWEEN 196
-#define TK_IS 197
-#define TK_NK_LT 198
-#define TK_NK_GT 199
-#define TK_NK_LE 200
-#define TK_NK_GE 201
-#define TK_NK_NE 202
-#define TK_MATCH 203
-#define TK_NMATCH 204
-#define TK_CONTAINS 205
-#define TK_JOIN 206
-#define TK_INNER 207
-#define TK_SELECT 208
-#define TK_DISTINCT 209
-#define TK_WHERE 210
-#define TK_PARTITION 211
-#define TK_BY 212
-#define TK_SESSION 213
-#define TK_STATE_WINDOW 214
-#define TK_SLIDING 215
-#define TK_FILL 216
-#define TK_VALUE 217
-#define TK_NONE 218
-#define TK_PREV 219
-#define TK_LINEAR 220
-#define TK_NEXT 221
-#define TK_GROUP 222
-#define TK_HAVING 223
-#define TK_ORDER 224
-#define TK_SLIMIT 225
-#define TK_SOFFSET 226
-#define TK_LIMIT 227
-#define TK_OFFSET 228
-#define TK_ASC 229
-#define TK_NULLS 230
-#define TK_ID 231
-#define TK_NK_BITNOT 232
-#define TK_INSERT 233
-#define TK_VALUES 234
-#define TK_IMPORT 235
-#define TK_NK_SEMI 236
-#define TK_FILE 237
+#define TK_SCHEMALESS 78
+#define TK_NK_COLON 79
+#define TK_TABLE 80
+#define TK_NK_LP 81
+#define TK_NK_RP 82
+#define TK_STABLE 83
+#define TK_ADD 84
+#define TK_COLUMN 85
+#define TK_MODIFY 86
+#define TK_RENAME 87
+#define TK_TAG 88
+#define TK_SET 89
+#define TK_NK_EQ 90
+#define TK_USING 91
+#define TK_TAGS 92
+#define TK_COMMENT 93
+#define TK_BOOL 94
+#define TK_TINYINT 95
+#define TK_SMALLINT 96
+#define TK_INT 97
+#define TK_INTEGER 98
+#define TK_BIGINT 99
+#define TK_FLOAT 100
+#define TK_DOUBLE 101
+#define TK_BINARY 102
+#define TK_TIMESTAMP 103
+#define TK_NCHAR 104
+#define TK_UNSIGNED 105
+#define TK_JSON 106
+#define TK_VARCHAR 107
+#define TK_MEDIUMBLOB 108
+#define TK_BLOB 109
+#define TK_VARBINARY 110
+#define TK_DECIMAL 111
+#define TK_DELAY 112
+#define TK_FILE_FACTOR 113
+#define TK_NK_FLOAT 114
+#define TK_ROLLUP 115
+#define TK_TTL 116
+#define TK_SMA 117
+#define TK_SHOW 118
+#define TK_DATABASES 119
+#define TK_TABLES 120
+#define TK_STABLES 121
+#define TK_MNODES 122
+#define TK_MODULES 123
+#define TK_QNODES 124
+#define TK_FUNCTIONS 125
+#define TK_INDEXES 126
+#define TK_ACCOUNTS 127
+#define TK_APPS 128
+#define TK_CONNECTIONS 129
+#define TK_LICENCE 130
+#define TK_GRANTS 131
+#define TK_QUERIES 132
+#define TK_SCORES 133
+#define TK_TOPICS 134
+#define TK_VARIABLES 135
+#define TK_BNODES 136
+#define TK_SNODES 137
+#define TK_CLUSTER 138
+#define TK_TRANSACTIONS 139
+#define TK_LIKE 140
+#define TK_INDEX 141
+#define TK_FULLTEXT 142
+#define TK_FUNCTION 143
+#define TK_INTERVAL 144
+#define TK_TOPIC 145
+#define TK_AS 146
+#define TK_CGROUP 147
+#define TK_WITH 148
+#define TK_SCHEMA 149
+#define TK_DESC 150
+#define TK_DESCRIBE 151
+#define TK_RESET 152
+#define TK_QUERY 153
+#define TK_CACHE 154
+#define TK_EXPLAIN 155
+#define TK_ANALYZE 156
+#define TK_VERBOSE 157
+#define TK_NK_BOOL 158
+#define TK_RATIO 159
+#define TK_COMPACT 160
+#define TK_VNODES 161
+#define TK_IN 162
+#define TK_OUTPUTTYPE 163
+#define TK_AGGREGATE 164
+#define TK_BUFSIZE 165
+#define TK_STREAM 166
+#define TK_INTO 167
+#define TK_TRIGGER 168
+#define TK_AT_ONCE 169
+#define TK_WINDOW_CLOSE 170
+#define TK_WATERMARK 171
+#define TK_KILL 172
+#define TK_CONNECTION 173
+#define TK_TRANSACTION 174
+#define TK_MERGE 175
+#define TK_VGROUP 176
+#define TK_REDISTRIBUTE 177
+#define TK_SPLIT 178
+#define TK_SYNCDB 179
+#define TK_NULL 180
+#define TK_NK_QUESTION 181
+#define TK_NK_ARROW 182
+#define TK_ROWTS 183
+#define TK_TBNAME 184
+#define TK_QSTARTTS 185
+#define TK_QENDTS 186
+#define TK_WSTARTTS 187
+#define TK_WENDTS 188
+#define TK_WDURATION 189
+#define TK_CAST 190
+#define TK_NOW 191
+#define TK_TODAY 192
+#define TK_TIMEZONE 193
+#define TK_COUNT 194
+#define TK_FIRST 195
+#define TK_LAST 196
+#define TK_LAST_ROW 197
+#define TK_BETWEEN 198
+#define TK_IS 199
+#define TK_NK_LT 200
+#define TK_NK_GT 201
+#define TK_NK_LE 202
+#define TK_NK_GE 203
+#define TK_NK_NE 204
+#define TK_MATCH 205
+#define TK_NMATCH 206
+#define TK_CONTAINS 207
+#define TK_JOIN 208
+#define TK_INNER 209
+#define TK_SELECT 210
+#define TK_DISTINCT 211
+#define TK_WHERE 212
+#define TK_PARTITION 213
+#define TK_BY 214
+#define TK_SESSION 215
+#define TK_STATE_WINDOW 216
+#define TK_SLIDING 217
+#define TK_FILL 218
+#define TK_VALUE 219
+#define TK_NONE 220
+#define TK_PREV 221
+#define TK_LINEAR 222
+#define TK_NEXT 223
+#define TK_GROUP 224
+#define TK_HAVING 225
+#define TK_ORDER 226
+#define TK_SLIMIT 227
+#define TK_SOFFSET 228
+#define TK_LIMIT 229
+#define TK_OFFSET 230
+#define TK_ASC 231
+#define TK_NULLS 232
+#define TK_ID 233
+#define TK_NK_BITNOT 234
+#define TK_INSERT 235
+#define TK_VALUES 236
+#define TK_IMPORT 237
+#define TK_NK_SEMI 238
+#define TK_FILE 239

 #define TK_NK_SPACE 300
 #define TK_NK_COMMENT 301
@@ -64,7 +64,7 @@ typedef struct SCatalogReq {
 } SCatalogReq;

 typedef struct SMetaData {
-  SArray *pTableMeta;  // SArray<STableMeta>
+  SArray *pTableMeta;  // SArray<STableMeta*>
   SArray *pDbVgroup;   // SArray<SArray<SVgroupInfo>*>
   SArray *pTableHash;  // SArray<SVgroupInfo>
   SArray *pUdfList;    // SArray<SFuncInfo>
@@ -101,6 +101,7 @@ typedef struct SDbVgVersion {
 typedef struct STbSVersion {
   char*   tbFName;
   int32_t sver;
+  int32_t tver;
 } STbSVersion;

 typedef struct SUserAuthVersion {
@@ -248,6 +249,8 @@ int32_t catalogGetTableHashVgroup(SCatalog* pCatalog, void * pTransporter, const
  */
 int32_t catalogGetAllMeta(SCatalog* pCatalog, void *pTransporter, const SEpSet* pMgmtEps, const SCatalogReq* pReq, SMetaData* pRsp);

+int32_t catalogAsyncGetAllMeta(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, uint64_t reqId, const SCatalogReq* pReq, catalogCallback fp, void* param, int64_t* jobId);
+
 int32_t catalogGetQnodeList(SCatalog* pCatalog, void *pTransporter, const SEpSet* pMgmtEps, SArray* pQnodeList);

 int32_t catalogGetExpiredSTables(SCatalog* pCatalog, SSTableMetaVersion **stables, uint32_t *num);
@@ -267,6 +270,9 @@ int32_t catalogChkAuth(SCatalog* pCtg, void *pRpc, const SEpSet* pMgmtEps, const
 int32_t catalogUpdateUserAuthInfo(SCatalog* pCtg, SGetUserAuthRsp* pAuth);

+
+int32_t ctgdLaunchAsyncCall(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, uint64_t reqId);
+
 /**
  * Destroy catalog and release all resources
  */
@@ -50,6 +50,7 @@ typedef struct SDatabaseOptions {
   int32_t    numOfVgroups;
   int8_t     singleStable;
   SNodeList* pRetentions;
+  int8_t     schemaless;
 } SDatabaseOptions;

 typedef struct SCreateDatabaseStmt {
@@ -260,6 +261,13 @@ typedef struct SDropTopicStmt {
   bool ignoreNotExists;
 } SDropTopicStmt;

+typedef struct SDropCGroupStmt {
+  ENodeType type;
+  char      topicName[TSDB_TABLE_NAME_LEN];
+  char      cgroup[TSDB_CGROUP_LEN];
+  bool      ignoreNotExists;
+} SDropCGroupStmt;
+
 typedef struct SAlterLocalStmt {
   ENodeType type;
   char      config[TSDB_DNODE_CONFIG_LEN];
@@ -131,6 +131,7 @@ typedef enum ENodeType {
   QUERY_NODE_DROP_MNODE_STMT,
   QUERY_NODE_CREATE_TOPIC_STMT,
   QUERY_NODE_DROP_TOPIC_STMT,
+  QUERY_NODE_DROP_CGROUP_STMT,
   QUERY_NODE_ALTER_LOCAL_STMT,
   QUERY_NODE_EXPLAIN_STMT,
   QUERY_NODE_DESCRIBE_STMT,
@@ -213,6 +214,7 @@ typedef enum ENodeType {
   QUERY_NODE_PHYSICAL_PLAN_FILL,
   QUERY_NODE_PHYSICAL_PLAN_SESSION_WINDOW,
   QUERY_NODE_PHYSICAL_PLAN_STREAM_SESSION_WINDOW,
+  QUERY_NODE_PHYSICAL_PLAN_STREAM_FINAL_SESSION_WINDOW,
   QUERY_NODE_PHYSICAL_PLAN_STATE_WINDOW,
   QUERY_NODE_PHYSICAL_PLAN_PARTITION,
   QUERY_NODE_PHYSICAL_PLAN_DISPATCH,
@@ -243,7 +245,6 @@ typedef struct SNodeList {

 #define SNodeptr void*

-int32_t  nodesNodeSize(ENodeType type);
 SNodeptr nodesMakeNode(ENodeType type);
 void     nodesDestroyNode(SNodeptr pNode);
@@ -31,6 +31,7 @@ typedef struct SLogicNode {
   SNodeList*         pChildren;
   struct SLogicNode* pParent;
   int32_t            optimizedFlag;
+  uint8_t            precision;
 } SLogicNode;

 typedef enum EScanType { SCAN_TYPE_TAG = 1, SCAN_TYPE_TABLE, SCAN_TYPE_SYSTEM_TABLE, SCAN_TYPE_STREAM } EScanType;
@@ -61,6 +62,7 @@ typedef struct SJoinLogicNode {
   SLogicNode node;
   EJoinType  joinType;
   SNode*     pOnConditions;
+  bool       isSingleTableJoin;
 } SJoinLogicNode;

 typedef struct SAggLogicNode {
@@ -132,6 +132,7 @@ typedef struct STableNode {
   char    tableName[TSDB_TABLE_NAME_LEN];
   char    tableAlias[TSDB_TABLE_NAME_LEN];
   uint8_t precision;
+  bool    singleTable;
 } STableNode;

 struct STableMeta;
@@ -242,6 +243,7 @@ typedef struct SSelectStmt {
   bool hasAggFuncs;
   bool hasRepeatScanFuncs;
+  bool hasIndefiniteRowsFunc;
   bool hasSelectFunc;
   bool hasSelectValFunc;
 } SSelectStmt;
@@ -48,11 +48,12 @@ typedef struct SParseContext {
 } SParseContext;

 int32_t qParseSql(SParseContext* pCxt, SQuery** pQuery);
-bool    isInsertSql(const char* pStr, size_t length);
+bool    qIsInsertSql(const char* pStr, size_t length);

 void    qDestroyQuery(SQuery* pQueryNode);

 int32_t qExtractResultSchema(const SNode* pRoot, int32_t* numOfCols, SSchema** pSchema);
+int32_t qSetSTableIdForRSma(SNode* pStmt, int64_t uid);

 int32_t qBuildStmtOutput(SQuery* pQuery, SHashObj* pVgHash, SHashObj* pBlockHash);
 int32_t qResetStmtDataBlock(void* block, bool keepBuf);
@@ -180,7 +180,7 @@ char* jobTaskStatusStr(int32_t status);

 SSchema createSchema(int8_t type, int32_t bytes, col_id_t colId, const char* name);

-extern int32_t (*queryBuildMsg[TDMT_MAX])(void* input, char** msg, int32_t msgSize, int32_t* msgLen);
+extern int32_t (*queryBuildMsg[TDMT_MAX])(void *input, char **msg, int32_t msgSize, int32_t *msgLen, void*(*mallocFp)(int32_t));
 extern int32_t (*queryProcessMsgRsp[TDMT_MAX])(void* output, char* msg, int32_t msgSize);

 #define SET_META_TYPE_NULL(t) (t) = META_TYPE_NULL_TABLE
@@ -191,7 +191,7 @@ extern int32_t (*queryProcessMsgRsp[TDMT_MAX])(void* output, char* msg, int32_t
 #define NEED_CLIENT_RM_TBLMETA_ERROR(_code)                                              \
   ((_code) == TSDB_CODE_PAR_TABLE_NOT_EXIST || (_code) == TSDB_CODE_VND_TB_NOT_EXIST ||  \
    (_code) == TSDB_CODE_PAR_INVALID_COLUMNS_NUM || (_code) == TSDB_CODE_PAR_INVALID_COLUMN || \
-   (_code) == TSDB_CODE_PAR_TAGS_NOT_MATCHED)
+   (_code) == TSDB_CODE_PAR_TAGS_NOT_MATCHED || (_code == TSDB_CODE_PAR_VALUE_TOO_LONG))
 #define NEED_CLIENT_REFRESH_VG_ERROR(_code) \
   ((_code) == TSDB_CODE_VND_HASH_MISMATCH || (_code) == TSDB_CODE_VND_INVALID_VGROUP_ID)
 #define NEED_CLIENT_REFRESH_TBLMETA_ERROR(_code) ((_code) == TSDB_CODE_TDB_TABLE_RECREATED)
@@ -23,6 +23,8 @@ extern "C" {
 #include "catalog.h"
 #include "planner.h"

+extern tsem_t schdRspSem;
+
 typedef struct SSchedulerCfg {
   uint32_t maxJobNum;
   int32_t  maxNodeTableNum;
@@ -54,8 +56,6 @@ typedef struct SQueryProfileSummary {
 typedef struct SQueryResult {
   int32_t  code;
   uint64_t numOfRows;
-  int32_t  msgSize;
-  char    *msg;
   void    *res;
 } SQueryResult;
@@ -64,6 +64,15 @@ typedef struct STaskInfo {
   SSubQueryMsg *msg;
 } STaskInfo;

+typedef struct SSchdFetchParam {
+  void   **pData;
+  int32_t *code;
+} SSchdFetchParam;
+
+typedef void (*schedulerExecCallback)(SQueryResult* pResult, void* param, int32_t code);
+typedef void (*schedulerFetchCallback)(void* pResult, void* param, int32_t code);
+

 int32_t schedulerInit(SSchedulerCfg *cfg);

 /**
@@ -80,7 +89,8 @@ int32_t schedulerExecJob(void *transport, SArray *nodeList, SQueryPlan *pDag, in
  * @param pNodeList Qnode/Vnode address list, element is SQueryNodeAddr
  * @return
  */
-int32_t schedulerAsyncExecJob(void *transport, SArray *pNodeList, SQueryPlan* pDag, const char* sql, int64_t *pJob);
+int32_t schedulerAsyncExecJob(void *pTrans, SArray *pNodeList, SQueryPlan *pDag, int64_t *pJob, const char *sql,
+                              int64_t startTs, schedulerExecCallback fp, void* param);

 /**
  * Fetch query result from the remote query executor
@@ -90,6 +100,8 @@ int32_t schedulerAsyncExecJob(void *transport, SArray *pNodeList, SQueryPlan* pD
  */
 int32_t schedulerFetchRows(int64_t job, void **data);

+int32_t schedulerAsyncFetchRows(int64_t job, schedulerFetchCallback fp, void* param);
+
 int32_t schedulerGetTasksStatus(int64_t job, SArray *pSub);
@@ -108,23 +120,8 @@ void schedulerFreeJob(int64_t job);

 void schedulerDestroy(void);

-/**
- * convert dag to task list
- * @param pDag
- * @param pTasks SArray**<STaskInfo>
- * @return
- */
-int32_t schedulerConvertDagToTaskList(SQueryPlan* pDag, SArray **pTasks);
-
-/**
- * make one task info's multiple copies
- * @param src
- * @param dst SArray**<STaskInfo>
- * @return
- */
-int32_t schedulerCopyTask(STaskInfo *src, SArray **dst, int32_t copyNum);
-
-void schedulerFreeTaskList(SArray *taskList);
+void schdExecCallback(SQueryResult* pResult, void* param, int32_t code);
+void schdFetchCallback(void* pResult, void* param, int32_t code);

 #ifdef __cplusplus
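To make the new asynchronous API above concrete, here is a hedged sketch of how a caller might drive schedulerAsyncExecJob with the semaphore handshake implied by schdRspSem; demoExecCb and demoAsyncExec are hypothetical names, and the transport, node list, and plan are assumed to come from the caller:

static void demoExecCb(SQueryResult *pResult, void *param, int32_t code) {
  *(SQueryResult *)param = *pResult;  // hand the result back to the waiting thread
  tsem_post(&schdRspSem);             // wake the thread blocked in demoAsyncExec
}

static int32_t demoAsyncExec(void *pTrans, SArray *pNodeList, SQueryPlan *pDag,
                             const char *sql, int64_t startTs) {
  SQueryResult res = {0};
  int64_t      job = 0;
  tsem_init(&schdRspSem, 0, 0);
  int32_t code = schedulerAsyncExecJob(pTrans, pNodeList, pDag, &job, sql, startTs, demoExecCb, &res);
  if (code == TSDB_CODE_SUCCESS) {
    tsem_wait(&schdRspSem);  // block until demoExecCb fires
    code = res.code;
  }
  if (job != 0) schedulerFreeJob(job);  // always release the job handle
  return code;
}

This is the same pattern the client code adopts in scheduleAsyncQuery further down in this commit: the callback only copies the result and posts the semaphore, so all error handling stays on the calling thread.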
@@ -84,15 +84,16 @@ typedef struct {
 } SStreamCheckpoint;

 static FORCE_INLINE SStreamDataSubmit* streamDataSubmitNew(SSubmitReq* pReq) {
-  SStreamDataSubmit* pDataSubmit = (SStreamDataSubmit*)taosMemoryCalloc(1, sizeof(SStreamDataSubmit));
+  SStreamDataSubmit* pDataSubmit = (SStreamDataSubmit*)taosAllocateQitem(sizeof(SStreamDataSubmit), DEF_QITEM);
   if (pDataSubmit == NULL) return NULL;
-  pDataSubmit->data = pReq;
   pDataSubmit->dataRef = (int32_t*)taosMemoryMalloc(sizeof(int32_t));
-  if (pDataSubmit->data == NULL) goto FAIL;
   if (pDataSubmit->dataRef == NULL) goto FAIL;
+  pDataSubmit->data = pReq;
   *pDataSubmit->dataRef = 1;
+  pDataSubmit->type = STREAM_INPUT__DATA_SUBMIT;
   return pDataSubmit;
 FAIL:
-  taosMemoryFree(pDataSubmit);
+  taosFreeQitem(pDataSubmit);
   return NULL;
 }

@@ -107,7 +108,6 @@ static FORCE_INLINE void streamDataSubmitRefDec(SStreamDataSubmit* pDataSubmit)
   if (ref == 0) {
     taosMemoryFree(pDataSubmit->data);
     taosMemoryFree(pDataSubmit->dataRef);
     taosFreeQitem(pDataSubmit);
   }
 }
@@ -142,6 +142,7 @@ typedef void FTbSink(SStreamTask* pTask, void* vnode, int64_t ver, void* data);

 typedef struct {
   int64_t         stbUid;
+  char            stbFullName[TSDB_TABLE_FNAME_LEN];
   SSchemaWrapper* pSchemaWrapper;
   // not applicable to encoder and decoder
   void*           vnode;
@@ -80,6 +80,7 @@ typedef struct SFsmCbMeta {
   uint64_t seqNum;
   SyncTerm term;
+  SyncTerm currentTerm;
   uint64_t flag;
 } SFsmCbMeta;

 typedef struct SReConfigCbMeta {

@@ -87,6 +88,9 @@ typedef struct SReConfigCbMeta {
   SyncIndex index;
   SyncTerm  term;
+  SyncTerm  currentTerm;
+  SSyncCfg  oldCfg;
+  bool      isDrop;
   uint64_t  flag;
 } SReConfigCbMeta;

 typedef struct SSyncFSM {
@@ -146,6 +150,7 @@ typedef struct SSyncLogStore {
 } SSyncLogStore;

 typedef struct SSyncInfo {
+  bool        isStandBy;
   SyncGroupId vgId;
   SSyncCfg    syncCfg;
   char        path[TSDB_FILENAME_LEN];
@@ -160,8 +165,8 @@ int32_t syncInit();
 void    syncCleanUp();
 int64_t syncOpen(const SSyncInfo* pSyncInfo);
 void    syncStart(int64_t rid);
-void    syncStartStandBy(int64_t rid);
 void    syncStop(int64_t rid);
+int32_t syncSetStandby(int64_t rid);
 int32_t syncReconfig(int64_t rid, const SSyncCfg* pSyncCfg);
 ESyncState  syncGetMyRole(int64_t rid);
 const char* syncGetMyRoleStr(int64_t rid);
@@ -173,6 +178,10 @@ bool syncEnvIsStart();
 const char* syncStr(ESyncState state);
 bool        syncIsRestoreFinish(int64_t rid);

+// to be moved to static
+void syncStartNormal(int64_t rid);
+void syncStartStandBy(int64_t rid);
+
 #ifdef __cplusplus
 }
 #endif
@@ -89,19 +89,18 @@ typedef struct SRpcInit {
 typedef struct {
   void *val;
   int32_t (*clone)(void *src, void **dst);
-  void (*freeFunc)(const void *arg);
 } SRpcCtxVal;

 typedef struct {
   int32_t msgType;
   void *  val;
   int32_t (*clone)(void *src, void **dst);
-  void (*freeFunc)(const void *arg);
 } SRpcBrokenlinkVal;

 typedef struct {
   SHashObj *        args;
   SRpcBrokenlinkVal brokenVal;
+  void (*freeFunc)(const void *arg);
 } SRpcCtx;

 int32_t rpcInit();
@@ -641,6 +641,7 @@ int32_t* taosGetErrno();
 #define TSDB_CODE_PAR_NOT_ALLOWED_WIN_QUERY   TAOS_DEF_ERROR_CODE(0, 0x2650)
 #define TSDB_CODE_PAR_INVALID_DROP_COL        TAOS_DEF_ERROR_CODE(0, 0x2651)
 #define TSDB_CODE_PAR_INVALID_COL_JSON        TAOS_DEF_ERROR_CODE(0, 0x2652)
+#define TSDB_CODE_PAR_VALUE_TOO_LONG          TAOS_DEF_ERROR_CODE(0, 0x2653)

 //planner
 #define TSDB_CODE_PLAN_INTERNAL_ERROR         TAOS_DEF_ERROR_CODE(0, 0x2700)
@@ -334,9 +334,12 @@ typedef enum ELogicConditionType {
 #define TSDB_DB_STREAM_MODE_OFF       0
 #define TSDB_DB_STREAM_MODE_ON        1
 #define TSDB_DEFAULT_DB_STREAM_MODE   0
-#define TSDB_DB_SINGLE_STABLE_ON      0
-#define TSDB_DB_SINGLE_STABLE_OFF     1
-#define TSDB_DEFAULT_DB_SINGLE_STABLE 0
+#define TSDB_DB_SINGLE_STABLE_ON      1
+#define TSDB_DB_SINGLE_STABLE_OFF     0
+#define TSDB_DEFAULT_DB_SINGLE_STABLE TSDB_DB_SINGLE_STABLE_OFF
+#define TSDB_DB_SCHEMALESS_ON         1
+#define TSDB_DB_SCHEMALESS_OFF        0
+#define TSDB_DEFAULT_DB_SCHEMALESS    TSDB_DB_SCHEMALESS_OFF

 #define TSDB_MIN_ROLLUP_FILE_FACTOR   0
 #define TSDB_MAX_ROLLUP_FILE_FACTOR   1
@@ -0,0 +1,71 @@ (new file)
/*
 * Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
 *
 * This program is free software: you can use, redistribute, and/or modify
 * it under the terms of the GNU Affero General Public License, version 3
 * or later ("AGPL"), as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but WITHOUT
 * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
 * FITNESS FOR A PARTICULAR PURPOSE.
 *
 * You should have received a copy of the GNU Affero General Public License
 * along with this program. If not, see <http://www.gnu.org/licenses/>.
 */

/*
 * include/tdigest.c
 *
 * Copyright (c) 2016, Usman Masood <usmanm at fastmail dot fm>
 */

#ifndef TDIGEST_H
#define TDIGEST_H

#ifndef M_PI
#define M_PI 3.14159265358979323846264338327950288 /* pi */
#endif

#define DOUBLE_MAX 1.79e+308

#define ADDITION_CENTROID_NUM 2
#define COMPRESSION 300
#define GET_CENTROID(compression) (ceil(compression * M_PI / 2) + 1 + ADDITION_CENTROID_NUM)
#define GET_THRESHOLD(compression) (7.5 + 0.37 * compression - 2e-4 * pow(compression, 2))
#define TDIGEST_SIZE(compression) (sizeof(TDigest) + sizeof(SCentroid) * GET_CENTROID(compression) + sizeof(SPt) * GET_THRESHOLD(compression))

typedef struct SCentroid {
  double  mean;
  int64_t weight;
} SCentroid;

typedef struct SPt {
  double  value;
  int64_t weight;
} SPt;

typedef struct TDigest {
  double  compression;
  int32_t threshold;
  int64_t size;

  int64_t total_weight;
  double  min;
  double  max;

  int32_t num_buffered_pts;
  SPt    *buffered_pts;

  int32_t    num_centroids;
  SCentroid *centroids;
} TDigest;

TDigest *tdigestNewFrom(void* pBuf, int32_t compression);
void     tdigestAdd(TDigest *t, double x, int64_t w);
void     tdigestMerge(TDigest *t1, TDigest *t2);
double   tdigestQuantile(TDigest *t, double q);
void     tdigestCompress(TDigest *t);
void     tdigestFreeFrom(TDigest *t);
void     tdigestAutoFill(TDigest* t, int32_t compression);

#endif /* TDIGEST_H */
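A hedged usage sketch for the t-digest interface declared above; tdigestExample is a hypothetical caller, the buffer sizing simply follows the header's own TDIGEST_SIZE macro, and whether tdigestQuantile flushes buffered points itself is an assumption (if not, tdigestCompress would be called first):

#include <math.h>    // the GET_CENTROID/GET_THRESHOLD macros use ceil() and pow()
#include <stdio.h>
#include <stdlib.h>

void tdigestExample(void) {
  // allocate one contiguous buffer sized by the header's macro
  void    *buf = calloc(1, (size_t)TDIGEST_SIZE(COMPRESSION));
  TDigest *t   = tdigestNewFrom(buf, COMPRESSION);

  // feed 10000 uniformly spaced points, each with weight 1
  for (int i = 1; i <= 10000; i++) {
    tdigestAdd(t, (double)i, 1);
  }

  // query an approximate 99th percentile from the sketch
  printf("approx p99: %f\n", tdigestQuantile(t, 0.99));
  free(buf);  // we own the buffer, so plain free() rather than tdigestFreeFrom()
}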
@@ -140,7 +140,7 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) {
   STscObj *pTscObj = (STscObj *)acquireTscObj(pRsp->connKey.tscRid);
   if (NULL == pTscObj) {
     tscDebug("tscObj rid %" PRIx64 " not exist", pRsp->connKey.tscRid);
-  } else {
+  } else {
     if (pRsp->query->totalDnodes > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &pRsp->query->epSet)) {
       updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, &pRsp->query->epSet);
     }
@@ -582,8 +582,15 @@ void hbClearReqInfo(SAppHbMgr *pAppHbMgr) {
   }
 }

+void hbThreadFuncUnexpectedStopped(void) {
+  atomic_store_8(&clientHbMgr.threadStop, 2);
+}
+
 static void *hbThreadFunc(void *param) {
   setThreadName("hb");
+#ifdef WINDOWS
+  atexit(hbThreadFuncUnexpectedStopped);
+#endif
   while (1) {
     int8_t threadStop = atomic_val_compare_exchange_8(&clientHbMgr.threadStop, 1, 2);
     if (1 == threadStop) {
@@ -289,10 +289,56 @@ void setResPrecision(SReqResultInfo* pResInfo, int32_t precision) {
   pResInfo->precision = precision;
 }

+int32_t scheduleAsyncQuery(SRequestObj* pRequest, SQueryPlan* pDag, SArray* pNodeList, void** pRes) {
+  void* pTransporter = pRequest->pTscObj->pAppInfo->pTransporter;
+
+  tsem_init(&schdRspSem, 0, 0);
+
+  SQueryResult res = {.code = 0, .numOfRows = 0};
+  int32_t      code = schedulerAsyncExecJob(pTransporter, pNodeList, pDag, &pRequest->body.queryJob, pRequest->sqlstr,
+                                            pRequest->metric.start, schdExecCallback, &res);
+  while (true) {
+    if (code != TSDB_CODE_SUCCESS) {
+      if (pRequest->body.queryJob != 0) {
+        schedulerFreeJob(pRequest->body.queryJob);
+      }
+
+      *pRes = res.res;
+
+      pRequest->code = code;
+      terrno = code;
+      return pRequest->code;
+    } else {
+      tsem_wait(&schdRspSem);
+
+      if (res.code) {
+        code = res.code;
+      } else {
+        break;
+      }
+    }
+  }
+
+  if (TDMT_VND_SUBMIT == pRequest->type || TDMT_VND_CREATE_TABLE == pRequest->type) {
+    pRequest->body.resInfo.numOfRows = res.numOfRows;
+
+    if (pRequest->body.queryJob != 0) {
+      schedulerFreeJob(pRequest->body.queryJob);
+    }
+  }
+
+  *pRes = res.res;
+
+  pRequest->code = res.code;
+  terrno = res.code;
+  return pRequest->code;
+}
+
 int32_t scheduleQuery(SRequestObj* pRequest, SQueryPlan* pDag, SArray* pNodeList, void** pRes) {
   void* pTransporter = pRequest->pTscObj->pAppInfo->pTransporter;

-  SQueryResult res = {.code = 0, .numOfRows = 0, .msgSize = ERROR_MSG_BUF_DEFAULT_SIZE, .msg = pRequest->msgBuf};
+  SQueryResult res = {.code = 0, .numOfRows = 0};
   int32_t      code = schedulerExecJob(pTransporter, pNodeList, pDag, &pRequest->body.queryJob, pRequest->sqlstr,
                                        pRequest->metric.start, &res);
   if (code != TSDB_CODE_SUCCESS) {
@@ -367,7 +413,7 @@ int32_t validateSversion(SRequestObj* pRequest, void* res) {

     for (int32_t i = 0; i < tbNum; ++i) {
       STbVerInfo* tbInfo = taosArrayGet(pTbArray, i);
-      STbSVersion tbSver = {.tbFName = tbInfo->tbFName, .sver = tbInfo->sversion};
+      STbSVersion tbSver = {.tbFName = tbInfo->tbFName, .sver = tbInfo->sversion, .tver = tbInfo->tversion};
       taosArrayPush(pArray, &tbSver);
     }
   }
@@ -699,12 +745,12 @@ void processMsgFromServer(void* parent, SRpcMsg* pMsg, SEpSet* pEpSet) {

   pRequest->metric.rsp = taosGetTimestampUs();

-  STscObj* pTscObj = pRequest->pTscObj;
-  if (pEpSet) {
-    if (!isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, pEpSet)) {
-      updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, pEpSet);
-    }
-  }
+  //STscObj* pTscObj = pRequest->pTscObj;
+  //if (pEpSet) {
+  //  if (!isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, pEpSet)) {
+  //    updateEpSet_s(&pTscObj->pAppInfo->mgmtEp, pEpSet);
+  //  }
+  //}

   /*
    * There is not response callback function for submit response.
@@ -796,7 +842,58 @@ void doSetOneRowPtr(SReqResultInfo* pResultInfo) {
   }
 }

+void* doAsyncFetchRows(SRequestObj* pRequest, bool setupOneRowPtr, bool convertUcs4) {
+  assert(pRequest != NULL);
+
+  SReqResultInfo* pResultInfo = &pRequest->body.resInfo;
+  if (pResultInfo->pData == NULL || pResultInfo->current >= pResultInfo->numOfRows) {
+    // All data has returned to App already, no need to try again
+    if (pResultInfo->completed) {
+      pResultInfo->numOfRows = 0;
+      return NULL;
+    }
+
+    tsem_init(&schdRspSem, 0, 0);
+
+    SReqResultInfo* pResInfo = &pRequest->body.resInfo;
+    SSchdFetchParam param = {.pData = (void**)&pResInfo->pData, .code = &pRequest->code};
+    pRequest->code = schedulerAsyncFetchRows(pRequest->body.queryJob, schdFetchCallback, &param);
+    if (pRequest->code != TSDB_CODE_SUCCESS) {
+      pResultInfo->numOfRows = 0;
+      return NULL;
+    }
+
+    tsem_wait(&schdRspSem);
+    if (pRequest->code != TSDB_CODE_SUCCESS) {
+      pResultInfo->numOfRows = 0;
+      return NULL;
+    }
+
+    pRequest->code = setQueryResultFromRsp(&pRequest->body.resInfo, (SRetrieveTableRsp*)pResInfo->pData, convertUcs4);
+    if (pRequest->code != TSDB_CODE_SUCCESS) {
+      pResultInfo->numOfRows = 0;
+      return NULL;
+    }
+
+    tscDebug("0x%" PRIx64 " fetch results, numOfRows:%d total Rows:%" PRId64 ", complete:%d, reqId:0x%" PRIx64,
+             pRequest->self, pResInfo->numOfRows, pResInfo->totalRows, pResInfo->completed, pRequest->requestId);
+
+    if (pResultInfo->numOfRows == 0) {
+      return NULL;
+    }
+  }
+
+  if (setupOneRowPtr) {
+    doSetOneRowPtr(pResultInfo);
+    pResultInfo->current += 1;
+  }
+
+  return pResultInfo->row;
+}
+
 void* doFetchRows(SRequestObj* pRequest, bool setupOneRowPtr, bool convertUcs4) {
+  //return doAsyncFetchRows(pRequest, setupOneRowPtr, convertUcs4);
   assert(pRequest != NULL);

   SReqResultInfo* pResultInfo = &pRequest->body.resInfo;
@@ -48,7 +48,8 @@ int32_t stmtSwitchStatus(STscStmt* pStmt, STMT_STATUS newStatus) {
       break;
     case STMT_EXECUTE:
       if (STMT_TYPE_QUERY == pStmt->sql.type) {
-        if (STMT_STATUS_NE(ADD_BATCH) && STMT_STATUS_NE(FETCH_FIELDS) && STMT_STATUS_NE(BIND) && STMT_STATUS_NE(BIND_COL)) {
+        if (STMT_STATUS_NE(ADD_BATCH) && STMT_STATUS_NE(FETCH_FIELDS) && STMT_STATUS_NE(BIND) &&
+            STMT_STATUS_NE(BIND_COL)) {
           code = TSDB_CODE_TSC_STMT_API_ERROR;
         }
       } else {
@@ -230,22 +231,6 @@ int32_t stmtParseSql(STscStmt* pStmt) {
     pStmt->sql.type = STMT_TYPE_QUERY;
   }

-  /*
-  switch (nodeType(pStmt->sql.pQuery->pRoot)) {
-    case QUERY_NODE_VNODE_MODIF_STMT:
-      if (0 == pStmt->sql.type) {
-        pStmt->sql.type = STMT_TYPE_INSERT;
-      }
-      break;
-    case QUERY_NODE_SELECT_STMT:
-      pStmt->sql.type = STMT_TYPE_QUERY;
-      break;
-    default:
-      tscError("not supported stmt type %d", nodeType(pStmt->sql.pQuery->pRoot));
-      STMT_ERR_RET(TSDB_CODE_TSC_STMT_CLAUSE_ERROR);
-  }
-  */
-
   return TSDB_CODE_SUCCESS;
 }
@@ -823,7 +808,7 @@ _return:
     code = stmtUpdateTableUid(pStmt, pRsp);
-  }
+  }

   tFreeSSubmitRsp(pRsp);

   ++pStmt->sql.runTimes;
@@ -861,7 +846,7 @@ int stmtIsInsert(TAOS_STMT* stmt, int* insert) {
   if (pStmt->sql.type) {
     *insert = (STMT_TYPE_INSERT == pStmt->sql.type || STMT_TYPE_MULTI_INSERT == pStmt->sql.type);
   } else {
-    *insert = isInsertSql(pStmt->sql.sqlStr, 0);
+    *insert = qIsInsertSql(pStmt->sql.sqlStr, 0);
   }

   return TSDB_CODE_SUCCESS;
@@ -123,7 +123,7 @@ static const SSysDbTableSchema userStbsSchema[] = {
   {.name = "table_comment", .bytes = 1024 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
 };

-static const SSysDbTableSchema userStreamsSchema[] = {
+static const SSysDbTableSchema streamSchema[] = {
   {.name = "stream_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
   {.name = "user_name", .bytes = 23, .type = TSDB_DATA_TYPE_VARCHAR},
   {.name = "dest_table", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
@@ -196,12 +196,14 @@ static const SSysDbTableSchema vgroupsSchema[] = {
   {.name = "status", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
   {.name = "nfiles", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
   {.name = "file_size", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
+  {.name = "tsma", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT},
 };

 static const SSysDbTableSchema smaSchema[] = {
   {.name = "sma_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
   {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP},
   {.name = "stable_name", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
+  {.name = "vgroup_id", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
 };

 static const SSysDbTableSchema transSchema[] = {
@@ -232,7 +234,7 @@ static const SSysTableMeta infosMeta[] = {
   {TSDB_INS_TABLE_USER_FUNCTIONS, userFuncSchema, tListLen(userFuncSchema)},
   {TSDB_INS_TABLE_USER_INDEXES, userIdxSchema, tListLen(userIdxSchema)},
   {TSDB_INS_TABLE_USER_STABLES, userStbsSchema, tListLen(userStbsSchema)},
-  {TSDB_INS_TABLE_USER_STREAMS, userStreamsSchema, tListLen(userStreamsSchema)},
+  {TSDB_PERFS_TABLE_STREAMS, streamSchema, tListLen(streamSchema)},
   {TSDB_INS_TABLE_USER_TABLES, userTblsSchema, tListLen(userTblsSchema)},
   {TSDB_INS_TABLE_USER_TABLE_DISTRIBUTED, userTblDistSchema, tListLen(userTblDistSchema)},
   {TSDB_INS_TABLE_USER_USERS, userUsersSchema, tListLen(userUsersSchema)},
@@ -305,17 +307,7 @@ static const SSysDbTableSchema querySchema[] = {
   {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR},
 };

-static const SSysDbTableSchema streamSchema[] = {
-  {.name = "stream_name", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
-  {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP},
-  {.name = "sql", .bytes = TSDB_SHOW_SQL_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
-  {.name = "status", .bytes = 20 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_BINARY},
-  {.name = "source_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
-  {.name = "target_db", .bytes = SYSTABLE_SCH_DB_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
-  {.name = "target_table", .bytes = SYSTABLE_SCH_TABLE_NAME_LEN, .type = TSDB_DATA_TYPE_VARCHAR},
-  {.name = "watermark", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT},
-  {.name = "trigger", .bytes = 4, .type = TSDB_DATA_TYPE_INT},
-};
-

 static const SSysTableMeta perfsMeta[] = {
   {TSDB_PERFS_TABLE_CONNECTIONS, connectionsSchema, tListLen(connectionsSchema)},
@@ -275,8 +275,10 @@ int32_t colDataMergeCol(SColumnInfoData* pColumnInfoData, uint32_t numOfRow1, in

     doBitmapMerge(pColumnInfoData, numOfRow1, pSource, numOfRow2);

-    int32_t offset = pColumnInfoData->info.bytes * numOfRow1;
-    memcpy(pColumnInfoData->pData + offset, pSource->pData, pSource->info.bytes * numOfRow2);
+    if (pSource->pData) {
+      int32_t offset = pColumnInfoData->info.bytes * numOfRow1;
+      memcpy(pColumnInfoData->pData + offset, pSource->pData, pSource->info.bytes * numOfRow2);
+    }
   }

   return numOfRow1 + numOfRow2;
@@ -319,14 +321,16 @@ int32_t colDataAssign(SColumnInfoData* pColumnInfoData, const SColumnInfoData* p
     pColumnInfoData->nullbitmap = tmp;
     memcpy(pColumnInfoData->nullbitmap, pSource->nullbitmap, BitmapLen(numOfRows));

-    int32_t newSize = numOfRows * pColumnInfoData->info.bytes;
-    tmp = taosMemoryRealloc(pColumnInfoData->pData, newSize);
-    if (tmp == NULL) {
-      return TSDB_CODE_OUT_OF_MEMORY;
-    }
+    if (pSource->pData) {
+      int32_t newSize = numOfRows * pColumnInfoData->info.bytes;
+      tmp = taosMemoryRealloc(pColumnInfoData->pData, newSize);
+      if (tmp == NULL) {
+        return TSDB_CODE_OUT_OF_MEMORY;
+      }

-    pColumnInfoData->pData = tmp;
-    memcpy(pColumnInfoData->pData, pSource->pData, pSource->info.bytes * numOfRows);
+      pColumnInfoData->pData = tmp;
+      memcpy(pColumnInfoData->pData, pSource->pData, pSource->info.bytes * numOfRows);
+    }
   }

   pColumnInfoData->hasNull = pSource->hasNull;
@@ -350,7 +354,7 @@ int32_t blockDataUpdateTsWindow(SSDataBlock* pDataBlock, int32_t tsColumnIndex)
     return -1;
   }

-  int32_t index = (tsColumnIndex == -1)? 0:tsColumnIndex;
+  int32_t index = (tsColumnIndex == -1) ? 0 : tsColumnIndex;
   SColumnInfoData* pColInfoData = taosArrayGet(pDataBlock->pDataBlock, index);
   if (pColInfoData->info.type != TSDB_DATA_TYPE_TIMESTAMP) {
     return 0;
@@ -599,8 +603,8 @@ int32_t blockDataFromBuf(SSDataBlock* pBlock, const char* buf) {
 }

 int32_t blockDataFromBuf1(SSDataBlock* pBlock, const char* buf, size_t capacity) {
-  pBlock->info.rows = *(int32_t*) buf;
-  pBlock->info.groupId = *(uint64_t*) (buf + sizeof(int32_t));
+  pBlock->info.rows = *(int32_t*)buf;
+  pBlock->info.groupId = *(uint64_t*)(buf + sizeof(int32_t));

   int32_t     numOfCols = pBlock->info.numOfCols;
   const char* pStart = buf + sizeof(uint32_t) + sizeof(uint64_t);
@@ -669,7 +673,7 @@ size_t blockDataGetSerialMetaSize(const SSDataBlock* pBlock) {
   return sizeof(int32_t) + sizeof(uint64_t) + pBlock->info.numOfCols * sizeof(int32_t);
 }

-double blockDataGetSerialRowSize(const SSDataBlock* pBlock) {
+double blockDataGetSerialRowSize(const SSDataBlock* pBlock) {
   ASSERT(pBlock != NULL);
   double rowSize = 0;
@@ -1232,7 +1236,7 @@ size_t blockDataGetCapacityInRow(const SSDataBlock* pBlock, size_t pageSize) {

   // the true value must be less than the value of nRows
   int32_t additional = 0;
-  for(int32_t i = 0; i < pBlock->info.numOfCols; ++i) {
+  for (int32_t i = 0; i < pBlock->info.numOfCols; ++i) {
     SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, i);
     if (IS_VAR_DATA_TYPE(pCol->info.type)) {
       additional += nRows * sizeof(int32_t);
@@ -1242,7 +1246,7 @@ size_t blockDataGetCapacityInRow(const SSDataBlock* pBlock, size_t pageSize) {
   }

   int32_t newRows = (payloadSize - additional) / rowSize;
-  ASSERT(newRows <= nRows && newRows > 1);
+  ASSERT(newRows <= nRows && newRows >= 1);

   return newRows;
 }
@@ -1626,7 +1630,7 @@ int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SArray* pDataBlocks
 }

 SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, bool createTb, int64_t suid,
-                            int32_t vgId) {
+                            const char* stbFullName, int32_t vgId) {
   SSubmitReq* ret = NULL;

   // cal size

@@ -1642,10 +1646,12 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo

     if (createTb) {
       SVCreateTbReq createTbReq = {0};
-      createTbReq.name = "a";
+      char*         cname = taosMemoryCalloc(1, TSDB_TABLE_FNAME_LEN);
+      snprintf(cname, TSDB_TABLE_FNAME_LEN, "%s:%ld", stbFullName, pDataBlock->info.groupId);
+      createTbReq.name = cname;
       createTbReq.flags = 0;
       createTbReq.type = TSDB_CHILD_TABLE;
-      createTbReq.ctb.suid = htobe64(suid);
+      createTbReq.ctb.suid = suid;

       SKVRowBuilder kvRowBuilder = {0};
       if (tdInitKVRowBuilder(&kvRowBuilder) < 0) {

@@ -1658,6 +1664,7 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
       int32_t code;
       tEncodeSize(tEncodeSVCreateTbReq, &createTbReq, schemaLen, code);
       if (code < 0) return NULL;
+      taosMemoryFree(cname);
     }

     cap += sizeof(SSubmitBlk) + schemaLen + rows * maxLen;

@@ -1693,7 +1700,9 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
     int32_t schemaLen = 0;
     if (createTb) {
       SVCreateTbReq createTbReq = {0};
-      createTbReq.name = "a";
+      char*         cname = taosMemoryCalloc(1, TSDB_TABLE_FNAME_LEN);
+      snprintf(cname, TSDB_TABLE_FNAME_LEN, "%s:%ld", stbFullName, pDataBlock->info.groupId);
+      createTbReq.name = cname;
       createTbReq.flags = 0;
       createTbReq.type = TSDB_CHILD_TABLE;
       createTbReq.ctb.suid = suid;

@@ -1728,8 +1737,12 @@ SSubmitReq* tdBlockToSubmit(const SArray* pBlocks, const STSchema* pTSchema, boo
       for (int32_t k = 0; k < pTSchema->numOfCols; k++) {
         const STColumn*  pColumn = &pTSchema->columns[k];
         SColumnInfoData* pColData = taosArrayGet(pDataBlock->pDataBlock, k);
-        void* data = colDataGetData(pColData, j);
-        tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, data, true, pColumn->offset, k);
+        if (colDataIsNull_s(pColData, j)) {
+          tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NONE, NULL, false, pColumn->offset, k);
+        } else {
+          void* data = colDataGetData(pColData, j);
+          tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, data, true, pColumn->offset, k);
+        }
       }
       int32_t rowLen = TD_ROW_LEN(rowData);
       rowData = POINTER_SHIFT(rowData, rowLen);
@@ -600,7 +600,8 @@ int32_t tSerializeSMAlterStbReq(void *buf, int32_t bufLen, SMAlterStbReq *pReq)
   if (tStartEncode(&encoder) < 0) return -1;
   if (tEncodeCStr(&encoder, pReq->name) < 0) return -1;
   if (tEncodeI8(&encoder, pReq->alterType) < 0) return -1;
-  if (tEncodeI32(&encoder, pReq->verInBlock) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->tagVer) < 0) return -1;
+  if (tEncodeI32(&encoder, pReq->colVer) < 0) return -1;
   if (tEncodeI32(&encoder, pReq->numOfFields) < 0) return -1;
   for (int32_t i = 0; i < pReq->numOfFields; ++i) {
     SField *pField = taosArrayGet(pReq->pFields, i);

@@ -627,7 +628,8 @@ int32_t tDeserializeSMAlterStbReq(void *buf, int32_t bufLen, SMAlterStbReq *pReq
   if (tStartDecode(&decoder) < 0) return -1;
   if (tDecodeCStrTo(&decoder, pReq->name) < 0) return -1;
   if (tDecodeI8(&decoder, &pReq->alterType) < 0) return -1;
-  if (tDecodeI32(&decoder, &pReq->verInBlock) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->tagVer) < 0) return -1;
+  if (tDecodeI32(&decoder, &pReq->colVer) < 0) return -1;
   if (tDecodeI32(&decoder, &pReq->numOfFields) < 0) return -1;
   pReq->pFields = taosArrayInit(pReq->numOfFields, sizeof(SField));
   if (pReq->pFields == NULL) {

@@ -1680,6 +1682,7 @@ int32_t tSerializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq) {
   if (tEncodeI8(&encoder, pReq->replications) < 0) return -1;
   if (tEncodeI8(&encoder, pReq->strict) < 0) return -1;
   if (tEncodeI8(&encoder, pReq->cacheLastRow) < 0) return -1;
+  if (tEncodeI8(&encoder, pReq->schemaless) < 0) return -1;
   if (tEncodeI8(&encoder, pReq->ignoreExist) < 0) return -1;
   if (tEncodeI32(&encoder, pReq->numOfRetensions) < 0) return -1;
   for (int32_t i = 0; i < pReq->numOfRetensions; ++i) {

@@ -1720,6 +1723,7 @@ int32_t tDeserializeSCreateDbReq(void *buf, int32_t bufLen, SCreateDbReq *pReq)
   if (tDecodeI8(&decoder, &pReq->replications) < 0) return -1;
   if (tDecodeI8(&decoder, &pReq->strict) < 0) return -1;
   if (tDecodeI8(&decoder, &pReq->cacheLastRow) < 0) return -1;
+  if (tDecodeI8(&decoder, &pReq->schemaless) < 0) return -1;
   if (tDecodeI8(&decoder, &pReq->ignoreExist) < 0) return -1;
   if (tDecodeI32(&decoder, &pReq->numOfRetensions) < 0) return -1;
   pReq->pRetensions = taosArrayInit(pReq->numOfRetensions, sizeof(SRetention));

@@ -2913,6 +2917,8 @@ int32_t tSerializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *pR
     if (tEncodeI8(&encoder, pRetension->freqUnit) < 0) return -1;
     if (tEncodeI8(&encoder, pRetension->keepUnit) < 0) return -1;
   }

+  if (tEncodeI8(&encoder, pReq->isTsma) < 0) return -1;
+
   tEndEncode(&encoder);

   int32_t tlen = encoder.pos;

@@ -2975,6 +2981,8 @@ int32_t tDeserializeSCreateVnodeReq(void *buf, int32_t bufLen, SCreateVnodeReq *
     }
   }

+  if (tDecodeI8(&decoder, &pReq->isTsma) < 0) return -1;
+
   tEndDecode(&decoder);
   tDecoderClear(&decoder);
   return 0;

@@ -3186,7 +3194,6 @@ int32_t tSerializeSDCreateMnodeReq(void *buf, int32_t bufLen, SDCreateMnodeReq *
   tEncoderInit(&encoder, buf, bufLen);
-
   if (tStartEncode(&encoder) < 0) return -1;
   if (tEncodeI32(&encoder, pReq->dnodeId) < 0) return -1;
   if (tEncodeI8(&encoder, pReq->replica) < 0) return -1;
   for (int32_t i = 0; i < TSDB_MAX_REPLICA; ++i) {
     SReplica *pReplica = &pReq->replicas[i];

@@ -3204,7 +3211,6 @@ int32_t tDeserializeSDCreateMnodeReq(void *buf, int32_t bufLen, SDCreateMnodeReq
   tDecoderInit(&decoder, buf, bufLen);
-
   if (tStartDecode(&decoder) < 0) return -1;
   if (tDecodeI32(&decoder, &pReq->dnodeId) < 0) return -1;
   if (tDecodeI8(&decoder, &pReq->replica) < 0) return -1;
   for (int32_t i = 0; i < TSDB_MAX_REPLICA; ++i) {
     SReplica *pReplica = &pReq->replicas[i];

@@ -3352,7 +3358,8 @@ int32_t tDeserializeSExplainRsp(void *buf, int32_t bufLen, SExplainRsp *pRsp) {
     if (tDecodeDouble(&decoder, &pRsp->subplanInfo[i].totalCost) < 0) return -1;
     if (tDecodeU64(&decoder, &pRsp->subplanInfo[i].numOfRows) < 0) return -1;
     if (tDecodeU32(&decoder, &pRsp->subplanInfo[i].verboseLen) < 0) return -1;
-    if (tDecodeBinary(&decoder, (uint8_t**) &pRsp->subplanInfo[i].verboseInfo, &pRsp->subplanInfo[i].verboseLen) < 0) return -1;
+    if (tDecodeBinary(&decoder, (uint8_t **)&pRsp->subplanInfo[i].verboseInfo, &pRsp->subplanInfo[i].verboseLen) < 0)
+      return -1;
   }

   tEndDecode(&decoder);

@@ -3701,6 +3708,7 @@ int32_t tSerializeSCMCreateStreamReq(void *buf, int32_t bufLen, const SCMCreateS

   if (tStartEncode(&encoder) < 0) return -1;
   if (tEncodeCStr(&encoder, pReq->name) < 0) return -1;
   if (tEncodeCStr(&encoder, pReq->sourceDB) < 0) return -1;
+  if (tEncodeCStr(&encoder, pReq->targetStbFullName) < 0) return -1;
   if (tEncodeI8(&encoder, pReq->igExists) < 0) return -1;
   if (tEncodeI32(&encoder, sqlLen) < 0) return -1;

@@ -3726,6 +3734,7 @@ int32_t tDeserializeSCMCreateStreamReq(void *buf, int32_t bufLen, SCMCreateStrea

   if (tStartDecode(&decoder) < 0) return -1;
   if (tDecodeCStrTo(&decoder, pReq->name) < 0) return -1;
   if (tDecodeCStrTo(&decoder, pReq->sourceDB) < 0) return -1;
+  if (tDecodeCStrTo(&decoder, pReq->targetStbFullName) < 0) return -1;
   if (tDecodeI8(&decoder, &pReq->igExists) < 0) return -1;
   if (tDecodeI32(&decoder, &sqlLen) < 0) return -1;

@@ -3798,7 +3807,7 @@ int tEncodeSVCreateStbReq(SEncoder *pCoder, const SVCreateStbReq *pReq) {
   if (tEncodeCStr(pCoder, pReq->name) < 0) return -1;
   if (tEncodeI64(pCoder, pReq->suid) < 0) return -1;
   if (tEncodeI8(pCoder, pReq->rollup) < 0) return -1;
-  if (tEncodeSSchemaWrapper(pCoder, &pReq->schema) < 0) return -1;
+  if (tEncodeSSchemaWrapper(pCoder, &pReq->schemaRow) < 0) return -1;
   if (tEncodeSSchemaWrapper(pCoder, &pReq->schemaTag) < 0) return -1;
   if (pReq->rollup) {
     if (tEncodeSRSmaParam(pCoder, &pReq->pRSmaParam) < 0) return -1;

@@ -3814,7 +3823,7 @@ int tDecodeSVCreateStbReq(SDecoder *pCoder, SVCreateStbReq *pReq) {
   if (tDecodeCStr(pCoder, &pReq->name) < 0) return -1;
   if (tDecodeI64(pCoder, &pReq->suid) < 0) return -1;
   if (tDecodeI8(pCoder, &pReq->rollup) < 0) return -1;
-  if (tDecodeSSchemaWrapper(pCoder, &pReq->schema) < 0) return -1;
+  if (tDecodeSSchemaWrapper(pCoder, &pReq->schemaRow) < 0) return -1;
   if (tDecodeSSchemaWrapper(pCoder, &pReq->schemaTag) < 0) return -1;
   if (pReq->rollup) {
     if (tDecodeSRSmaParam(pCoder, &pReq->pRSmaParam) < 0) return -1;

@@ -3863,7 +3872,7 @@ int tEncodeSVCreateTbReq(SEncoder *pCoder, const SVCreateTbReq *pReq) {
     if (tEncodeI64(pCoder, pReq->ctb.suid) < 0) return -1;
     if (tEncodeBinary(pCoder, pReq->ctb.pTag, kvRowLen(pReq->ctb.pTag)) < 0) return -1;
   } else if (pReq->type == TSDB_NORMAL_TABLE) {
-    if (tEncodeSSchemaWrapper(pCoder, &pReq->ntb.schema) < 0) return -1;
+    if (tEncodeSSchemaWrapper(pCoder, &pReq->ntb.schemaRow) < 0) return -1;
   } else {
     ASSERT(0);
   }

@@ -3889,7 +3898,7 @@ int tDecodeSVCreateTbReq(SDecoder *pCoder, SVCreateTbReq *pReq) {
     if (tDecodeI64(pCoder, &pReq->ctb.suid) < 0) return -1;
     if (tDecodeBinary(pCoder, &pReq->ctb.pTag, &len) < 0) return -1;
   } else if (pReq->type == TSDB_NORMAL_TABLE) {
-    if (tDecodeSSchemaWrapper(pCoder, &pReq->ntb.schema) < 0) return -1;
+    if (tDecodeSSchemaWrapper(pCoder, &pReq->ntb.schemaRow) < 0) return -1;
   } else {
     ASSERT(0);
   }
@@ -127,7 +127,7 @@ int32_t tNameExtractFullName(const SName* name, char* dst) {

   size_t tnameLen = strlen(name->tname);
   if (tnameLen > 0) {
-    assert(name->type == TSDB_TABLE_NAME_T);
+    /*assert(name->type == TSDB_TABLE_NAME_T);*/
     dst[len] = TS_PATH_DELIMITER[0];

     memcpy(dst + len + 1, name->tname, tnameLen);

@@ -314,9 +314,9 @@ void buildChildTableName(RandTableName* rName) {
   for (int j = 0; j < taosArrayGetSize(rName->tags); ++j) {
     SSmlKv* tagKv = taosArrayGetP(rName->tags, j);
     taosStringBuilderAppendStringLen(&sb, tagKv->key, tagKv->keyLen);
-    if(IS_VAR_DATA_TYPE(tagKv->type)){
+    if (IS_VAR_DATA_TYPE(tagKv->type)) {
       taosStringBuilderAppendStringLen(&sb, tagKv->value, tagKv->length);
-    }else{
+    } else {
       taosStringBuilderAppendStringLen(&sb, (char*)(&(tagKv->value)), tagKv->length);
     }
   }
@@ -184,6 +184,16 @@ int32_t parseTimezone(char* str, int64_t* tzOffset) {

   i++;

+  int32_t j = i;
+  while (str[j]) {
+    if ((str[j] >= '0' && str[j] <= '9') || str[j] == ':') {
+      ++j;
+      continue;
+    }
+
+    return -1;
+  }
+
   char* sep = strchr(&str[i], ':');
   if (sep != NULL) {
     int32_t len = (int32_t)(sep - &str[i]);
@@ -92,6 +92,13 @@ void dmSendStatusReq(SDnodeMgmt *pMgmt) {
   SEpSet epSet = {0};
   dmGetMnodeEpSet(pMgmt->pData, &epSet);
   rpcSendRecv(pMgmt->msgCb.clientRpc, &epSet, &rpcMsg, &rpcRsp);
+  if (rpcRsp.code != 0) {
+    dError("failed to send status msg since %s, numOfEps:%d inUse:%d", tstrerror(rpcRsp.code), epSet.numOfEps,
+           epSet.inUse);
+    for (int32_t i = 0; i < epSet.numOfEps; ++i) {
+      dDebug("index:%d, mnode ep:%s:%u", i, epSet.eps[i].fqdn, epSet.eps[i].port);
+    }
+  }
   dmProcessStatusRsp(pMgmt, &rpcRsp);
 }
@@ -79,7 +79,7 @@ int32_t mmProcessCreateReq(const SMgmtInputOpt *pInput, SRpcMsg *pMsg) {
     return -1;
   }

-  if (createReq.replica <= 1 || (createReq.dnodeId != pInput->pData->dnodeId && pInput->pData->dnodeId != 0)) {
+  if (createReq.replica != 1) {
     terrno = TSDB_CODE_INVALID_OPTION;
     dError("failed to create mnode since %s", terrstr());
     return -1;
@@ -96,7 +96,7 @@ SArray *smGetMsgHandles() {

   // Requests handled by SNODE
   if (dmSetMgmtHandle(pArray, TDMT_SND_TASK_DEPLOY, smPutNodeMsgToMgmtQueue, 0) == NULL) goto _OVER;
-  if (dmSetMgmtHandle(pArray, TDMT_SND_TASK_EXEC, smPutNodeMsgToExecQueue, 0) == NULL) goto _OVER;
+  /*if (dmSetMgmtHandle(pArray, TDMT_SND_TASK_EXEC, smPutNodeMsgToExecQueue, 0) == NULL) goto _OVER;*/

   code = 0;
 _OVER:
@@ -183,7 +183,7 @@ int32_t vmProcessCreateVnodeReq(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {
     return -1;
   }

-  dDebug("vgId:%d, create vnode req is received", createReq.vgId);
+  dDebug("vgId:%d, create vnode req is received, tsma:%d", createReq.vgId, createReq.isTsma);

   SVnodeCfg vnodeCfg = {0};
   vmGenerateVnodeCfg(&createReq, &vnodeCfg);

@@ -310,9 +310,6 @@ SArray *vmGetMsgHandles() {
   if (dmSetMgmtHandle(pArray, TDMT_VND_CONSUME, vmPutNodeMsgToFetchQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_TASK_DEPLOY, vmPutNodeMsgToWriteQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_QUERY_HEARTBEAT, vmPutNodeMsgToFetchQueue, 0) == NULL) goto _OVER;
-  if (dmSetMgmtHandle(pArray, TDMT_VND_TASK_PIPE_EXEC, vmPutNodeMsgToFetchQueue, 0) == NULL) goto _OVER;
-  if (dmSetMgmtHandle(pArray, TDMT_VND_TASK_MERGE_EXEC, vmPutNodeMsgToMergeQueue, 0) == NULL) goto _OVER;
-  if (dmSetMgmtHandle(pArray, TDMT_VND_TASK_WRITE_EXEC, vmPutNodeMsgToWriteQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_STREAM_TRIGGER, vmPutNodeMsgToFetchQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_TASK_RUN, vmPutNodeMsgToFetchQueue, 0) == NULL) goto _OVER;
   if (dmSetMgmtHandle(pArray, TDMT_VND_TASK_DISPATCH, vmPutNodeMsgToFetchQueue, 0) == NULL) goto _OVER;
@@ -3,7 +3,7 @@ if(${BUILD_TEST})
   add_subdirectory(qnode)
   add_subdirectory(bnode)
   add_subdirectory(snode)
-  add_subdirectory(mnode)
+  #add_subdirectory(mnode)
   add_subdirectory(vnode)
   add_subdirectory(sut)
 endif(${BUILD_TEST})
@@ -330,6 +330,7 @@ typedef struct {
   int64_t   compStorage;
   int64_t   pointsWritten;
   int8_t    compact;
+  int8_t    isTsma;
   int8_t    replica;
   SVnodeGid vnodeGid[TSDB_MAX_REPLICA];
 } SVgObj;

@@ -366,7 +367,6 @@ typedef struct {
   int64_t updateTime;
   int64_t uid;
   int64_t dbUid;
-  int32_t version;
   int32_t tagVer;
   int32_t colVer;
   int32_t nextColId;

@@ -589,7 +589,8 @@ typedef struct {
   int8_t  status;
   int8_t  createdBy;      // STREAM_CREATED_BY__USER or SMA
   int32_t fixedSinkVgId;  // 0 for shuffle
-  int64_t smaId;          // 0 for unused
+  SVgObj  fixedSinkVg;
+  int64_t smaId;          // 0 for unused
   int8_t  trigger;
   int32_t triggerParam;
   int64_t waterMark;
@@ -29,7 +29,8 @@ int32_t mndSchedInitSubEp(SMnode* pMnode, const SMqTopicObj* pTopic, SMqSubscrib

 int32_t mndScheduleStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStream);

-int32_t mndConvertRSmaTask(const char* ast, int8_t triggerType, int64_t watermark, char** pStr, int32_t* pLen);
+int32_t mndConvertRSmaTask(const char* ast, int64_t uid, int8_t triggerType, int64_t watermark, char** pStr,
+                           int32_t* pLen);

 #ifdef __cplusplus
 }
@@ -30,6 +30,7 @@ SSdbRaw *mndVgroupActionEncode(SVgObj *pVgroup);
 SEpSet   mndGetVgroupEpset(SMnode *pMnode, const SVgObj *pVgroup);
 int32_t  mndGetVnodesNum(SMnode *pMnode, int32_t dnodeId);

+int32_t  mndAllocSmaVgroup(SMnode *pMnode, SDbObj *pDb, SVgObj *pVgroup);
 int32_t  mndAllocVgroup(SMnode *pMnode, SDbObj *pDb, SVgObj **ppVgroups);
 SArray  *mndBuildDnodesArray(SMnode *pMnode);
 int32_t  mndAddVnodeToVgroup(SMnode *pMnode, SVgObj *pVgroup, SArray *pArray);
@@ -144,6 +144,7 @@ _OVER:

 static int32_t mndClusterActionInsert(SSdb *pSdb, SClusterObj *pCluster) {
   mTrace("cluster:%" PRId64 ", perform insert action, row:%p", pCluster->id, pCluster);
+  pSdb->pMnode->clusterId = pCluster->id;
   return 0;
 }
@@ -1155,7 +1155,7 @@ static void mndBuildDBVgroupInfo(SDbObj *pDb, SMnode *pMnode, SArray *pVgList) {
     pIter = sdbFetch(pSdb, SDB_VGROUP, pIter, (void **)&pVgroup);
     if (pIter == NULL) break;

-    if (NULL == pDb || pVgroup->dbUid == pDb->uid) {
+    if ((NULL == pDb || pVgroup->dbUid == pDb->uid) && !pVgroup->isTsma) {
      SVgroupInfo vgInfo = {0};
      vgInfo.vgId = pVgroup->vgId;
      vgInfo.hashBegin = pVgroup->hashBegin;
@@ -441,7 +441,7 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
     pDnode->numOfSupportVnodes = statusReq.numOfSupportVnodes;

     SStatusRsp statusRsp = {0};
-    statusRsp.dnodeVer = sdbGetTableVer(pMnode->pSdb, SDB_DNODE);
+    statusRsp.dnodeVer = sdbGetTableVer(pMnode->pSdb, SDB_DNODE) + sdbGetTableVer(pMnode->pSdb, SDB_MNODE);
     statusRsp.dnodeCfg.dnodeId = pDnode->id;
     statusRsp.dnodeCfg.clusterId = pMnode->clusterId;
     statusRsp.pDnodeEps = taosArrayInit(mndGetDnodeSize(pMnode), sizeof(SDnodeEp));
@@ -233,7 +233,7 @@ void mndGetMnodeEpSet(SMnode *pMnode, SEpSet *pEpSet) {
     if (pObj->pDnode == NULL) {
       mError("mnode:%d, no corresponding dnode exists", pObj->id);
     } else {
-      if (pObj->state == TAOS_SYNC_STATE_LEADER) {
+      if (pObj->id == pMnode->selfDnodeId || pObj->state == TAOS_SYNC_STATE_LEADER) {
         pEpSet->inUse = pEpSet->numOfEps;
       }
       addEpIntoEpSet(pEpSet, pObj->pDnode->fqdn, pObj->pDnode->port);
@@ -267,75 +267,83 @@ static int32_t mndSetCreateMnodeCommitLogs(SMnode *pMnode, STrans *pTrans, SMnod
 }

 static int32_t mndSetCreateMnodeRedoActions(SMnode *pMnode, STrans *pTrans, SDnodeObj *pDnode, SMnodeObj *pObj) {
-  SSdb   *pSdb = pMnode->pSdb;
-  void   *pIter = NULL;
-  int32_t numOfReplicas = 0;
+  SSdb            *pSdb = pMnode->pSdb;
+  void            *pIter = NULL;
+  int32_t          numOfReplicas = 0;
+  SDAlterMnodeReq  alterReq = {0};
+  SDCreateMnodeReq createReq = {0};
+  SEpSet           alterEpset = {0};
+  SEpSet           createEpset = {0};

   while (1) {
     SMnodeObj *pMObj = NULL;
     pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pMObj);
     if (pIter == NULL) break;

-    SReplica *pReplica = &createReq.replicas[numOfReplicas];
-    pReplica->id = pMObj->id;
-    pReplica->port = pMObj->pDnode->port;
-    memcpy(pReplica->fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
-    numOfReplicas++;
+    alterReq.replicas[numOfReplicas].id = pMObj->id;
+    alterReq.replicas[numOfReplicas].port = pMObj->pDnode->port;
+    memcpy(alterReq.replicas[numOfReplicas].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
+
+    alterEpset.eps[numOfReplicas].port = pMObj->pDnode->port;
+    memcpy(alterEpset.eps[numOfReplicas].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
+    if (pMObj->state == TAOS_SYNC_STATE_LEADER) {
+      alterEpset.inUse = numOfReplicas;
+    }
+
+    numOfReplicas++;
     sdbRelease(pSdb, pMObj);
   }

-  SReplica *pReplica = &createReq.replicas[numOfReplicas];
-  pReplica->id = pDnode->id;
-  pReplica->port = pDnode->port;
-  memcpy(pReplica->fqdn, pDnode->fqdn, TSDB_FQDN_LEN);
-  numOfReplicas++;
+  alterReq.replica = numOfReplicas + 1;
+  alterReq.replicas[numOfReplicas].id = pDnode->id;
+  alterReq.replicas[numOfReplicas].port = pDnode->port;
+  memcpy(alterReq.replicas[numOfReplicas].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);

-  createReq.replica = numOfReplicas;
+  alterEpset.numOfEps = numOfReplicas + 1;
+  alterEpset.eps[numOfReplicas].port = pDnode->port;
+  memcpy(alterEpset.eps[numOfReplicas].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);

-  while (1) {
-    SMnodeObj *pMObj = NULL;
-    pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pMObj);
-    if (pIter == NULL) break;
+  createReq.replica = 1;
+  createReq.replicas[0].id = pDnode->id;
+  createReq.replicas[0].port = pDnode->port;
+  memcpy(createReq.replicas[0].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);

-    STransAction action = {0};
+  createEpset.numOfEps = 1;
+  createEpset.eps[0].port = pDnode->port;
+  memcpy(createEpset.eps[0].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);

-    createReq.dnodeId = pMObj->id;
-    int32_t contLen = tSerializeSDCreateMnodeReq(NULL, 0, &createReq);
+  {
+    int32_t contLen = tSerializeSDCreateMnodeReq(NULL, 0, &alterReq);
     void   *pReq = taosMemoryMalloc(contLen);
-    tSerializeSDCreateMnodeReq(pReq, contLen, &createReq);
+    tSerializeSDCreateMnodeReq(pReq, contLen, &alterReq);

-    action.epSet = mndGetDnodeEpset(pMObj->pDnode);
-    action.pCont = pReq;
-    action.contLen = contLen;
-    action.msgType = TDMT_DND_ALTER_MNODE;
-    action.acceptableCode = TSDB_CODE_NODE_ALREADY_DEPLOYED;
+    STransAction action = {
+        .epSet = alterEpset,
+        .pCont = pReq,
+        .contLen = contLen,
+        .msgType = TDMT_DND_ALTER_MNODE,
+        .acceptableCode = 0,
+    };

     if (mndTransAppendRedoAction(pTrans, &action) != 0) {
       taosMemoryFree(pReq);
-      sdbCancelFetch(pSdb, pIter);
-      sdbRelease(pSdb, pMObj);
       return -1;
     }
-
-    sdbRelease(pSdb, pMObj);
   }

   {
-    STransAction action = {0};
-    action.epSet = mndGetDnodeEpset(pDnode);
-
     createReq.dnodeId = pObj->id;
     int32_t contLen = tSerializeSDCreateMnodeReq(NULL, 0, &createReq);
     void   *pReq = taosMemoryMalloc(contLen);
     tSerializeSDCreateMnodeReq(pReq, contLen, &createReq);

-    action.epSet = mndGetDnodeEpset(pDnode);
-    action.pCont = pReq;
-    action.contLen = contLen;
-    action.msgType = TDMT_DND_CREATE_MNODE;
-    action.acceptableCode = TSDB_CODE_NODE_ALREADY_DEPLOYED;
+    STransAction action = {
+        .epSet = createEpset,
+        .pCont = pReq,
+        .contLen = contLen,
+        .msgType = TDMT_DND_CREATE_MNODE,
+        .acceptableCode = TSDB_CODE_NODE_ALREADY_DEPLOYED,
+    };

     if (mndTransAppendRedoAction(pTrans, &action) != 0) {
       taosMemoryFree(pReq);
       return -1;
@@ -441,73 +449,77 @@ static int32_t mndSetDropMnodeCommitLogs(SMnode *pMnode, STrans *pTrans, SMnodeO
 }

 static int32_t mndSetDropMnodeRedoActions(SMnode *pMnode, STrans *pTrans, SDnodeObj *pDnode, SMnodeObj *pObj) {
-  SSdb   *pSdb = pMnode->pSdb;
-  void   *pIter = NULL;
-  int32_t numOfReplicas = 0;
+  SSdb           *pSdb = pMnode->pSdb;
+  void           *pIter = NULL;
+  int32_t         numOfReplicas = 0;
+  SDAlterMnodeReq alterReq = {0};
+  SDDropMnodeReq  dropReq = {0};
+  SEpSet          alterEpset = {0};
+  SEpSet          dropEpSet = {0};

   while (1) {
     SMnodeObj *pMObj = NULL;
     pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pMObj);
     if (pIter == NULL) break;

-    if (pMObj->id != pObj->id) {
-      SReplica *pReplica = &alterReq.replicas[numOfReplicas];
-      pReplica->id = pMObj->id;
-      pReplica->port = pMObj->pDnode->port;
-      memcpy(pReplica->fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
-      numOfReplicas++;
+    if (pMObj->id == pObj->id) {
+      sdbRelease(pSdb, pMObj);
+      continue;
     }

+    alterReq.replicas[numOfReplicas].id = pMObj->id;
+    alterReq.replicas[numOfReplicas].port = pMObj->pDnode->port;
+    memcpy(alterReq.replicas[numOfReplicas].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
+
+    alterEpset.eps[numOfReplicas].port = pMObj->pDnode->port;
+    memcpy(alterEpset.eps[numOfReplicas].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
+    if (pMObj->state == TAOS_SYNC_STATE_LEADER) {
+      alterEpset.inUse = numOfReplicas;
+    }
+
+    numOfReplicas++;
     sdbRelease(pSdb, pMObj);
   }

   alterReq.replica = numOfReplicas;
+  alterEpset.numOfEps = numOfReplicas;

-  while (1) {
-    SMnodeObj *pMObj = NULL;
-    pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pMObj);
-    if (pIter == NULL) break;
-    if (pMObj->id != pObj->id) {
-      STransAction action = {0};
+  dropReq.dnodeId = pDnode->id;
+  dropEpSet.numOfEps = 1;
+  dropEpSet.eps[0].port = pDnode->port;
+  memcpy(dropEpSet.eps[0].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);

-      alterReq.dnodeId = pMObj->id;
-      int32_t contLen = tSerializeSDCreateMnodeReq(NULL, 0, &alterReq);
-      void   *pReq = taosMemoryMalloc(contLen);
-      tSerializeSDCreateMnodeReq(pReq, contLen, &alterReq);
+  {
+    int32_t contLen = tSerializeSDCreateMnodeReq(NULL, 0, &alterReq);
+    void   *pReq = taosMemoryMalloc(contLen);
+    tSerializeSDCreateMnodeReq(pReq, contLen, &alterReq);

-      action.epSet = mndGetDnodeEpset(pMObj->pDnode);
-      action.pCont = pReq;
-      action.contLen = contLen;
-      action.msgType = TDMT_DND_ALTER_MNODE;
-      action.acceptableCode = TSDB_CODE_NODE_ALREADY_DEPLOYED;
+    STransAction action = {
+        .epSet = alterEpset,
+        .pCont = pReq,
+        .contLen = contLen,
+        .msgType = TDMT_DND_ALTER_MNODE,
+        .acceptableCode = 0,
+    };

-      if (mndTransAppendRedoAction(pTrans, &action) != 0) {
-        taosMemoryFree(pReq);
-        sdbCancelFetch(pSdb, pIter);
-        sdbRelease(pSdb, pMObj);
-        return -1;
-      }
+    if (mndTransAppendRedoAction(pTrans, &action) != 0) {
+      taosMemoryFree(pReq);
+      return -1;
+    }
-
-    sdbRelease(pSdb, pMObj);
   }

   {
-    STransAction action = {0};
-    action.epSet = mndGetDnodeEpset(pDnode);
-
-    SDDropMnodeReq dropReq = {0};
-    dropReq.dnodeId = pObj->id;
     int32_t contLen = tSerializeSCreateDropMQSBNodeReq(NULL, 0, &dropReq);
     void   *pReq = taosMemoryMalloc(contLen);
     tSerializeSCreateDropMQSBNodeReq(pReq, contLen, &dropReq);

-    action.epSet = mndGetDnodeEpset(pDnode);
-    action.pCont = pReq;
-    action.contLen = contLen;
-    action.msgType = TDMT_DND_DROP_MNODE;
-    action.acceptableCode = TSDB_CODE_NODE_NOT_DEPLOYED;
+    STransAction action = {
+        .epSet = dropEpSet,
+        .pCont = pReq,
+        .contLen = contLen,
+        .msgType = TDMT_DND_DROP_MNODE,
+        .acceptableCode = TSDB_CODE_NODE_NOT_DEPLOYED,
+    };

     if (mndTransAppendRedoAction(pTrans, &action) != 0) {
       taosMemoryFree(pReq);
       return -1;
@@ -662,7 +674,7 @@ static void mndCancelGetNextMnode(SMnode *pMnode, void *pIter) {
 }

 static int32_t mndProcessAlterMnodeReq(SRpcMsg *pReq) {
-  SMnode *pMnode = pReq->info.node;
+  SMnode         *pMnode = pReq->info.node;
   SDAlterMnodeReq alterReq = {0};

   if (tDeserializeSDCreateMnodeReq(pReq->pCont, pReq->contLen, &alterReq) != 0) {
@@ -670,12 +682,6 @@
     return -1;
   }

-  if (alterReq.dnodeId != pMnode->selfDnodeId) {
-    terrno = TSDB_CODE_INVALID_OPTION;
-    mError("failed to alter mnode since %s, input:%d cur:%d", terrstr(), alterReq.dnodeId, pMnode->selfDnodeId);
-    return -1;
-  }
-
   SSyncCfg cfg = {.replicaNum = alterReq.replica, .myIndex = -1};
   for (int32_t i = 0; i < alterReq.replica; ++i) {
     SNodeInfo *pNode = &cfg.nodeInfo[i];
@@ -28,13 +28,15 @@
 #include "mndTrans.h"
 #include "mndUser.h"
 #include "mndVgroup.h"
 #include "parser.h"
 #include "tcompare.h"
 #include "tname.h"
 #include "tuuid.h"

 extern bool tsStreamSchedV;

-int32_t mndConvertRSmaTask(const char* ast, int8_t triggerType, int64_t watermark, char** pStr, int32_t* pLen) {
+int32_t mndConvertRSmaTask(const char* ast, int64_t uid, int8_t triggerType, int64_t watermark, char** pStr,
+                           int32_t* pLen) {
   SNode*      pAst = NULL;
   SQueryPlan* pPlan = NULL;
   terrno = TSDB_CODE_SUCCESS;

@@ -44,6 +46,11 @@ int32_t mndConvertRSmaTask(const char* ast, int8_t triggerType, int64_t watermar
     goto END;
   }

+  if (qSetSTableIdForRSma(pAst, uid) < 0) {
+    terrno = TSDB_CODE_QRY_INVALID_INPUT;
+    goto END;
+  }
+
   SPlanContext cxt = {
       .pAstRoot = pAst,
       .topicQuery = false,

@@ -206,6 +213,7 @@ int32_t mndAddShuffledSinkToStream(SMnode* pMnode, STrans* pTrans, SStreamObj* p
     } else {
       pTask->sinkType = TASK_SINK__TABLE;
+      pTask->tbSink.stbUid = pStream->targetStbUid;
       memcpy(pTask->tbSink.stbFullName, pStream->targetSTbName, TSDB_TABLE_FNAME_LEN);
       pTask->tbSink.pSchemaWrapper = tCloneSSchemaWrapper(&pStream->outputSchema);
       ASSERT(pTask->tbSink.pSchemaWrapper);
     }

@@ -229,11 +237,14 @@ int32_t mndAddFixedSinkToStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStr
     taosArrayPush(tasks, &pTask);

     pTask->nodeId = pStream->fixedSinkVgId;
+#if 0
     SVgObj* pVgroup = mndAcquireVgroup(pMnode, pStream->fixedSinkVgId);
     if (pVgroup == NULL) {
       return -1;
     }
     pTask->epSet = mndGetVgroupEpset(pMnode, pVgroup);
+#endif
+    pTask->epSet = mndGetVgroupEpset(pMnode, &pStream->fixedSinkVg);
     // source
     pTask->sourceType = TASK_SOURCE__MERGE;
     pTask->inputType = TASK_INPUT_TYPE__DATA_BLOCK;

@@ -248,13 +259,15 @@ int32_t mndAddFixedSinkToStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStr
     } else {
       pTask->sinkType = TASK_SINK__TABLE;
+      pTask->tbSink.stbUid = pStream->targetStbUid;
       memcpy(pTask->tbSink.stbFullName, pStream->targetSTbName, TSDB_TABLE_FNAME_LEN);
       pTask->tbSink.pSchemaWrapper = tCloneSSchemaWrapper(&pStream->outputSchema);
     }

     // dispatch
     pTask->dispatchType = TASK_DISPATCH__NONE;

-    mndPersistTaskDeployReq(pTrans, pTask, &pTask->epSet, TDMT_VND_TASK_DEPLOY, pVgroup->vgId);
+    /*mndPersistTaskDeployReq(pTrans, pTask, &pTask->epSet, TDMT_VND_TASK_DEPLOY, pVgroup->vgId);*/
+    mndPersistTaskDeployReq(pTrans, pTask, &pTask->epSet, TDMT_VND_TASK_DEPLOY, pStream->fixedSinkVg.vgId);

     return 0;
   }

@@ -325,6 +338,7 @@ int32_t mndScheduleStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStream) {
     } else {
       pTask->sinkType = TASK_SINK__TABLE;
+      pTask->tbSink.stbUid = pStream->targetStbUid;
       memcpy(pTask->tbSink.stbFullName, pStream->targetSTbName, TSDB_TABLE_FNAME_LEN);
       pTask->tbSink.pSchemaWrapper = tCloneSSchemaWrapper(&pStream->outputSchema);
     }
 #endif

@@ -345,7 +359,8 @@ int32_t mndScheduleStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStream) {
       // one merge only
       ASSERT(taosArrayGetSize(pArray) == 1);
       SStreamTask* lastLevelTask = taosArrayGetP(pArray, 0);
-      pTask->dispatchMsgType = TDMT_VND_TASK_MERGE_EXEC;
+      /*pTask->dispatchMsgType = TDMT_VND_TASK_MERGE_EXEC;*/
+      pTask->dispatchMsgType = TDMT_VND_TASK_DISPATCH;
       pTask->dispatchType = TASK_DISPATCH__FIXED;

       pTask->fixedEpDispatcher.taskId = lastLevelTask->taskId;

@@ -390,7 +405,8 @@ int32_t mndScheduleStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStream) {
     if (pStream->fixedSinkVgId == 0) {
       pTask->dispatchType = TASK_DISPATCH__SHUFFLE;

-      pTask->dispatchMsgType = TDMT_VND_TASK_WRITE_EXEC;
+      /*pTask->dispatchMsgType = TDMT_VND_TASK_WRITE_EXEC;*/
+      pTask->dispatchMsgType = TDMT_VND_TASK_DISPATCH;
       SDbObj* pDb = mndAcquireDb(pMnode, pStream->sourceDb);
       ASSERT(pDb);
       if (mndExtractDbInfo(pMnode, pDb, &pTask->shuffleDispatcher.dbInfo, NULL) < 0) {

@@ -420,7 +436,8 @@ int32_t mndScheduleStream(SMnode* pMnode, STrans* pTrans, SStreamObj* pStream) {
       }
     } else {
       pTask->dispatchType = TASK_DISPATCH__FIXED;
-      pTask->dispatchMsgType = TDMT_VND_TASK_WRITE_EXEC;
+      /*pTask->dispatchMsgType = TDMT_VND_TASK_WRITE_EXEC;*/
+      pTask->dispatchMsgType = TDMT_VND_TASK_DISPATCH;
       SArray* pArray = taosArrayGetP(pStream->tasks, 0);
       // one sink only
       ASSERT(taosArrayGetSize(pArray) == 1);
@@ -257,6 +257,7 @@ static int32_t mndProcessRetrieveSysTableReq(SRpcMsg *pReq) {
     terrno = rowsRead;
     mDebug("show:0x%" PRIx64 ", retrieve completed", pShow->id);
     mndReleaseShowObj(pShow, true);
+    blockDataDestroy(pBlock);
     return -1;
   }
@@ -295,9 +295,9 @@ static void *mndBuildVCreateSmaReq(SMnode *pMnode, SVgObj *pVgroup, SSmaObj *pSm
 }

 static void *mndBuildVDropSmaReq(SMnode *pMnode, SVgObj *pVgroup, SSmaObj *pSma, int32_t *pContLen) {
-  SEncoder encoder = {0};
-  int32_t contLen;
-  SName name = {0};
+  SEncoder encoder = {0};
+  int32_t  contLen;
+  SName    name = {0};
   tNameFromString(&name, pSma->name, T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE);

   SVDropTSmaReq req = {0};

@@ -354,6 +354,22 @@ static int32_t mndSetCreateSmaCommitLogs(SMnode *pMnode, STrans *pTrans, SSmaObj
   return 0;
 }

+static int32_t mndSetCreateSmaVgroupRedoLogs(SMnode *pMnode, STrans *pTrans, SVgObj *pVgroup) {
+  SSdbRaw *pVgRaw = mndVgroupActionEncode(pVgroup);
+  if (pVgRaw == NULL) return -1;
+  if (mndTransAppendRedolog(pTrans, pVgRaw) != 0) return -1;
+  if (sdbSetRawStatus(pVgRaw, SDB_STATUS_CREATING) != 0) return -1;
+  return 0;
+}
+
+static int32_t mndSetCreateSmaVgroupCommitLogs(SMnode *pMnode, STrans *pTrans, SVgObj *pVgroup) {
+  SSdbRaw *pVgRaw = mndVgroupActionEncode(pVgroup);
+  if (pVgRaw == NULL) return -1;
+  if (mndTransAppendCommitlog(pTrans, pVgRaw) != 0) return -1;
+  if (sdbSetRawStatus(pVgRaw, SDB_STATUS_READY) != 0) return -1;
+  return 0;
+}
+
 static int32_t mndSetCreateSmaRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SSmaObj *pSma) {
   SSdb   *pSdb = pMnode->pSdb;
   SVgObj *pVgroup = NULL;

@@ -393,6 +409,34 @@ static int32_t mndSetCreateSmaRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj
   return 0;
 }

+static int32_t mndSetCreateSmaVgroupRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup) {
+  SVnodeGid *pVgid = pVgroup->vnodeGid + 0;
+  SDnodeObj *pDnode = mndAcquireDnode(pMnode, pVgid->dnodeId);
+  if (pDnode == NULL) return -1;
+
+  STransAction action = {0};
+  action.epSet = mndGetDnodeEpset(pDnode);
+  mndReleaseDnode(pMnode, pDnode);
+
+  // todo add sma info here
+
+  int32_t contLen = 0;
+  void   *pReq = mndBuildCreateVnodeReq(pMnode, pDnode, pDb, pVgroup, &contLen);
+  if (pReq == NULL) return -1;
+
+  action.pCont = pReq;
+  action.contLen = contLen;
+  action.msgType = TDMT_DND_CREATE_VNODE;
+  action.acceptableCode = TSDB_CODE_NODE_ALREADY_DEPLOYED;
+
+  if (mndTransAppendRedoAction(pTrans, &action) != 0) {
+    taosMemoryFree(pReq);
+    return -1;
+  }
+
+  return 0;
+}
+
 static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCreate, SDbObj *pDb, SStbObj *pStb) {
   SSmaObj smaObj = {0};
   memcpy(smaObj.name, pCreate->name, TSDB_TABLE_FNAME_LEN);

@@ -448,9 +492,14 @@ static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCrea
   streamObj.version = 1;
   streamObj.sql = pCreate->sql;
   streamObj.createdBy = STREAM_CREATED_BY__SMA;
-  streamObj.fixedSinkVgId = smaObj.dstVgId;
   streamObj.smaId = smaObj.uid;
   /*streamObj.physicalPlan = "";*/

+  if (mndAllocSmaVgroup(pMnode, pDb, &streamObj.fixedSinkVg) != 0) {
+    mError("sma:%s, failed to create since %s", smaObj.name, terrstr());
+    return -1;
+  }
+  smaObj.dstVgId = streamObj.fixedSinkVg.vgId;
+  streamObj.fixedSinkVgId = smaObj.dstVgId;
+
   int32_t code = -1;
   STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_TYPE_CREATE_SMA, pReq);

@@ -460,8 +509,11 @@
   mndTransSetDbInfo(pTrans, pDb);

   if (mndSetCreateSmaRedoLogs(pMnode, pTrans, &smaObj) != 0) goto _OVER;
+  if (mndSetCreateSmaVgroupRedoLogs(pMnode, pTrans, &streamObj.fixedSinkVg) != 0) goto _OVER;
   if (mndSetCreateSmaCommitLogs(pMnode, pTrans, &smaObj) != 0) goto _OVER;
+  if (mndSetCreateSmaVgroupCommitLogs(pMnode, pTrans, &streamObj.fixedSinkVg) != 0) goto _OVER;
   if (mndSetCreateSmaRedoActions(pMnode, pTrans, pDb, &smaObj) != 0) goto _OVER;
+  if (mndSetCreateSmaVgroupRedoActions(pMnode, pTrans, pDb, &streamObj.fixedSinkVg) != 0) goto _OVER;
   if (mndAddStreamToTrans(pMnode, &streamObj, pCreate->ast, STREAM_TRIGGER_AT_ONCE, 0, pTrans) != 0) goto _OVER;
   if (mndTransPrepare(pMnode, pTrans) != 0) goto _OVER;

@@ -480,7 +532,6 @@ static int32_t mndCheckCreateSmaReq(SMCreateSmaReq *pCreate) {
   if (pCreate->intervalUnit < 0) return -1;
   if (pCreate->slidingUnit < 0) return -1;
   if (pCreate->timezone < 0) return -1;
-  if (pCreate->dstVgId < 0) return -1;
   if (pCreate->interval < 0) return -1;
   if (pCreate->offset < 0) return -1;
   if (pCreate->sliding < 0) return -1;

@@ -602,6 +653,24 @@ static int32_t mndSetDropSmaCommitLogs(SMnode *pMnode, STrans *pTrans, SSmaObj *
   return 0;
 }

+static int32_t mndSetDropSmaVgroupRedoLogs(SMnode *pMnode, STrans *pTrans, SVgObj *pVgroup) {
+  SSdbRaw *pVgRaw = mndVgroupActionEncode(pVgroup);
+  if (pVgRaw == NULL) return -1;
+  if (mndTransAppendRedolog(pTrans, pVgRaw) != 0) return -1;
+  if (sdbSetRawStatus(pVgRaw, SDB_STATUS_DROPPING) != 0) return -1;
+
+  return 0;
+}
+
+static int32_t mndSetDropSmaVgroupCommitLogs(SMnode *pMnode, STrans *pTrans, SVgObj *pVgroup) {
+  SSdbRaw *pVgRaw = mndVgroupActionEncode(pVgroup);
+  if (pVgRaw == NULL) return -1;
+  if (mndTransAppendCommitlog(pTrans, pVgRaw) != 0) return -1;
+  if (sdbSetRawStatus(pVgRaw, SDB_STATUS_DROPPED) != 0) return -1;
+
+  return 0;
+}
+
 static int32_t mndSetDropSmaRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SSmaObj *pSma) {
   SSdb   *pSdb = pMnode->pSdb;
   SVgObj *pVgroup = NULL;

@@ -643,23 +712,59 @@ static int32_t mndSetDropSmaRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj *
   return 0;
 }

+static int32_t mndSetDropSmaVgroupRedoActions(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup) {
+  SVnodeGid *pVgid = pVgroup->vnodeGid + 0;
+  SDnodeObj *pDnode = mndAcquireDnode(pMnode, pVgid->dnodeId);
+  if (pDnode == NULL) return -1;
+
+  STransAction action = {0};
+  action.epSet = mndGetDnodeEpset(pDnode);
+  mndReleaseDnode(pMnode, pDnode);
+
+  int32_t contLen = 0;
+  void   *pReq = mndBuildDropVnodeReq(pMnode, pDnode, pDb, pVgroup, &contLen);
+  if (pReq == NULL) return -1;
+
+  action.pCont = pReq;
+  action.contLen = contLen;
+  action.msgType = TDMT_DND_DROP_VNODE;
+  action.acceptableCode = TSDB_CODE_NODE_NOT_DEPLOYED;
+
+  if (mndTransAppendRedoAction(pTrans, &action) != 0) {
+    taosMemoryFree(pReq);
+    return -1;
+  }
+
+  return 0;
+}
+
 static int32_t mndDropSma(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SSmaObj *pSma) {
   int32_t code = -1;
-  STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_SMA, pReq);
+  SVgObj *pVgroup = NULL;
+  STrans *pTrans = NULL;
+
+  pVgroup = mndAcquireVgroup(pMnode, pSma->dstVgId);
+  if (pVgroup == NULL) goto _OVER;
+
+  pTrans = mndTransCreate(pMnode, TRN_POLICY_ROLLBACK, TRN_TYPE_DROP_SMA, pReq);
   if (pTrans == NULL) goto _OVER;

   mDebug("trans:%d, used to drop sma:%s", pTrans->id, pSma->name);
   mndTransSetDbInfo(pTrans, pDb);

   if (mndSetDropSmaRedoLogs(pMnode, pTrans, pSma) != 0) goto _OVER;
+  if (mndSetDropSmaVgroupRedoLogs(pMnode, pTrans, pVgroup) != 0) goto _OVER;
   if (mndSetDropSmaCommitLogs(pMnode, pTrans, pSma) != 0) goto _OVER;
+  if (mndSetDropSmaVgroupCommitLogs(pMnode, pTrans, pVgroup) != 0) goto _OVER;
   if (mndSetDropSmaRedoActions(pMnode, pTrans, pDb, pSma) != 0) goto _OVER;
+  if (mndSetDropSmaVgroupRedoActions(pMnode, pTrans, pDb, pVgroup) != 0) goto _OVER;
   if (mndTransPrepare(pMnode, pTrans) != 0) goto _OVER;

   code = 0;

 _OVER:
   mndTransDrop(pTrans);
+  mndReleaseVgroup(pMnode, pVgroup);
   return code;
 }

@@ -846,6 +951,9 @@ static int32_t mndRetrieveSma(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBloc
     pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
     colDataAppend(pColInfo, numOfRows, (const char *)n1, false);

+    pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
+    colDataAppend(pColInfo, numOfRows, (const char *)&pSma->dstVgId, false);
+
     numOfRows++;
     sdbRelease(pSdb, pSma);
   }
@@ -87,7 +87,6 @@ SSdbRaw *mndStbActionEncode(SStbObj *pStb) {
   SDB_SET_INT64(pRaw, dataPos, pStb->updateTime, _OVER)
   SDB_SET_INT64(pRaw, dataPos, pStb->uid, _OVER)
   SDB_SET_INT64(pRaw, dataPos, pStb->dbUid, _OVER)
-  SDB_SET_INT32(pRaw, dataPos, pStb->version, _OVER)
   SDB_SET_INT32(pRaw, dataPos, pStb->tagVer, _OVER)
   SDB_SET_INT32(pRaw, dataPos, pStb->colVer, _OVER)
   SDB_SET_INT32(pRaw, dataPos, pStb->nextColId, _OVER)

@@ -167,7 +166,6 @@ static SSdbRow *mndStbActionDecode(SSdbRaw *pRaw) {
   SDB_GET_INT64(pRaw, dataPos, &pStb->updateTime, _OVER)
   SDB_GET_INT64(pRaw, dataPos, &pStb->uid, _OVER)
   SDB_GET_INT64(pRaw, dataPos, &pStb->dbUid, _OVER)
-  SDB_GET_INT32(pRaw, dataPos, &pStb->version, _OVER)
   SDB_GET_INT32(pRaw, dataPos, &pStb->tagVer, _OVER)
   SDB_GET_INT32(pRaw, dataPos, &pStb->colVer, _OVER)
   SDB_GET_INT32(pRaw, dataPos, &pStb->nextColId, _OVER)

@@ -320,7 +318,6 @@ static int32_t mndStbActionUpdate(SSdb *pSdb, SStbObj *pOld, SStbObj *pNew) {
   }

   pOld->updateTime = pNew->updateTime;
-  pOld->version = pNew->version;
   pOld->tagVer = pNew->tagVer;
   pOld->colVer = pNew->colVer;
   pOld->nextColId = pNew->nextColId;

@@ -388,25 +385,26 @@ static void *mndBuildVCreateStbReq(SMnode *pMnode, SVgObj *pVgroup, SStbObj *pSt
   req.name = (char *)tNameGetTableName(&name);
   req.suid = pStb->uid;
   req.rollup = pStb->ast1Len > 0 ? 1 : 0;
-  req.schema.nCols = pStb->numOfColumns;
-  req.schema.sver = pStb->version;
-  req.schema.tagVer = pStb->tagVer;
-  req.schema.colVer = pStb->colVer;
-  req.schema.pSchema = pStb->pColumns;
+  // todo
+  req.schemaRow.nCols = pStb->numOfColumns;
+  req.schemaRow.version = pStb->colVer;
+  req.schemaRow.pSchema = pStb->pColumns;
   req.schemaTag.nCols = pStb->numOfTags;
-  req.schemaTag.sver = 1;
+  req.schemaTag.version = pStb->tagVer;
   req.schemaTag.pSchema = pStb->pTags;

   if (req.rollup) {
     req.pRSmaParam.xFilesFactor = pStb->xFilesFactor;
     req.pRSmaParam.delay = pStb->delay;
     if (pStb->ast1Len > 0) {
-      if (mndConvertRSmaTask(pStb->pAst1, 0, 0, &req.pRSmaParam.qmsg1, &req.pRSmaParam.qmsg1Len) != TSDB_CODE_SUCCESS) {
+      if (mndConvertRSmaTask(pStb->pAst1, pStb->uid, 0, 0, &req.pRSmaParam.qmsg1, &req.pRSmaParam.qmsg1Len) !=
+          TSDB_CODE_SUCCESS) {
         return NULL;
       }
     }
     if (pStb->ast2Len > 0) {
-      if (mndConvertRSmaTask(pStb->pAst2, 0, 0, &req.pRSmaParam.qmsg2, &req.pRSmaParam.qmsg2Len) != TSDB_CODE_SUCCESS) {
+      if (mndConvertRSmaTask(pStb->pAst2, pStb->uid, 0, 0, &req.pRSmaParam.qmsg2, &req.pRSmaParam.qmsg2Len) !=
+          TSDB_CODE_SUCCESS) {
         return NULL;
       }
     }

@@ -664,7 +662,6 @@ int32_t mndBuildStbFromReq(SMnode *pMnode, SStbObj *pDst, SMCreateStbReq *pCreat
   pDst->updateTime = pDst->createdTime;
   pDst->uid = mndGenerateUid(pCreate->name, TSDB_TABLE_FNAME_LEN);
   pDst->dbUid = pDb->uid;
-  pDst->version = 1;
   pDst->tagVer = 1;
   pDst->colVer = 1;
   pDst->nextColId = 1;

@@ -956,7 +953,6 @@ static int32_t mndAddSuperTableTag(const SStbObj *pOld, SStbObj *pNew, SArray *p
     mDebug("stb:%s, start to add tag %s", pNew->name, pSchema->name);
   }

-  pNew->version++;
   pNew->tagVer++;
   return 0;
 }

@@ -975,7 +971,6 @@ static int32_t mndDropSuperTableTag(const SStbObj *pOld, SStbObj *pNew, const ch
   memmove(pNew->pTags + tag, pNew->pTags + tag + 1, sizeof(SSchema) * (pNew->numOfTags - tag - 1));
   pNew->numOfTags--;

-  pNew->version++;
   pNew->tagVer++;
   mDebug("stb:%s, start to drop tag %s", pNew->name, tagName);
   return 0;

@@ -1016,7 +1011,6 @@ static int32_t mndAlterStbTagName(const SStbObj *pOld, SStbObj *pNew, SArray *pF
   SSchema *pSchema = (SSchema *)(pNew->pTags + tag);
   memcpy(pSchema->name, newTagName, TSDB_COL_NAME_LEN);

-  pNew->version++;
   pNew->tagVer++;
   mDebug("stb:%s, start to modify tag %s to %s", pNew->name, oldTagName, newTagName);
   return 0;

@@ -1046,7 +1040,6 @@ static int32_t mndAlterStbTagBytes(const SStbObj *pOld, SStbObj *pNew, const SFi
   }

   pTag->bytes = pField->bytes;
-  pNew->version++;
   pNew->tagVer++;

   mDebug("stb:%s, start to modify tag len %s to %d", pNew->name, pField->name, pField->bytes);

@@ -1086,7 +1079,6 @@ static int32_t mndAddSuperTableColumn(const SStbObj *pOld, SStbObj *pNew, SArray
     mDebug("stb:%s, start to add column %s", pNew->name, pSchema->name);
   }

-  pNew->version++;
   pNew->colVer++;
   return 0;
 }

@@ -1115,7 +1107,6 @@ static int32_t mndDropSuperTableColumn(const SStbObj *pOld, SStbObj *pNew, const
   memmove(pNew->pColumns + col, pNew->pColumns + col + 1, sizeof(SSchema) * (pNew->numOfColumns - col - 1));
   pNew->numOfColumns--;

-  pNew->version++;
   pNew->colVer++;
   mDebug("stb:%s, start to drop col %s", pNew->name, colName);
   return 0;

@@ -1154,7 +1145,6 @@ static int32_t mndAlterStbColumnBytes(const SStbObj *pOld, SStbObj *pNew, const
   }

   pCol->bytes = pField->bytes;
-  pNew->version++;
   pNew->colVer++;

   mDebug("stb:%s, start to modify col len %s to %d", pNew->name, pField->name, pField->bytes);

@@ -1315,9 +1305,10 @@ static int32_t mndProcessMAlterStbReq(SRpcMsg *pReq) {
     goto _OVER;
   }

-  if (alterReq.verInBlock > 0 && alterReq.verInBlock <= pStb->version) {
-    mDebug("stb:%s, already exist, verInBlock:%d smaller than verInStb:%d, alter success", alterReq.name,
-           alterReq.verInBlock, pStb->version);
+  if ((alterReq.tagVer > 0 && alterReq.colVer > 0) &&
+      (alterReq.tagVer <= pStb->tagVer || alterReq.colVer <= pStb->colVer)) {
+    mDebug("stb:%s, already exist, tagVer:%d colVer:%d smaller than in mnode, tagVer:%d colVer:%d, alter success",
+           alterReq.name, alterReq.tagVer, alterReq.colVer, pStb->tagVer, pStb->colVer);
     code = 0;
     goto _OVER;
   }

@@ -1511,7 +1502,8 @@ static int32_t mndBuildStbSchemaImp(SDbObj *pDb, SStbObj *pStb, const char *tbNa
   pRsp->numOfColumns = pStb->numOfColumns;
   pRsp->precision = pDb->cfg.precision;
   pRsp->tableType = TSDB_SUPER_TABLE;
-  pRsp->sversion = pStb->version;
+  pRsp->sversion = pStb->colVer;
+  pRsp->tversion = pStb->tagVer;
   pRsp->suid = pStb->uid;
   pRsp->tuid = pStb->uid;

@@ -1638,7 +1630,7 @@ int32_t mndValidateStbInfo(SMnode *pMnode, SSTableMetaVersion *pStbVersions, int
       metaRsp.suid = pStbVersion->suid;
     }

-    if (pStbVersion->sversion != metaRsp.sversion) {
+    if (pStbVersion->sversion != metaRsp.sversion || pStbVersion->tversion != metaRsp.tversion) {
       taosArrayPush(batchMetaRsp.pArray, &metaRsp);
     } else {
       tFreeSTableMetaRsp(&metaRsp);
@@ -456,7 +456,7 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
     goto CREATE_STREAM_OVER;
   }

-  pDb = mndAcquireDbByStream(pMnode, createStreamReq.name);
+  pDb = mndAcquireDb(pMnode, createStreamReq.sourceDB);
   if (pDb == NULL) {
     terrno = TSDB_CODE_MND_DB_NOT_SELECTED;
     goto CREATE_STREAM_OVER;
@@ -31,7 +31,8 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
   SMnode  *pMnode = pFsm->data;
   SSdbRaw *pRaw = pMsg->pCont;

-  mTrace("raw:%p, apply to sdb, ver:%" PRId64 " role:%s", pRaw, cbMeta.index, syncStr(cbMeta.state));
+  mTrace("raw:%p, apply to sdb, ver:%" PRId64 " term:%" PRId64 " role:%s", pRaw, cbMeta.index, cbMeta.term,
+         syncStr(cbMeta.state));
   sdbWriteWithoutFree(pMnode->pSdb, pRaw);
   sdbSetApplyIndex(pMnode->pSdb, cbMeta.index);
   sdbSetApplyTerm(pMnode->pSdb, cbMeta.term);

@@ -50,6 +51,7 @@ int32_t mndSyncGetSnapshot(struct SSyncFSM *pFsm, SSnapshot *pSnapshot) {
 void mndRestoreFinish(struct SSyncFSM *pFsm) {
   SMnode *pMnode = pFsm->data;
+  if (!pMnode->deploy) {
   mInfo("mnode sync restore finished");
   mndTransPullup(pMnode);
   pMnode->syncMgmt.restored = true;
   }

@@ -125,6 +127,7 @@ int32_t mndInitSync(SMnode *pMnode) {
   snprintf(syncInfo.path, sizeof(syncInfo.path), "%s%ssync", pMnode->path, TD_DIRSEP);
   syncInfo.pWal = pMgmt->pWal;
   syncInfo.pFsm = mndSyncMakeFsm(pMnode);
+  syncInfo.isStandBy = pMgmt->standby;

   SSyncCfg *pCfg = &syncInfo.syncCfg;
   pCfg->replicaNum = pMnode->replica;

@@ -191,11 +194,17 @@
 void mndSyncStart(SMnode *pMnode) {
   SSyncMgmt *pMgmt = &pMnode->syncMgmt;
   syncSetMsgCb(pMgmt->sync, &pMnode->msgCb);

+  syncStart(pMgmt->sync);
+
+#if 0
   if (pMgmt->standby) {
     syncStartStandBy(pMgmt->sync);
   } else {
     syncStart(pMgmt->sync);
   }
+#endif

   mDebug("sync:%" PRId64 " is started", pMgmt->sync);
 }
@@ -217,7 +217,7 @@ SSdbRow *mndTopicActionDecode(SSdbRaw *pRaw) {
     }
   } else {
     pTopic->schema.nCols = 0;
-    pTopic->schema.sver = 0;
+    pTopic->schema.version = 0;
     pTopic->schema.pSchema = NULL;
   }
@@ -829,7 +829,7 @@ static void mndTransSendRpcRsp(SMnode *pMnode, STrans *pTrans) {
       sendRsp = true;
     }
   } else {
-    if (pTrans->stage == TRN_STAGE_REDO_ACTION && pTrans->failedTimes > 0) {
+    if (pTrans->stage == TRN_STAGE_REDO_ACTION && pTrans->failedTimes > 6) {
       if (code == 0) code = TSDB_CODE_MND_TRANS_UNKNOW_ERROR;
       sendRsp = true;
     }
@@ -80,6 +80,7 @@ SSdbRaw *mndVgroupActionEncode(SVgObj *pVgroup) {
   SDB_SET_INT32(pRaw, dataPos, pVgroup->hashEnd, _OVER)
   SDB_SET_BINARY(pRaw, dataPos, pVgroup->dbName, TSDB_DB_FNAME_LEN, _OVER)
   SDB_SET_INT64(pRaw, dataPos, pVgroup->dbUid, _OVER)
+  SDB_SET_INT8(pRaw, dataPos, pVgroup->isTsma, _OVER)
   SDB_SET_INT8(pRaw, dataPos, pVgroup->replica, _OVER)
   for (int8_t i = 0; i < pVgroup->replica; ++i) {
     SVnodeGid *pVgid = &pVgroup->vnodeGid[i];

@@ -127,6 +128,7 @@ SSdbRow *mndVgroupActionDecode(SSdbRaw *pRaw) {
   SDB_GET_INT32(pRaw, dataPos, &pVgroup->hashEnd, _OVER)
   SDB_GET_BINARY(pRaw, dataPos, pVgroup->dbName, TSDB_DB_FNAME_LEN, _OVER)
   SDB_GET_INT64(pRaw, dataPos, &pVgroup->dbUid, _OVER)
+  SDB_GET_INT8(pRaw, dataPos, &pVgroup->isTsma, _OVER)
   SDB_GET_INT8(pRaw, dataPos, &pVgroup->replica, _OVER)
   for (int8_t i = 0; i < pVgroup->replica; ++i) {
     SVnodeGid *pVgid = &pVgroup->vnodeGid[i];

@@ -167,6 +169,7 @@ static int32_t mndVgroupActionUpdate(SSdb *pSdb, SVgObj *pOld, SVgObj *pNew) {
   pOld->hashBegin = pNew->hashBegin;
   pOld->hashEnd = pNew->hashEnd;
   pOld->replica = pNew->replica;
+  pOld->isTsma = pNew->isTsma;
   memcpy(pOld->vnodeGid, pNew->vnodeGid, TSDB_MAX_REPLICA * sizeof(SVnodeGid));
   return 0;
 }

@@ -426,6 +429,25 @@ static int32_t mndGetAvailableDnode(SMnode *pMnode, SVgObj *pVgroup, SArray *pAr
   return 0;
 }

+int32_t mndAllocSmaVgroup(SMnode *pMnode, SDbObj *pDb, SVgObj *pVgroup) {
+  SArray *pArray = mndBuildDnodesArray(pMnode);
+  if (pArray == NULL) return -1;
+
+  pVgroup->vgId = sdbGetMaxId(pMnode->pSdb, SDB_VGROUP);
+  pVgroup->isTsma = 1;
+  pVgroup->createdTime = taosGetTimestampMs();
+  pVgroup->updateTime = pVgroup->createdTime;
+  pVgroup->version = 1;
+  memcpy(pVgroup->dbName, pDb->name, TSDB_DB_FNAME_LEN);
+  pVgroup->dbUid = pDb->uid;
+  pVgroup->replica = 1;
+
+  if (mndGetAvailableDnode(pMnode, pVgroup, pArray) != 0) return -1;
+
+  mInfo("db:%s, sma vgId:%d is alloced", pDb->name, pVgroup->vgId);
+  return 0;
+}
+
 int32_t mndAllocVgroup(SMnode *pMnode, SDbObj *pDb, SVgObj **ppVgroups) {
   int32_t code = -1;
   SArray *pArray = NULL;

@@ -702,9 +724,12 @@ static int32_t mndRetrieveVgroups(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *p
       pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
       colDataAppendNULL(pColInfo, numOfRows);

-      pColInfo = taosArrayGet(pBlock->pDataBlock, cols);
+      pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
       colDataAppendNULL(pColInfo, numOfRows);

+      pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
+      colDataAppend(pColInfo, numOfRows, (const char *)&pVgroup->isTsma, false);
+
       numOfRows++;
       sdbRelease(pSdb, pVgroup);
     }
@@ -277,7 +277,8 @@ void* MndTestStb::BuildAlterStbUpdateColumnBytesReq(const char* stbname, const c
   req.numOfFields = 1;
   req.pFields = taosArrayInit(1, sizeof(SField));
   req.alterType = TSDB_ALTER_TABLE_UPDATE_COLUMN_BYTES;
-  req.verInBlock = verInBlock;
+  req.tagVer = verInBlock;
+  req.colVer = verInBlock;

   SField field = {0};
   field.bytes = bytes;

@@ -343,7 +344,7 @@ TEST_F(MndTestStb, 01_Create_Show_Meta_Drop_Restart_Stb) {
     EXPECT_EQ(metaRsp.precision, TSDB_TIME_PRECISION_MILLI);
     EXPECT_EQ(metaRsp.tableType, TSDB_SUPER_TABLE);
     EXPECT_EQ(metaRsp.sversion, 1);
-    EXPECT_EQ(metaRsp.tversion, 0);
+    EXPECT_EQ(metaRsp.tversion, 1);
     EXPECT_GT(metaRsp.suid, 0);
     EXPECT_GT(metaRsp.tuid, 0);
     EXPECT_EQ(metaRsp.vgId, 0);
@@ -103,8 +103,8 @@ void sndProcessUMsg(SSnode *pSnode, SRpcMsg *pMsg) {
     tDecoderClear(&decoder);

     sndMetaDeployTask(pSnode->pMeta, pTask);
-  } else if (pMsg->msgType == TDMT_SND_TASK_EXEC) {
-    sndProcessTaskExecReq(pSnode, pMsg);
+    /*} else if (pMsg->msgType == TDMT_SND_TASK_EXEC) {*/
+    /*sndProcessTaskExecReq(pSnode, pMsg);*/
   } else {
     ASSERT(0);
   }

@@ -112,9 +112,9 @@ void sndProcessUMsg(SSnode *pSnode, SRpcMsg *pMsg) {

 void sndProcessSMsg(SSnode *pSnode, SRpcMsg *pMsg) {
   // operator exec
-  if (pMsg->msgType == TDMT_SND_TASK_EXEC) {
-    sndProcessTaskExecReq(pSnode, pMsg);
-  } else {
-    ASSERT(0);
-  }
+  /*if (pMsg->msgType == TDMT_SND_TASK_EXEC) {*/
+  /*sndProcessTaskExecReq(pSnode, pMsg);*/
+  /*} else {*/
+  ASSERT(0);
+  /*}*/
 }
@@ -99,24 +99,19 @@ typedef void *tsdbReaderT;
 #define BLOCK_LOAD_TABLE_SEQ_ORDER 2
 #define BLOCK_LOAD_TABLE_RR_ORDER  3

-tsdbReaderT *tsdbQueryTables(SVnode *pVnode, SQueryTableDataCond *pCond, STableGroupInfo *tableInfoGroup, uint64_t qId,
+tsdbReaderT *tsdbQueryTables(SVnode *pVnode, SQueryTableDataCond *pCond, STableListInfo *tableInfoGroup, uint64_t qId,
                              uint64_t taskId);
-tsdbReaderT tsdbQueryCacheLast(SVnode *pVnode, SQueryTableDataCond *pCond, STableGroupInfo *groupList, uint64_t qId,
+tsdbReaderT tsdbQueryCacheLast(SVnode *pVnode, SQueryTableDataCond *pCond, STableListInfo *groupList, uint64_t qId,
                                void *pMemRef);
 int32_t tsdbGetFileBlocksDistInfo(tsdbReaderT *pReader, STableBlockDistInfo *pTableBlockInfo);
 bool    isTsdbCacheLastRow(tsdbReaderT *pReader);
-int32_t tsdbQuerySTableByTagCond(void *pMeta, uint64_t uid, TSKEY skey, const char *pTagCond, size_t len,
-                                 int16_t tagNameRelType, const char *tbnameCond, STableGroupInfo *pGroupInfo,
-                                 SColIndex *pColIndex, int32_t numOfCols, uint64_t reqId, uint64_t taskId);
+int32_t tsdbGetAllTableList(SMeta* pMeta, uint64_t uid, SArray* list);
 int64_t tsdbGetNumOfRowsInMemTable(tsdbReaderT *pHandle);
 bool    tsdbNextDataBlock(tsdbReaderT pTsdbReadHandle);
 void    tsdbRetrieveDataBlockInfo(tsdbReaderT *pTsdbReadHandle, SDataBlockInfo *pBlockInfo);
 int32_t tsdbRetrieveDataBlockStatisInfo(tsdbReaderT *pTsdbReadHandle, SColumnDataAgg ***pBlockStatis, bool *allHave);
 SArray *tsdbRetrieveDataBlock(tsdbReaderT *pTsdbReadHandle, SArray *pColumnIdList);
 void    tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond *pCond);
-void    tsdbDestroyTableGroup(STableGroupInfo *pGroupList);
-int32_t tsdbGetOneTableGroup(void *pMeta, uint64_t uid, TSKEY startKey, STableGroupInfo *pGroupInfo);
-int32_t tsdbGetTableGroupFromIdList(SVnode *pVnode, SArray *pTableIdList, STableGroupInfo *pGroupInfo);
 void    tsdbCleanupReadHandle(tsdbReaderT queryHandle);

 // tq

@@ -182,7 +177,7 @@ struct SMetaEntry {
   char *name;
   union {
     struct {
-      SSchemaWrapper schema;
+      SSchemaWrapper schemaRow;
       SSchemaWrapper schemaTag;
     } stbEntry;
     struct {

@@ -195,7 +190,7 @@ struct SMetaEntry {
       int64_t        ctime;
       int32_t        ttlDays;
       int32_t        ncid;  // next column id
-      SSchemaWrapper schema;
+      SSchemaWrapper schemaRow;
     } ntbEntry;
     struct {
       STSma *tsma;

@@ -40,8 +40,8 @@ typedef struct STable STable;

 int  tsdbMemTableCreate(STsdb *pTsdb, STsdbMemTable **ppMemTable);
 void tsdbMemTableDestroy(STsdb *pTsdb, STsdbMemTable *pMemTable);
-int  tsdbLoadDataFromCache(STable *pTable, SSkipListIterator *pIter, TSKEY maxKey, int maxRowsToRead, SDataCols *pCols,
-                           TKEY *filterKeys, int nFilterKeys, bool keepDup, SMergeInfo *pMergeInfo);
+int  tsdbLoadDataFromCache(STsdb *pTsdb, STable *pTable, SSkipListIterator *pIter, TSKEY maxKey, int maxRowsToRead,
+                           SDataCols *pCols, TKEY *filterKeys, int nFilterKeys, bool keepDup, SMergeInfo *pMergeInfo);

 // tsdbCommit ================

@@ -179,7 +179,14 @@ struct STsdbFS {
 int tsdbLockRepo(STsdb *pTsdb);
 int tsdbUnlockRepo(STsdb *pTsdb);

-static FORCE_INLINE STSchema *tsdbGetTableSchemaImpl(STable *pTable, bool lock, bool copy, int32_t version) {
+static FORCE_INLINE STSchema *tsdbGetTableSchemaImpl(STsdb *pTsdb, STable *pTable, bool lock, bool copy,
+                                                     int32_t version) {
+  if ((version != -1) && (schemaVersion(pTable->pSchema) != version)) {
+    taosMemoryFreeClear(pTable->pSchema);
+    pTable->pSchema = metaGetTbTSchema(REPO_META(pTsdb), pTable->uid, version);
+  }
+
   return pTable->pSchema;
 }

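The reworked tsdbGetTableSchemaImpl above drops a stale cached schema and refetches it from the meta store whenever a specific version is requested. A minimal, self-contained sketch of that refresh-on-version-mismatch pattern follows; every type and the fetch routine here are hypothetical stand-ins, not TDengine APIs:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the engine's schema types. */
typedef struct {
  int version; /* schema version */
  int nCols;   /* number of columns */
} Schema;

typedef struct {
  long    uid;     /* table id */
  Schema *pSchema; /* cached schema, possibly stale */
} Table;

/* Pretend meta-store lookup: builds the schema for a given version. */
static Schema *fetchSchemaFromMeta(long uid, int version) {
  Schema *s = malloc(sizeof(Schema));
  if (s == NULL) return NULL;
  s->version = version;
  s->nCols = 2 + version; /* fake payload */
  return s;
}

/* Return the cached schema, refreshing it when the caller asks for a
 * specific version that differs from what is cached (version == -1
 * means "whatever is cached"). */
static Schema *getTableSchema(Table *t, int version) {
  if (version != -1 && t->pSchema != NULL && t->pSchema->version != version) {
    free(t->pSchema); /* drop the stale copy */
    t->pSchema = fetchSchemaFromMeta(t->uid, version);
  }
  if (t->pSchema == NULL) {
    t->pSchema = fetchSchemaFromMeta(t->uid, version == -1 ? 1 : version);
  }
  return t->pSchema;
}

int main(void) {
  Table t = {.uid = 42, .pSchema = NULL};
  printf("v1 has %d cols\n", getTableSchema(&t, 1)->nCols);
  printf("v3 has %d cols\n", getTableSchema(&t, 3)->nCols); /* triggers refetch */
  free(t.pSchema);
  return 0;
}
```
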
@@ -114,11 +114,10 @@ int tsdbCommit(STsdb* pTsdb);
 int          tsdbScanAndConvertSubmitMsg(STsdb* pTsdb, SSubmitReq* pMsg);
 int          tsdbInsertData(STsdb* pTsdb, int64_t version, SSubmitReq* pMsg, SSubmitRsp* pRsp);
 int          tsdbInsertTableData(STsdb* pTsdb, SSubmitMsgIter* pMsgIter, SSubmitBlk* pBlock, SSubmitBlkRsp* pRsp);
-tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableGroupInfo* groupList, uint64_t qId,
+tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableListInfo* tableList, uint64_t qId,
                              uint64_t taskId);
-tsdbReaderT  tsdbQueryCacheLastT(STsdb* tsdb, SQueryTableDataCond* pCond, STableGroupInfo* groupList, uint64_t qId,
+tsdbReaderT  tsdbQueryCacheLastT(STsdb* tsdb, SQueryTableDataCond* pCond, STableListInfo* tableList, uint64_t qId,
                                  void* pMemRef);
-int32_t      tsdbGetTableGroupFromIdListT(STsdb* tsdb, SArray* pTableIdList, STableGroupInfo* pGroupInfo);
 int32_t      tsdbSnapshotReaderOpen(STsdb* pTsdb, STsdbSnapshotReader** ppReader, int64_t sver, int64_t ever);
 int32_t      tsdbSnapshotReaderClose(STsdbSnapshotReader* pReader);
 int32_t      tsdbSnapshotRead(STsdbSnapshotReader* pReader, void** ppData, uint32_t* nData);

@@ -24,7 +24,7 @@ int metaEncodeEntry(SEncoder *pCoder, const SMetaEntry *pME) {
   if (tEncodeCStr(pCoder, pME->name) < 0) return -1;

   if (pME->type == TSDB_SUPER_TABLE) {
-    if (tEncodeSSchemaWrapper(pCoder, &pME->stbEntry.schema) < 0) return -1;
+    if (tEncodeSSchemaWrapper(pCoder, &pME->stbEntry.schemaRow) < 0) return -1;
     if (tEncodeSSchemaWrapper(pCoder, &pME->stbEntry.schemaTag) < 0) return -1;
   } else if (pME->type == TSDB_CHILD_TABLE) {
     if (tEncodeI64(pCoder, pME->ctbEntry.ctime) < 0) return -1;

@@ -35,7 +35,7 @@ int metaEncodeEntry(SEncoder *pCoder, const SMetaEntry *pME) {
     if (tEncodeI64(pCoder, pME->ntbEntry.ctime) < 0) return -1;
     if (tEncodeI32(pCoder, pME->ntbEntry.ttlDays) < 0) return -1;
     if (tEncodeI32v(pCoder, pME->ntbEntry.ncid) < 0) return -1;
-    if (tEncodeSSchemaWrapper(pCoder, &pME->ntbEntry.schema) < 0) return -1;
+    if (tEncodeSSchemaWrapper(pCoder, &pME->ntbEntry.schemaRow) < 0) return -1;
   } else if (pME->type == TSDB_TSMA_TABLE) {
     if (tEncodeTSma(pCoder, pME->smaEntry.tsma) < 0) return -1;
   } else {

@@ -56,7 +56,7 @@ int metaDecodeEntry(SDecoder *pCoder, SMetaEntry *pME) {
   if (tDecodeCStr(pCoder, &pME->name) < 0) return -1;

   if (pME->type == TSDB_SUPER_TABLE) {
-    if (tDecodeSSchemaWrapperEx(pCoder, &pME->stbEntry.schema) < 0) return -1;
+    if (tDecodeSSchemaWrapperEx(pCoder, &pME->stbEntry.schemaRow) < 0) return -1;
     if (tDecodeSSchemaWrapperEx(pCoder, &pME->stbEntry.schemaTag) < 0) return -1;
   } else if (pME->type == TSDB_CHILD_TABLE) {
     if (tDecodeI64(pCoder, &pME->ctbEntry.ctime) < 0) return -1;

@@ -67,7 +67,7 @@ int metaDecodeEntry(SDecoder *pCoder, SMetaEntry *pME) {
     if (tDecodeI64(pCoder, &pME->ntbEntry.ctime) < 0) return -1;
     if (tDecodeI32(pCoder, &pME->ntbEntry.ttlDays) < 0) return -1;
     if (tDecodeI32v(pCoder, &pME->ntbEntry.ncid) < 0) return -1;
-    if (tDecodeSSchemaWrapperEx(pCoder, &pME->ntbEntry.schema) < 0) return -1;
+    if (tDecodeSSchemaWrapperEx(pCoder, &pME->ntbEntry.schemaRow) < 0) return -1;
   } else if (pME->type == TSDB_TSMA_TABLE) {
     pME->smaEntry.tsma = tDecoderMalloc(pCoder, sizeof(STSma));
     if (!pME->smaEntry.tsma) {

@@ -56,7 +56,7 @@ int metaCreateSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
   me.type = TSDB_SUPER_TABLE;
   me.uid = pReq->suid;
   me.name = pReq->name;
-  me.stbEntry.schema = pReq->schema;
+  me.stbEntry.schemaRow = pReq->schemaRow;
   me.stbEntry.schemaTag = pReq->schemaTag;

   if (metaHandleEntry(pMeta, &me) < 0) goto _err;

@@ -182,15 +182,13 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
   nStbEntry.type = TSDB_SUPER_TABLE;
   nStbEntry.uid = pReq->suid;
   nStbEntry.name = pReq->name;
-  nStbEntry.stbEntry.schema = pReq->schema;
+  nStbEntry.stbEntry.schemaRow = pReq->schemaRow;
   nStbEntry.stbEntry.schemaTag = pReq->schemaTag;

   metaWLock(pMeta);
   // compare two entry
-  if (oStbEntry.stbEntry.schema.sver != pReq->schema.sver) {
-    if (oStbEntry.stbEntry.schema.nCols != pReq->schema.nCols) {
-      metaSaveToSkmDb(pMeta, &nStbEntry);
-    }
+  if (oStbEntry.stbEntry.schemaRow.version != pReq->schemaRow.version) {
+    metaSaveToSkmDb(pMeta, &nStbEntry);
   }

   // if (oStbEntry.stbEntry.schemaTag.sver != pReq->schemaTag.sver) {

@@ -247,8 +245,8 @@ int metaCreateTable(SMeta *pMeta, int64_t version, SVCreateTbReq *pReq) {
   } else {
     me.ntbEntry.ctime = pReq->ctime;
     me.ntbEntry.ttlDays = pReq->ttl;
-    me.ntbEntry.schema = pReq->ntb.schema;
-    me.ntbEntry.ncid = me.ntbEntry.schema.pSchema[me.ntbEntry.schema.nCols - 1].colId + 1;
+    me.ntbEntry.schemaRow = pReq->ntb.schemaRow;
+    me.ntbEntry.ncid = me.ntbEntry.schemaRow.pSchema[me.ntbEntry.schemaRow.nCols - 1].colId + 1;
   }

   if (metaHandleEntry(pMeta, &me) < 0) goto _err;

@@ -381,7 +379,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
   }

   // search the column to add/drop/update
-  pSchema = &entry.ntbEntry.schema;
+  pSchema = &entry.ntbEntry.schemaRow;
   int32_t iCol = 0;
   for (;;) {
     pColumn = NULL;

@@ -402,16 +400,16 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
         terrno = TSDB_CODE_VND_COL_ALREADY_EXISTS;
         goto _err;
       }
-      pSchema->sver++;
+      pSchema->version++;
       pSchema->nCols++;
       pNewSchema = taosMemoryMalloc(sizeof(SSchema) * pSchema->nCols);
       memcpy(pNewSchema, pSchema->pSchema, sizeof(SSchema) * (pSchema->nCols - 1));
       pSchema->pSchema = pNewSchema;
-      pSchema->pSchema[entry.ntbEntry.schema.nCols - 1].bytes = pAlterTbReq->bytes;
-      pSchema->pSchema[entry.ntbEntry.schema.nCols - 1].type = pAlterTbReq->type;
-      pSchema->pSchema[entry.ntbEntry.schema.nCols - 1].flags = pAlterTbReq->flags;
-      pSchema->pSchema[entry.ntbEntry.schema.nCols - 1].colId = entry.ntbEntry.ncid++;
-      strcpy(pSchema->pSchema[entry.ntbEntry.schema.nCols - 1].name, pAlterTbReq->colName);
+      pSchema->pSchema[entry.ntbEntry.schemaRow.nCols - 1].bytes = pAlterTbReq->bytes;
+      pSchema->pSchema[entry.ntbEntry.schemaRow.nCols - 1].type = pAlterTbReq->type;
+      pSchema->pSchema[entry.ntbEntry.schemaRow.nCols - 1].flags = pAlterTbReq->flags;
+      pSchema->pSchema[entry.ntbEntry.schemaRow.nCols - 1].colId = entry.ntbEntry.ncid++;
+      strcpy(pSchema->pSchema[entry.ntbEntry.schemaRow.nCols - 1].name, pAlterTbReq->colName);
       break;
     case TSDB_ALTER_TABLE_DROP_COLUMN:
       if (pColumn == NULL) {

@@ -422,7 +420,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
         terrno = TSDB_CODE_VND_INVALID_TABLE_ACTION;
         goto _err;
       }
-      pSchema->sver++;
+      pSchema->version++;
       tlen = (pSchema->nCols - iCol - 1) * sizeof(SSchema);
       if (tlen) {
         memmove(pColumn, pColumn + 1, tlen);

@@ -438,7 +436,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
         terrno = TSDB_CODE_VND_INVALID_TABLE_ACTION;
         goto _err;
       }
-      pSchema->sver++;
+      pSchema->version++;
       pColumn->bytes = pAlterTbReq->colModBytes;
       break;
     case TSDB_ALTER_TABLE_UPDATE_COLUMN_NAME:

@@ -446,7 +444,7 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
         terrno = TSDB_CODE_VND_TABLE_COL_NOT_EXISTS;
         goto _err;
       }
-      pSchema->sver++;
+      pSchema->version++;
       strcpy(pColumn->name, pAlterTbReq->colNewName);
       break;
   }

@@ -813,15 +811,15 @@ static int metaSaveToSkmDb(SMeta *pMeta, const SMetaEntry *pME) {
   const SSchemaWrapper *pSW;

   if (pME->type == TSDB_SUPER_TABLE) {
-    pSW = &pME->stbEntry.schema;
+    pSW = &pME->stbEntry.schemaRow;
   } else if (pME->type == TSDB_NORMAL_TABLE) {
-    pSW = &pME->ntbEntry.schema;
+    pSW = &pME->ntbEntry.schemaRow;
   } else {
     ASSERT(0);
   }

   skmDbKey.uid = pME->uid;
-  skmDbKey.sver = pSW->sver;
+  skmDbKey.sver = pSW->version;

   // encode schema
   int32_t ret = 0;

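The hunks in this file consistently rename the row schema to schemaRow and its sver field to version. The invariant these changes rely on is that every schema-changing ALTER bumps the version, so the (table uid, schema version) pair used as the schema-DB key above stays unique. A small illustrative sketch of that invariant, using made-up miniature types rather than the engine's:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical miniature of a versioned row schema. */
#define MAX_COLS 8

typedef struct {
  int  version; /* bumped on every schema-changing ALTER */
  int  nCols;
  char names[MAX_COLS][16];
} SchemaRow;

/* Add a column: the version bump is what lets (table uid, version)
 * act as a unique key when the schema is persisted. */
static int addColumn(SchemaRow *s, const char *name) {
  if (s->nCols >= MAX_COLS) return -1;
  s->version++;
  strncpy(s->names[s->nCols++], name, 15);
  return 0;
}

static int renameColumn(SchemaRow *s, int idx, const char *newName) {
  if (idx < 0 || idx >= s->nCols) return -1;
  s->version++; /* renames change the schema too */
  strncpy(s->names[idx], newName, 15);
  return 0;
}

int main(void) {
  SchemaRow s = {.version = 1, .nCols = 1};
  strncpy(s.names[0], "ts", 15);
  addColumn(&s, "temperature");
  renameColumn(&s, 1, "temp");
  printf("schema version %d with %d cols\n", s.version, s.nCols); /* version 3 */
  return 0;
}
```
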
@@ -748,7 +748,8 @@ void tqTableSink(SStreamTask* pTask, void* vnode, int64_t ver, void* data) {
   SVnode* pVnode = (SVnode*)vnode;

   ASSERT(pTask->tbSink.pTSchema);
-  SSubmitReq* pReq = tdBlockToSubmit(pRes, pTask->tbSink.pTSchema, true, pTask->tbSink.stbUid, pVnode->config.vgId);
+  SSubmitReq* pReq = tdBlockToSubmit(pRes, pTask->tbSink.pTSchema, true, pTask->tbSink.stbUid,
+                                     pTask->tbSink.stbFullName, pVnode->config.vgId);
   /*tPrintFixedSchemaSubmitReq(pReq, pTask->tbSink.pTSchema);*/
   // build write msg
   SRpcMsg msg = {

@@ -828,27 +829,15 @@ FAIL:
 }

 int32_t tqProcessStreamTrigger(STQ* pTq, SSubmitReq* pReq) {
-  void* pIter = NULL;
-  bool  failed = false;
+  void*              pIter = NULL;
+  bool               failed = false;
+  SStreamDataSubmit* pSubmit = NULL;

-  SStreamDataSubmit* pSubmit = taosAllocateQitem(sizeof(SStreamDataSubmit), DEF_QITEM);
+  pSubmit = streamDataSubmitNew(pReq);
   if (pSubmit == NULL) {
     failed = true;
-    goto SET_TASK_FAIL;
   }
-  pSubmit->dataRef = taosMemoryMalloc(sizeof(int32_t));
-  if (pSubmit->dataRef == NULL) {
-    failed = true;
-    goto SET_TASK_FAIL;
-  }
-
-  pSubmit->type = STREAM_INPUT__DATA_SUBMIT;
-  /*pSubmit->sourceVer = ver;*/
-  /*pSubmit->sourceVg = pTq->pVnode->config.vgId;*/
-  pSubmit->data = pReq;
-  *pSubmit->dataRef = 1;

-SET_TASK_FAIL:
   while (1) {
     pIter = taosHashIterate(pTq->pStreamTasks, pIter);
     if (pIter == NULL) break;

@@ -863,7 +852,9 @@ SET_TASK_FAIL:
     }

     streamDataSubmitRefInc(pSubmit);
-    taosWriteQitem(pTask->inputQ, pSubmit);
+    SStreamDataSubmit* pSubmitClone = taosAllocateQitem(sizeof(SStreamDataSubmit), DEF_QITEM);
+    memcpy(pSubmitClone, pSubmit, sizeof(SStreamDataSubmit));
+    taosWriteQitem(pTask->inputQ, pSubmitClone);

     int8_t execStatus = atomic_load_8(&pTask->status);
     if (execStatus == TASK_STATUS__IDLE || execStatus == TASK_STATUS__CLOSING) {

@@ -886,18 +877,12 @@ SET_TASK_FAIL:
     }
   }

-  if (!failed) {
+  if (pSubmit) {
     streamDataSubmitRefDec(pSubmit);
-    return 0;
-  } else {
-    if (pSubmit) {
-      if (pSubmit->dataRef) {
-        taosMemoryFree(pSubmit->dataRef);
-      }
-      taosFreeQitem(pSubmit);
-    }
-    return -1;
+    taosFreeQitem(pSubmit);
   }
+
+  return failed ? -1 : 0;
 }

 int32_t tqProcessTaskRunReq(STQ* pTq, SRpcMsg* pMsg) {

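tqProcessStreamTrigger now builds one reference-counted submit and hands each stream task a shallow clone that shares the payload and the counter; the last owner to release it frees the shared parts, and the trigger path drops its own reference at the end. A toy version of that scheme, with hypothetical names rather than the engine's types:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical miniature of a reference-counted submit payload that is
 * fanned out to several task input queues. */
typedef struct {
  int  *dataRef; /* shared reference count */
  char *data;    /* shared payload */
} Submit;

static Submit *submitNew(char *payload) {
  Submit *s = calloc(1, sizeof(Submit));
  if (s == NULL) return NULL;
  s->dataRef = malloc(sizeof(int));
  if (s->dataRef == NULL) {
    free(s);
    return NULL;
  }
  *s->dataRef = 1; /* the creator holds the first reference */
  s->data = payload;
  return s;
}

static void submitRefInc(Submit *s) { (*s->dataRef)++; }

/* The last owner frees the shared parts. */
static void submitRefDec(Submit *s) {
  if (--(*s->dataRef) == 0) {
    free(s->data);
    free(s->dataRef);
  }
}

int main(void) {
  Submit *s = submitNew(malloc(64));
  if (s == NULL) return 1;

  /* Fan out to two "tasks": shallow clones share data and dataRef. */
  Submit clone1 = *s, clone2 = *s;
  submitRefInc(&clone1);
  submitRefInc(&clone2);
  printf("owners: %d\n", *s->dataRef); /* 3 */

  submitRefDec(&clone1); /* task 1 done */
  submitRefDec(&clone2); /* task 2 done */
  submitRefDec(s);       /* trigger path drops its own reference */
  free(s);
  return 0;
}
```
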
@@ -84,7 +84,7 @@ static int tsdbMergeBlockData(SCommitH *pCommith, SCommitIter *pIter, SDataCols
 static void tsdbResetCommitTable(SCommitH *pCommith);
 static void tsdbCloseCommitFile(SCommitH *pCommith, bool hasError);
 static bool tsdbCanAddSubBlock(SCommitH *pCommith, SBlock *pBlock, SMergeInfo *pInfo);
-static void tsdbLoadAndMergeFromCache(SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter, SDataCols *pTarget,
+static void tsdbLoadAndMergeFromCache(STsdb *pTsdb, SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter, SDataCols *pTarget,
                                       TSKEY maxKey, int maxRows, int8_t update);
 int  tsdbWriteBlockIdx(SDFile *pHeadf, SArray *pIdxA, void **ppBuf);

@@ -301,7 +301,8 @@ static void tsdbSeekCommitIter(SCommitH *pCommith, TSKEY key) {
     SCommitIter *pIter = pCommith->iters + i;
     if (pIter->pTable == NULL || pIter->pIter == NULL) continue;

-    tsdbLoadDataFromCache(pIter->pTable, pIter->pIter, key - 1, INT32_MAX, NULL, NULL, 0, true, NULL);
+    tsdbLoadDataFromCache(TSDB_COMMIT_REPO(pCommith), pIter->pTable, pIter->pIter, key - 1, INT32_MAX, NULL, NULL, 0,
+                          true, NULL);
   }
 }

@@ -947,7 +948,7 @@ static int tsdbMoveBlkIdx(SCommitH *pCommith, SBlockIdx *pIdx) {
 }

 static int tsdbSetCommitTable(SCommitH *pCommith, STable *pTable) {
-  STSchema *pSchema = tsdbGetTableSchemaImpl(pTable, false, false, -1);
+  STSchema *pSchema = tsdbGetTableSchemaImpl(TSDB_COMMIT_REPO(pCommith), pTable, false, false, -1);

   pCommith->pTable = pTable;

@@ -1254,8 +1255,8 @@ static int tsdbCommitMemData(SCommitH *pCommith, SCommitIter *pIter, TSKEY keyLi
   SBlock block;

   while (true) {
-    tsdbLoadDataFromCache(pIter->pTable, pIter->pIter, keyLimit, defaultRows, pCommith->pDataCols, NULL, 0,
-                          pCfg->update, &mInfo);
+    tsdbLoadDataFromCache(TSDB_COMMIT_REPO(pCommith), pIter->pTable, pIter->pIter, keyLimit, defaultRows,
+                          pCommith->pDataCols, NULL, 0, pCfg->update, &mInfo);

     if (pCommith->pDataCols->numOfRows <= 0) break;

@@ -1298,8 +1299,9 @@ static int tsdbMergeMemData(SCommitH *pCommith, SCommitIter *pIter, int bidx) {
   SSkipListIterator titer = *(pIter->pIter);
   if (tsdbLoadBlockDataCols(&(pCommith->readh), pBlock, NULL, &colId, 1, false) < 0) return -1;

-  tsdbLoadDataFromCache(pIter->pTable, &titer, keyLimit, INT32_MAX, NULL, pCommith->readh.pDCols[0]->cols[0].pData,
-                        pCommith->readh.pDCols[0]->numOfRows, pCfg->update, &mInfo);
+  tsdbLoadDataFromCache(TSDB_COMMIT_REPO(pCommith), pIter->pTable, &titer, keyLimit, INT32_MAX, NULL,
+                        pCommith->readh.pDCols[0]->cols[0].pData, pCommith->readh.pDCols[0]->numOfRows, pCfg->update,
+                        &mInfo);

   if (mInfo.nOperations == 0) {
     // no new data to insert (all updates denied)

@@ -1313,9 +1315,9 @@ static int tsdbMergeMemData(SCommitH *pCommith, SCommitIter *pIter, int bidx) {
     *(pIter->pIter) = titer;
   } else if (tsdbCanAddSubBlock(pCommith, pBlock, &mInfo)) {
     // Add a sub-block
-    tsdbLoadDataFromCache(pIter->pTable, pIter->pIter, keyLimit, INT32_MAX, pCommith->pDataCols,
-                          pCommith->readh.pDCols[0]->cols[0].pData, pCommith->readh.pDCols[0]->numOfRows, pCfg->update,
-                          &mInfo);
+    tsdbLoadDataFromCache(TSDB_COMMIT_REPO(pCommith), pIter->pTable, pIter->pIter, keyLimit, INT32_MAX,
+                          pCommith->pDataCols, pCommith->readh.pDCols[0]->cols[0].pData,
+                          pCommith->readh.pDCols[0]->numOfRows, pCfg->update, &mInfo);
     if (pBlock->last) {
       pDFile = TSDB_COMMIT_LAST_FILE(pCommith);
     } else {

@@ -1420,7 +1422,7 @@ static int tsdbMergeBlockData(SCommitH *pCommith, SCommitIter *pIter, SDataCols

   int biter = 0;
   while (true) {
-    tsdbLoadAndMergeFromCache(pCommith->readh.pDCols[0], &biter, pIter, pCommith->pDataCols, keyLimit, defaultRows,
+    tsdbLoadAndMergeFromCache(TSDB_COMMIT_REPO(pCommith), pCommith->readh.pDCols[0], &biter, pIter, pCommith->pDataCols, keyLimit, defaultRows,
                               pCfg->update);

     if (pCommith->pDataCols->numOfRows == 0) break;

@@ -1445,7 +1447,7 @@ static int tsdbMergeBlockData(SCommitH *pCommith, SCommitIter *pIter, SDataCols
   return 0;
 }

-static void tsdbLoadAndMergeFromCache(SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter, SDataCols *pTarget,
+static void tsdbLoadAndMergeFromCache(STsdb *pTsdb, SDataCols *pDataCols, int *iter, SCommitIter *pCommitIter, SDataCols *pTarget,
                                       TSKEY maxKey, int maxRows, int8_t update) {
   TSKEY key1 = INT64_MAX;
   TSKEY key2 = INT64_MAX;

@@ -1487,7 +1489,7 @@ static void tsdbLoadAndMergeFromCache(SDataCols *pDataCols, int *iter, SCommitIt
       ++(*iter);
     } else if (key1 > key2) {
       if (pSchema == NULL || schemaVersion(pSchema) != TD_ROW_SVER(row)) {
-        pSchema = tsdbGetTableSchemaImpl(pCommitIter->pTable, false, false, TD_ROW_SVER(row));
+        pSchema = tsdbGetTableSchemaImpl(pTsdb, pCommitIter->pTable, false, false, TD_ROW_SVER(row));
         ASSERT(pSchema != NULL);
       }

@@ -1527,7 +1529,7 @@ static void tsdbLoadAndMergeFromCache(SDataCols *pDataCols, int *iter, SCommitIt
       if (TD_SUPPORT_UPDATE(update)) {
         // copy mem data(Multi-Version)
         if (pSchema == NULL || schemaVersion(pSchema) != TD_ROW_SVER(row)) {
-          pSchema = tsdbGetTableSchemaImpl(pCommitIter->pTable, false, false, TD_ROW_SVER(row));
+          pSchema = tsdbGetTableSchemaImpl(pTsdb, pCommitIter->pTable, false, false, TD_ROW_SVER(row));
           ASSERT(pSchema != NULL);
         }

@@ -20,7 +20,8 @@ static void tsdbFreeTbData(STbData *pTbData);
 static char *tsdbGetTsTupleKey(const void *data);
 static int   tsdbTbDataComp(const void *arg1, const void *arg2);
 static char *tsdbTbDataGetUid(const void *arg);
-static int   tsdbAppendTableRowToCols(STable *pTable, SDataCols *pCols, STSchema **ppSchema, STSRow *row, bool merge);
+static int   tsdbAppendTableRowToCols(STsdb *pTsdb, STable *pTable, SDataCols *pCols, STSchema **ppSchema, STSRow *row,
+                                      bool merge);

 int tsdbMemTableCreate(STsdb *pTsdb, STsdbMemTable **ppMemTable) {
   STsdbMemTable *pMemTable;

@@ -88,8 +89,8 @@ void tsdbMemTableDestroy(STsdb *pTsdb, STsdbMemTable *pMemTable) {
 *
 * The function tries to procceed AS MUCH AS POSSIBLE.
 */
-int tsdbLoadDataFromCache(STable *pTable, SSkipListIterator *pIter, TSKEY maxKey, int maxRowsToRead, SDataCols *pCols,
-                          TKEY *filterKeys, int nFilterKeys, bool keepDup, SMergeInfo *pMergeInfo) {
+int tsdbLoadDataFromCache(STsdb *pTsdb, STable *pTable, SSkipListIterator *pIter, TSKEY maxKey, int maxRowsToRead,
+                          SDataCols *pCols, TKEY *filterKeys, int nFilterKeys, bool keepDup, SMergeInfo *pMergeInfo) {
   ASSERT(maxRowsToRead > 0 && nFilterKeys >= 0);
   if (pIter == NULL) return 0;
   STSchema *pSchema = NULL;

@@ -222,12 +223,12 @@ int tsdbLoadDataFromCache(STable *pTable, SSkipListIterator *pIter, TSKEY maxKey
         if (lastKey != TSKEY_INITIAL_VAL) {
           ++pCols->numOfRows;
         }
-        tsdbAppendTableRowToCols(pTable, pCols, &pSchema, row, false);
+        tsdbAppendTableRowToCols(pTsdb, pTable, pCols, &pSchema, row, false);
       }
       lastKey = rowKey;
     } else {
       if (keepDup) {
-        tsdbAppendTableRowToCols(pTable, pCols, &pSchema, row, true);
+        tsdbAppendTableRowToCols(pTsdb, pTable, pCols, &pSchema, row, true);
       } else {
         // discard
       }

@@ -249,7 +250,7 @@ int tsdbLoadDataFromCache(STable *pTable, SSkipListIterator *pIter, TSKEY maxKey
         if (pCols && pMergeInfo->nOperations >= pCols->maxPoints) break;
         pMergeInfo->rowsDeleteSucceed++;
         pMergeInfo->nOperations++;
-        tsdbAppendTableRowToCols(pTable, pCols, &pSchema, row, false);
+        tsdbAppendTableRowToCols(pTsdb, pTable, pCols, &pSchema, row, false);
       } else {
         if (keepDup) {
           if (pCols && pMergeInfo->nOperations >= pCols->maxPoints) break;

@@ -262,11 +263,11 @@ int tsdbLoadDataFromCache(STable *pTable, SSkipListIterator *pIter, TSKEY maxKey
           if (lastKey != TSKEY_INITIAL_VAL) {
             ++pCols->numOfRows;
           }
-          tsdbAppendTableRowToCols(pTable, pCols, &pSchema, row, false);
+          tsdbAppendTableRowToCols(pTsdb, pTable, pCols, &pSchema, row, false);
         }
         lastKey = rowKey;
       } else {
-        tsdbAppendTableRowToCols(pTable, pCols, &pSchema, row, true);
+        tsdbAppendTableRowToCols(pTsdb, pTable, pCols, &pSchema, row, true);
       }
     } else {
       pMergeInfo->keyFirst = TMIN(pMergeInfo->keyFirst, fKey);

@@ -320,13 +321,13 @@ int tsdbInsertTableData(STsdb *pTsdb, SSubmitMsgIter *pMsgIter, SSubmitBlk *pBlo
     terrno = TSDB_CODE_PAR_TABLE_NOT_EXIST;
     return -1;
   }
-  strcat(pRsp->tblFName, mr.me.name);
+
+  if (pRsp->tblFName) strcat(pRsp->tblFName, mr.me.name);

   if (mr.me.type == TSDB_NORMAL_TABLE) {
-    sverNew = mr.me.ntbEntry.schema.sver;
+    sverNew = mr.me.ntbEntry.schemaRow.version;
   } else {
     metaGetTableEntryByUid(&mr, mr.me.ctbEntry.suid);
-    sverNew = mr.me.stbEntry.schema.sver;
+    sverNew = mr.me.stbEntry.schemaRow.version;
   }
   metaReaderClear(&mr);

@@ -431,10 +432,12 @@ static char *tsdbTbDataGetUid(const void *arg) {
   STbData *pTbData = (STbData *)arg;
   return (char *)(&(pTbData->uid));
 }
-static int tsdbAppendTableRowToCols(STable *pTable, SDataCols *pCols, STSchema **ppSchema, STSRow *row, bool merge) {
+
+static int tsdbAppendTableRowToCols(STsdb *pTsdb, STable *pTable, SDataCols *pCols, STSchema **ppSchema, STSRow *row,
+                                    bool merge) {
   if (pCols) {
     if (*ppSchema == NULL || schemaVersion(*ppSchema) != TD_ROW_SVER(row)) {
-      *ppSchema = tsdbGetTableSchemaImpl(pTable, false, false, TD_ROW_SVER(row));
+      *ppSchema = tsdbGetTableSchemaImpl(pTsdb, pTable, false, false, TD_ROW_SVER(row));
       if (*ppSchema == NULL) {
         ASSERT(false);
         return -1;

@@ -146,10 +146,8 @@ typedef struct STableGroupSupporter {
   SSchema*   pTagSchema;
 } STableGroupSupporter;

-int32_t tsdbQueryTableList(void* pMeta, SArray* pRes, void* filterInfo);
-
-static STimeWindow updateLastrowForEachGroup(STableGroupInfo* groupList);
-static int32_t     checkForCachedLastRow(STsdbReadHandle* pTsdbReadHandle, STableGroupInfo* groupList);
+static STimeWindow updateLastrowForEachGroup(STableListInfo* pList);
+static int32_t     checkForCachedLastRow(STsdbReadHandle* pTsdbReadHandle, STableListInfo* pList);
 static int32_t     checkForCachedLast(STsdbReadHandle* pTsdbReadHandle);
 // static int32_t tsdbGetCachedLastRow(STable* pTable, STSRow** pRes, TSKEY* lastKey);

@@ -235,41 +233,34 @@ int64_t tsdbGetNumOfRowsInMemTable(tsdbReaderT* pHandle) {
   return rows;
 }

-static SArray* createCheckInfoFromTableGroup(STsdbReadHandle* pTsdbReadHandle, STableGroupInfo* pGroupList) {
-  size_t numOfGroup = taosArrayGetSize(pGroupList->pGroupList);
-  assert(numOfGroup >= 1);
+static SArray* createCheckInfoFromTableGroup(STsdbReadHandle* pTsdbReadHandle, STableListInfo* pTableList) {
+  size_t tableSize = taosArrayGetSize(pTableList->pTableList);
+  assert(tableSize >= 1);

   // allocate buffer in order to load data blocks from file
-  SArray* pTableCheckInfo = taosArrayInit(pGroupList->numOfTables, sizeof(STableCheckInfo));
+  SArray* pTableCheckInfo = taosArrayInit(tableSize, sizeof(STableCheckInfo));
   if (pTableCheckInfo == NULL) {
     return NULL;
   }

   // todo apply the lastkey of table check to avoid to load header file
-  for (int32_t i = 0; i < numOfGroup; ++i) {
-    SArray* group = *(SArray**)taosArrayGet(pGroupList->pGroupList, i);
+  for (int32_t j = 0; j < tableSize; ++j) {
+    STableKeyInfo* pKeyInfo = (STableKeyInfo*)taosArrayGet(pTableList->pTableList, j);

-    size_t gsize = taosArrayGetSize(group);
-    assert(gsize > 0);
-
-    for (int32_t j = 0; j < gsize; ++j) {
-      STableKeyInfo* pKeyInfo = (STableKeyInfo*)taosArrayGet(group, j);
-
-      STableCheckInfo info = {.lastKey = pKeyInfo->lastKey, .tableId = pKeyInfo->uid};
-      if (ASCENDING_TRAVERSE(pTsdbReadHandle->order)) {
-        if (info.lastKey == INT64_MIN || info.lastKey < pTsdbReadHandle->window.skey) {
-          info.lastKey = pTsdbReadHandle->window.skey;
-        }
-
-        assert(info.lastKey >= pTsdbReadHandle->window.skey && info.lastKey <= pTsdbReadHandle->window.ekey);
-      } else {
+    STableCheckInfo info = {.lastKey = pKeyInfo->lastKey, .tableId = pKeyInfo->uid};
+    if (ASCENDING_TRAVERSE(pTsdbReadHandle->order)) {
+      if (info.lastKey == INT64_MIN || info.lastKey < pTsdbReadHandle->window.skey) {
         info.lastKey = pTsdbReadHandle->window.skey;
       }

-      taosArrayPush(pTableCheckInfo, &info);
-      tsdbDebug("%p check table uid:%" PRId64 " from lastKey:%" PRId64 " %s", pTsdbReadHandle, info.tableId,
-                info.lastKey, pTsdbReadHandle->idStr);
+      assert(info.lastKey >= pTsdbReadHandle->window.skey && info.lastKey <= pTsdbReadHandle->window.ekey);
+    } else {
+      info.lastKey = pTsdbReadHandle->window.skey;
     }
+
+    taosArrayPush(pTableCheckInfo, &info);
+    tsdbDebug("%p check table uid:%" PRId64 " from lastKey:%" PRId64 " %s", pTsdbReadHandle, info.tableId,
+              info.lastKey, pTsdbReadHandle->idStr);
   }

   // TODO group table according to the tag value.

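The reader-side change above replaces the nested group-of-tables walk with a single flat table list, so the check-info array is built in one pass. The sketch below shows the flattening idea in isolation, using made-up miniature types, not the engine's:

```c
#include <stdio.h>

/* Hypothetical miniature of the grouped-versus-flat table layouts. */
typedef struct {
  long uid;
  long lastKey;
} TableKeyInfo;

/* Old shape: groups of tables; new shape: one flat list. Flattening
 * lets readers iterate once instead of walking group-by-group. */
static int flatten(const TableKeyInfo *const *groups, const int *groupSizes,
                   int nGroups, TableKeyInfo *out, int cap) {
  int n = 0;
  for (int i = 0; i < nGroups; ++i) {
    for (int j = 0; j < groupSizes[i]; ++j) {
      if (n < cap) out[n++] = groups[i][j];
    }
  }
  return n;
}

int main(void) {
  TableKeyInfo g0[] = {{1, 100}, {2, 200}};
  TableKeyInfo g1[] = {{3, 300}};
  const TableKeyInfo *groups[] = {g0, g1};
  int sizes[] = {2, 1};

  TableKeyInfo flat[8];
  int n = flatten(groups, sizes, 2, flat, 8);
  for (int i = 0; i < n; ++i) {
    printf("uid=%ld lastKey=%ld\n", flat[i].uid, flat[i].lastKey);
  }
  return 0;
}
```
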
@@ -480,7 +471,7 @@ _end:
   return NULL;
 }

-tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableGroupInfo* groupList, uint64_t qId,
+tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableListInfo* tableList, uint64_t qId,
                              uint64_t taskId) {
   STsdbReadHandle* pTsdbReadHandle = tsdbQueryTablesImpl(pVnode, pCond, qId, taskId);
   if (pTsdbReadHandle == NULL) {

@@ -492,7 +483,7 @@ tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableG
   }

   // todo apply the lastkey of table check to avoid to load header file
-  pTsdbReadHandle->pTableCheckInfo = createCheckInfoFromTableGroup(pTsdbReadHandle, groupList);
+  pTsdbReadHandle->pTableCheckInfo = createCheckInfoFromTableGroup(pTsdbReadHandle, tableList);
   if (pTsdbReadHandle->pTableCheckInfo == NULL) {
     // tsdbCleanupReadHandle(pTsdbReadHandle);
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;

@@ -522,8 +513,8 @@ tsdbReaderT* tsdbQueryTables(SVnode* pVnode, SQueryTableDataCond* pCond, STableG
     }
   }

-  tsdbDebug("%p total numOfTable:%" PRIzu " in this query, group %" PRIzu " %s", pTsdbReadHandle,
-            taosArrayGetSize(pTsdbReadHandle->pTableCheckInfo), taosArrayGetSize(groupList->pGroupList),
+  tsdbDebug("%p total numOfTable:%" PRIzu " in this query, table %" PRIzu " %s", pTsdbReadHandle,
+            taosArrayGetSize(pTsdbReadHandle->pTableCheckInfo), taosArrayGetSize(tableList->pTableList),
             pTsdbReadHandle->idStr);

   return (tsdbReaderT)pTsdbReadHandle;

@@ -567,7 +558,7 @@ void tsdbResetReadHandle(tsdbReaderT queryHandle, SQueryTableDataCond* pCond) {
   resetCheckInfo(pTsdbReadHandle);
 }

-void tsdbResetQueryHandleForNewTable(tsdbReaderT queryHandle, SQueryTableDataCond* pCond, STableGroupInfo* groupList) {
+void tsdbResetQueryHandleForNewTable(tsdbReaderT queryHandle, SQueryTableDataCond* pCond, STableListInfo* tableList) {
   STsdbReadHandle* pTsdbReadHandle = queryHandle;

   pTsdbReadHandle->order = pCond->order;

@@ -609,21 +600,21 @@ void tsdbResetQueryHandleForNewTable(tsdbReaderT queryHandle, SQueryTableDataCon
   // pTsdbReadHandle->next = doFreeColumnInfoData(pTsdbReadHandle->next);
 }

-tsdbReaderT tsdbQueryLastRow(SVnode* pVnode, SQueryTableDataCond* pCond, STableGroupInfo* groupList, uint64_t qId,
+tsdbReaderT tsdbQueryLastRow(SVnode* pVnode, SQueryTableDataCond* pCond, STableListInfo* pList, uint64_t qId,
                              uint64_t taskId) {
-  pCond->twindow = updateLastrowForEachGroup(groupList);
+  pCond->twindow = updateLastrowForEachGroup(pList);

   // no qualified table
-  if (groupList->numOfTables == 0) {
+  if (taosArrayGetSize(pList->pTableList) == 0) {
     return NULL;
   }

-  STsdbReadHandle* pTsdbReadHandle = (STsdbReadHandle*)tsdbQueryTables(pVnode, pCond, groupList, qId, taskId);
+  STsdbReadHandle* pTsdbReadHandle = (STsdbReadHandle*)tsdbQueryTables(pVnode, pCond, pList, qId, taskId);
   if (pTsdbReadHandle == NULL) {
     return NULL;
   }

-  int32_t code = checkForCachedLastRow(pTsdbReadHandle, groupList);
+  int32_t code = checkForCachedLastRow(pTsdbReadHandle, pList);
   if (code != TSDB_CODE_SUCCESS) {  // set the numOfTables to be 0
     terrno = code;
     return NULL;

@@ -669,60 +660,60 @@ SArray* tsdbGetQueriedTableList(tsdbReaderT* pHandle) {
 }

 // leave only one table for each group
-static STableGroupInfo* trimTableGroup(STimeWindow* window, STableGroupInfo* pGroupList) {
-  assert(pGroupList);
-  size_t numOfGroup = taosArrayGetSize(pGroupList->pGroupList);
+//static STableGroupInfo* trimTableGroup(STimeWindow* window, STableGroupInfo* pGroupList) {
+//  assert(pGroupList);
+//  size_t numOfGroup = taosArrayGetSize(pGroupList->pGroupList);
+//
+//  STableGroupInfo* pNew = taosMemoryCalloc(1, sizeof(STableGroupInfo));
+//  pNew->pGroupList = taosArrayInit(numOfGroup, POINTER_BYTES);
+//
+//  for (int32_t i = 0; i < numOfGroup; ++i) {
+//    SArray* oneGroup = taosArrayGetP(pGroupList->pGroupList, i);
+//    size_t numOfTables = taosArrayGetSize(oneGroup);
+//
+//    SArray* px = taosArrayInit(4, sizeof(STableKeyInfo));
+//    for (int32_t j = 0; j < numOfTables; ++j) {
+//      STableKeyInfo* pInfo = (STableKeyInfo*)taosArrayGet(oneGroup, j);
+//      // if (window->skey <= pInfo->lastKey && ((STable*)pInfo->pTable)->lastKey != TSKEY_INITIAL_VAL) {
+//      //   taosArrayPush(px, pInfo);
+//      //   pNew->numOfTables += 1;
+//      //   break;
+//      // }
+//    }
+//
+//    // there are no data in this group
+//    if (taosArrayGetSize(px) == 0) {
+//      taosArrayDestroy(px);
+//    } else {
+//      taosArrayPush(pNew->pGroupList, &px);
+//    }
+//  }
+//
+//  return pNew;
+//}

-  STableGroupInfo* pNew = taosMemoryCalloc(1, sizeof(STableGroupInfo));
-  pNew->pGroupList = taosArrayInit(numOfGroup, POINTER_BYTES);
-
-  for (int32_t i = 0; i < numOfGroup; ++i) {
-    SArray* oneGroup = taosArrayGetP(pGroupList->pGroupList, i);
-    size_t numOfTables = taosArrayGetSize(oneGroup);
-
-    SArray* px = taosArrayInit(4, sizeof(STableKeyInfo));
-    for (int32_t j = 0; j < numOfTables; ++j) {
-      STableKeyInfo* pInfo = (STableKeyInfo*)taosArrayGet(oneGroup, j);
-      // if (window->skey <= pInfo->lastKey && ((STable*)pInfo->pTable)->lastKey != TSKEY_INITIAL_VAL) {
-      //   taosArrayPush(px, pInfo);
-      //   pNew->numOfTables += 1;
-      //   break;
-      // }
-    }
-
-    // there are no data in this group
-    if (taosArrayGetSize(px) == 0) {
-      taosArrayDestroy(px);
-    } else {
-      taosArrayPush(pNew->pGroupList, &px);
-    }
-  }
-
-  return pNew;
-}
-
-tsdbReaderT tsdbQueryRowsInExternalWindow(SVnode* pVnode, SQueryTableDataCond* pCond, STableGroupInfo* groupList,
-                                          uint64_t qId, uint64_t taskId) {
-  STableGroupInfo* pNew = trimTableGroup(&pCond->twindow, groupList);
-
-  if (pNew->numOfTables == 0) {
-    tsdbDebug("update query time range to invalidate time window");
-
-    assert(taosArrayGetSize(pNew->pGroupList) == 0);
-    bool asc = ASCENDING_TRAVERSE(pCond->order);
-    if (asc) {
-      pCond->twindow.ekey = pCond->twindow.skey - 1;
-    } else {
-      pCond->twindow.skey = pCond->twindow.ekey - 1;
-    }
-  }
-
-  STsdbReadHandle* pTsdbReadHandle = (STsdbReadHandle*)tsdbQueryTables(pVnode, pCond, pNew, qId, taskId);
-  pTsdbReadHandle->loadExternalRow = true;
-  pTsdbReadHandle->currentLoadExternalRows = true;
-
-  return pTsdbReadHandle;
-}
+//tsdbReaderT tsdbQueryRowsInExternalWindow(SVnode* pVnode, SQueryTableDataCond* pCond, STableGroupInfo* groupList,
+//                                          uint64_t qId, uint64_t taskId) {
+//  STableGroupInfo* pNew = trimTableGroup(&pCond->twindow, groupList);
+//
+//  if (pNew->numOfTables == 0) {
+//    tsdbDebug("update query time range to invalidate time window");
+//
+//    assert(taosArrayGetSize(pNew->pGroupList) == 0);
+//    bool asc = ASCENDING_TRAVERSE(pCond->order);
+//    if (asc) {
+//      pCond->twindow.ekey = pCond->twindow.skey - 1;
+//    } else {
+//      pCond->twindow.skey = pCond->twindow.ekey - 1;
+//    }
+//  }
+//
+//  STsdbReadHandle* pTsdbReadHandle = (STsdbReadHandle*)tsdbQueryTables(pVnode, pCond, pNew, qId, taskId);
+//  pTsdbReadHandle->loadExternalRow = true;
+//  pTsdbReadHandle->currentLoadExternalRows = true;
+//
+//  return pTsdbReadHandle;
+//}

 static bool initTableMemIterator(STsdbReadHandle* pHandle, STableCheckInfo* pCheckInfo) {
   if (pCheckInfo->initBuf) {

@@ -2803,7 +2794,7 @@ static int tsdbReadRowsFromCache(STableCheckInfo* pCheckInfo, TSKEY maxKey, int
   return numOfRows;
 }

-static int32_t getAllTableList(SMeta* pMeta, uint64_t uid, SArray* list) {
+int32_t tsdbGetAllTableList(SMeta* pMeta, uint64_t uid, SArray* list) {
   SMCtbCursor* pCur = metaOpenCtbCursor(pMeta, uid);

   while (1) {

@@ -3340,8 +3331,8 @@ bool isTsdbCacheLastRow(tsdbReaderT* pReader) {
   return ((STsdbReadHandle*)pReader)->cachelastrow > TSDB_CACHED_TYPE_NONE;
 }

-int32_t checkForCachedLastRow(STsdbReadHandle* pTsdbReadHandle, STableGroupInfo* groupList) {
-  assert(pTsdbReadHandle != NULL && groupList != NULL);
+int32_t checkForCachedLastRow(STsdbReadHandle* pTsdbReadHandle, STableListInfo* tableList) {
+  assert(pTsdbReadHandle != NULL && tableList != NULL);

   // TSKEY key = TSKEY_INITIAL_VAL;
   //

@@ -3388,68 +3379,68 @@ int32_t checkForCachedLast(STsdbReadHandle* pTsdbReadHandle) {
   return code;
 }

-STimeWindow updateLastrowForEachGroup(STableGroupInfo* groupList) {
+STimeWindow updateLastrowForEachGroup(STableListInfo* pList) {
   STimeWindow window = {INT64_MAX, INT64_MIN};

-  int32_t totalNumOfTable = 0;
-  SArray* emptyGroup = taosArrayInit(16, sizeof(int32_t));
-
-  // NOTE: starts from the buffer in case of descending timestamp order check data blocks
-  size_t numOfGroups = taosArrayGetSize(groupList->pGroupList);
-  for (int32_t j = 0; j < numOfGroups; ++j) {
-    SArray* pGroup = taosArrayGetP(groupList->pGroupList, j);
-    TSKEY key = TSKEY_INITIAL_VAL;
-
-    STableKeyInfo keyInfo = {0};
-
-    size_t numOfTables = taosArrayGetSize(pGroup);
-    for (int32_t i = 0; i < numOfTables; ++i) {
-      STableKeyInfo* pInfo = (STableKeyInfo*)taosArrayGet(pGroup, i);
-
-      // if the lastKey equals to INT64_MIN, there is no data in this table
-      TSKEY lastKey = 0;  //((STable*)(pInfo->pTable))->lastKey;
-      if (key < lastKey) {
-        key = lastKey;
-
-        // keyInfo.pTable = pInfo->pTable;
-        keyInfo.lastKey = key;
-        pInfo->lastKey = key;
-
-        if (key < window.skey) {
-          window.skey = key;
-        }
-
-        if (key > window.ekey) {
-          window.ekey = key;
-        }
-      }
-    }
-
-    // more than one table in each group, only one table left for each group
-    // if (keyInfo.pTable != NULL) {
-    //   totalNumOfTable++;
-    //   if (taosArrayGetSize(pGroup) == 1) {
-    //     // do nothing
-    //   } else {
-    //     taosArrayClear(pGroup);
-    //     taosArrayPush(pGroup, &keyInfo);
-    //   }
-    // } else {  // mark all the empty groups, and remove it later
-    //   taosArrayDestroy(pGroup);
-    //   taosArrayPush(emptyGroup, &j);
-    // }
-  }
-
-  // window does not being updated, so set the original
-  if (window.skey == INT64_MAX && window.ekey == INT64_MIN) {
-    window = TSWINDOW_INITIALIZER;
-    assert(totalNumOfTable == 0 && taosArrayGetSize(groupList->pGroupList) == numOfGroups);
-  }
-
-  taosArrayRemoveBatch(groupList->pGroupList, TARRAY_GET_START(emptyGroup), (int32_t)taosArrayGetSize(emptyGroup));
-  taosArrayDestroy(emptyGroup);
-
-  groupList->numOfTables = totalNumOfTable;
+  // int32_t totalNumOfTable = 0;
+  // SArray* emptyGroup = taosArrayInit(16, sizeof(int32_t));
+  //
+  // // NOTE: starts from the buffer in case of descending timestamp order check data blocks
+  // size_t numOfGroups = taosArrayGetSize(groupList->pGroupList);
+  // for (int32_t j = 0; j < numOfGroups; ++j) {
+  //   SArray* pGroup = taosArrayGetP(groupList->pGroupList, j);
+  //   TSKEY key = TSKEY_INITIAL_VAL;
+  //
+  //   STableKeyInfo keyInfo = {0};
+  //
+  //   size_t numOfTables = taosArrayGetSize(pGroup);
+  //   for (int32_t i = 0; i < numOfTables; ++i) {
+  //     STableKeyInfo* pInfo = (STableKeyInfo*)taosArrayGet(pGroup, i);
+  //
+  //     // if the lastKey equals to INT64_MIN, there is no data in this table
+  //     TSKEY lastKey = 0;  //((STable*)(pInfo->pTable))->lastKey;
+  //     if (key < lastKey) {
+  //       key = lastKey;
+  //
+  //       // keyInfo.pTable = pInfo->pTable;
+  //       keyInfo.lastKey = key;
+  //       pInfo->lastKey = key;
+  //
+  //       if (key < window.skey) {
+  //         window.skey = key;
+  //       }
+  //
+  //       if (key > window.ekey) {
+  //         window.ekey = key;
+  //       }
+  //     }
+  //   }
+  //
+  //   // more than one table in each group, only one table left for each group
+  //   // if (keyInfo.pTable != NULL) {
+  //   //   totalNumOfTable++;
+  //   //   if (taosArrayGetSize(pGroup) == 1) {
+  //   //     // do nothing
+  //   //   } else {
+  //   //     taosArrayClear(pGroup);
+  //   //     taosArrayPush(pGroup, &keyInfo);
+  //   //   }
+  //   // } else {  // mark all the empty groups, and remove it later
+  //   //   taosArrayDestroy(pGroup);
+  //   //   taosArrayPush(emptyGroup, &j);
+  //   // }
+  // }
+  //
+  // // window does not being updated, so set the original
+  // if (window.skey == INT64_MAX && window.ekey == INT64_MIN) {
+  //   window = TSWINDOW_INITIALIZER;
+  //   assert(totalNumOfTable == 0 && taosArrayGetSize(groupList->pGroupList) == numOfGroups);
+  // }
+  //
+  // taosArrayRemoveBatch(groupList->pGroupList, TARRAY_GET_START(emptyGroup), (int32_t)taosArrayGetSize(emptyGroup));
+  // taosArrayDestroy(emptyGroup);
+  //
+  // groupList->numOfTables = totalNumOfTable;
   return window;
 }

@@ -3875,156 +3866,6 @@ SArray* createTableGroup(SArray* pTableList, SSchemaWrapper* pTagSchema, SColInd
   //  return TSDB_CODE_SUCCESS;
   //}

-int32_t tsdbQuerySTableByTagCond(void* pMeta, uint64_t uid, TSKEY skey, const char* pTagCond, size_t len,
-                                 int16_t tagNameRelType, const char* tbnameCond, STableGroupInfo* pGroupInfo,
-                                 SColIndex* pColIndex, int32_t numOfCols, uint64_t reqId, uint64_t taskId) {
-  SMetaReader mr = {0};
-
-  metaReaderInit(&mr, (SMeta*)pMeta, 0);
-
-  if (metaGetTableEntryByUid(&mr, uid) < 0) {
-    tsdbError("%p failed to get stable, uid:%" PRIu64 ", TID:0x%" PRIx64 " QID:0x%" PRIx64, pMeta, uid, taskId, reqId);
-    metaReaderClear(&mr);
-    terrno = TSDB_CODE_PAR_TABLE_NOT_EXIST;
-    goto _error;
-  } else {
-    tsdbDebug("%p succeed to get stable, uid:%" PRIu64 ", TID:0x%" PRIx64 " QID:0x%" PRIx64, pMeta, uid, taskId, reqId);
-  }
-
-  if (mr.me.type != TSDB_SUPER_TABLE) {
-    tsdbError("%p query normal tag not allowed, uid:%" PRIu64 ", TID:0x%" PRIx64 " QID:0x%" PRIx64, pMeta, uid, taskId,
-              reqId);
-    terrno = TSDB_CODE_OPS_NOT_SUPPORT;  // basically, this error is caused by invalid sql issued by client
-    metaReaderClear(&mr);
-    goto _error;
-  }
-
-  metaReaderClear(&mr);
-
-  // NOTE: not add ref count for super table
-  SArray* res = taosArrayInit(8, sizeof(STableKeyInfo));
-  SSchemaWrapper* pTagSchema = metaGetTableSchema(pMeta, uid, 1, true);
-
-  // no tags and tbname condition, all child tables of this stable are involved
-  if (tbnameCond == NULL && (pTagCond == NULL || len == 0)) {
-    int32_t ret = getAllTableList(pMeta, uid, res);
-    if (ret != TSDB_CODE_SUCCESS) {
-      goto _error;
-    }
-
-    pGroupInfo->numOfTables = (uint32_t)taosArrayGetSize(res);
-    pGroupInfo->pGroupList = createTableGroup(res, pTagSchema, pColIndex, numOfCols, skey);
-
-    tsdbDebug("%p no table name/tag condition, all tables qualified, numOfTables:%u, group:%zu, TID:0x%" PRIx64
-              " QID:0x%" PRIx64,
-              pMeta, pGroupInfo->numOfTables, taosArrayGetSize(pGroupInfo->pGroupList), taskId, reqId);
-
-    taosArrayDestroy(res);
-    return ret;
-  }
-
-  int32_t ret = TSDB_CODE_SUCCESS;
-
-  SFilterInfo* filterInfo = NULL;
-  ret = filterInitFromNode((SNode*)pTagCond, &filterInfo, 0);
-  if (ret != TSDB_CODE_SUCCESS) {
-    terrno = ret;
-    return ret;
-  }
-  ret = tsdbQueryTableList(pMeta, res, filterInfo);
-  pGroupInfo->numOfTables = (uint32_t)taosArrayGetSize(res);
-  pGroupInfo->pGroupList = createTableGroup(res, pTagSchema, pColIndex, numOfCols, skey);
-
-  // tsdbDebug("%p stable tid:%d, uid:%" PRIu64 " query, numOfTables:%u, belong to %" PRIzu " groups", tsdb,
-  //           pTable->tableId, pTable->uid, pGroupInfo->numOfTables, taosArrayGetSize(pGroupInfo->pGroupList));
-
-  taosArrayDestroy(res);
-  return ret;
-
-_error:
-  return terrno;
-}
-
-int32_t tsdbQueryTableList(void* pMeta, SArray* pRes, void* filterInfo) {
-  // impl later
-
-  return TSDB_CODE_SUCCESS;
-}
-int32_t tsdbGetOneTableGroup(void* pMeta, uint64_t uid, TSKEY startKey, STableGroupInfo* pGroupInfo) {
-  SMetaReader mr = {0};
-
-  metaReaderInit(&mr, (SMeta*)pMeta, 0);
-
-  if (metaGetTableEntryByUid(&mr, uid) < 0) {
-    terrno = TSDB_CODE_PAR_TABLE_NOT_EXIST;
-    goto _error;
-  }
-
-  metaReaderClear(&mr);
-
-  pGroupInfo->numOfTables = 1;
-  pGroupInfo->pGroupList = taosArrayInit(1, POINTER_BYTES);
-
-  SArray* group = taosArrayInit(1, sizeof(STableKeyInfo));
-
-  STableKeyInfo info = {.lastKey = startKey, .uid = uid};
-  taosArrayPush(group, &info);
-
-  taosArrayPush(pGroupInfo->pGroupList, &group);
-  return TSDB_CODE_SUCCESS;
-
-_error:
-  metaReaderClear(&mr);
-  return terrno;
-}
-
-#if 0
-int32_t tsdbGetTableGroupFromIdListT(STsdb* tsdb, SArray* pTableIdList, STableGroupInfo* pGroupInfo) {
-  if (tsdbRLockRepoMeta(tsdb) < 0) {
-    return terrno;
-  }
-
-  assert(pTableIdList != NULL);
-  size_t size = taosArrayGetSize(pTableIdList);
-  pGroupInfo->pGroupList = taosArrayInit(1, POINTER_BYTES);
-  SArray* group = taosArrayInit(1, sizeof(STableKeyInfo));
-
-  for(int32_t i = 0; i < size; ++i) {
-    STableIdInfo *id = taosArrayGet(pTableIdList, i);
-
-    STable* pTable = tsdbGetTableByUid(tsdbGetMeta(tsdb), id->uid);
-    if (pTable == NULL) {
-      tsdbWarn("table uid:%"PRIu64", tid:%d has been drop already", id->uid, id->tid);
-      continue;
-    }
-
-    if (pTable->type == TSDB_SUPER_TABLE) {
-      tsdbError("direct query on super tale is not allowed, table uid:%"PRIu64", tid:%d", id->uid, id->tid);
-      terrno = TSDB_CODE_QRY_INVALID_MSG;
-      tsdbUnlockRepoMeta(tsdb);
-      taosArrayDestroy(group);
-      return terrno;
-    }
-
-    STableKeyInfo info = {.pTable = pTable, .lastKey = id->key};
-    taosArrayPush(group, &info);
-  }
-
-  if (tsdbUnlockRepoMeta(tsdb) < 0) {
-    taosArrayDestroy(group);
-    return terrno;
-  }
-
-  pGroupInfo->numOfTables = (uint32_t) taosArrayGetSize(group);
-  if (pGroupInfo->numOfTables > 0) {
-    taosArrayPush(pGroupInfo->pGroupList, &group);
-  } else {
-    taosArrayDestroy(group);
-  }
-
-  return TSDB_CODE_SUCCESS;
-}
-#endif
 static void* doFreeColumnInfoData(SArray* pColumnInfoData) {
   if (pColumnInfoData == NULL) {
     return NULL;

@@ -4095,30 +3936,6 @@ void tsdbCleanupReadHandle(tsdbReaderT queryHandle) {
 }

 #if 0
-void tsdbDestroyTableGroup(STableGroupInfo *pGroupList) {
-  assert(pGroupList != NULL);
-
-  size_t numOfGroup = taosArrayGetSize(pGroupList->pGroupList);
-
-  for(int32_t i = 0; i < numOfGroup; ++i) {
-    SArray* p = taosArrayGetP(pGroupList->pGroupList, i);
-
-    size_t numOfTables = taosArrayGetSize(p);
-    for(int32_t j = 0; j < numOfTables; ++j) {
-      STable* pTable = taosArrayGetP(p, j);
-      if (pTable != NULL) { // in case of handling retrieve data from tsdb
-        tsdbUnRefTable(pTable);
-      }
-      //assert(pTable != NULL);
-    }
-
-    taosArrayDestroy(p);
-  }
-
-  taosHashCleanup(pGroupList->map);
-  taosArrayDestroy(pGroupList->pGroupList);
-  pGroupList->numOfTables = 0;
-}
-
 static void applyFilterToSkipListNode(SSkipList *pSkipList, tExprNode *pExpr, SArray *pResult, SExprTraverseSupp *param) {
   SSkipListIterator* iter = tSkipListCreateIter(pSkipList);

@@ -157,7 +157,7 @@ int tsdbLoadBlockIdx(SReadH *pReadh) {
 }

 int tsdbSetReadTable(SReadH *pReadh, STable *pTable) {
-  STSchema *pSchema = tsdbGetTableSchemaImpl(pTable, false, false, -1);
+  STSchema *pSchema = tsdbGetTableSchemaImpl(TSDB_READ_REPO(pReadh), pTable, false, false, -1);

   pReadh->pTable = pTable;

@@ -64,7 +64,7 @@ int vnodeGetTableMeta(SVnode *pVnode, SRpcMsg *pMsg) {

   if (mer1.me.type == TSDB_SUPER_TABLE) {
     strcpy(metaRsp.stbName, mer1.me.name);
-    schema = mer1.me.stbEntry.schema;
+    schema = mer1.me.stbEntry.schemaRow;
     schemaTag = mer1.me.stbEntry.schemaTag;
     metaRsp.suid = mer1.me.uid;
   } else if (mer1.me.type == TSDB_CHILD_TABLE) {

@@ -73,10 +73,10 @@ int vnodeGetTableMeta(SVnode *pVnode, SRpcMsg *pMsg) {

     strcpy(metaRsp.stbName, mer2.me.name);
     metaRsp.suid = mer2.me.uid;
-    schema = mer2.me.stbEntry.schema;
+    schema = mer2.me.stbEntry.schemaRow;
     schemaTag = mer2.me.stbEntry.schemaTag;
   } else if (mer1.me.type == TSDB_NORMAL_TABLE) {
-    schema = mer1.me.ntbEntry.schema;
+    schema = mer1.me.ntbEntry.schemaRow;
   } else {
     ASSERT(0);
   }

@@ -84,7 +84,7 @@ int vnodeGetTableMeta(SVnode *pVnode, SRpcMsg *pMsg) {
   metaRsp.numOfTags = schemaTag.nCols;
   metaRsp.numOfColumns = schema.nCols;
   metaRsp.precision = pVnode->config.tsdbCfg.precision;
-  metaRsp.sversion = schema.sver;
+  metaRsp.sversion = schema.version;
   metaRsp.pSchemas = (SSchema *)taosMemoryMalloc(sizeof(SSchema) * (metaRsp.numOfColumns + metaRsp.numOfTags));

   memcpy(metaRsp.pSchemas, schema.pSchema, sizeof(SSchema) * schema.nCols);

@@ -147,16 +147,10 @@ void vnodeGetInfo(SVnode *pVnode, const char **dbname, int32_t *vgId) {
 }

 // wrapper of tsdb read interface
-tsdbReaderT tsdbQueryCacheLast(SVnode *pVnode, SQueryTableDataCond *pCond, STableGroupInfo *groupList, uint64_t qId,
+tsdbReaderT tsdbQueryCacheLast(SVnode *pVnode, SQueryTableDataCond *pCond, STableListInfo* tableList, uint64_t qId,
                                void *pMemRef) {
 #if 0
   return tsdbQueryCacheLastT(pVnode->pTsdb, pCond, groupList, qId, pMemRef);
 #endif
   return 0;
 }
-int32_t tsdbGetTableGroupFromIdList(SVnode *pVnode, SArray *pTableIdList, STableGroupInfo *pGroupInfo) {
-#if 0
-  return tsdbGetTableGroupFromIdListT(pVnode->pTsdb, pTableIdList, pGroupInfo);
-#endif
-  return 0;
-}

@@ -171,7 +171,7 @@ typedef struct SCtgJob {
   uint64_t        queryId;
   SCatalog*       pCtg;
   void*           pTrans;
-  const SEpSet*   pMgmtEps;
+  SEpSet          pMgmtEps;
   void*           userParam;
   catalogCallback userFp;
   int32_t         tbMetaNum;

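Changing SCtgJob's pMgmtEps from a borrowed pointer to an embedded value means the async job keeps its own copy of the endpoint set rather than referencing caller memory that may be gone by the time the job runs. A minimal illustration of that copy-in pattern, assuming hypothetical miniature types:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical miniature of an endpoint set and an async job. */
typedef struct {
  char fqdn[32];
  int  port;
} EpSet;

typedef struct {
  EpSet mgmtEps; /* stored by value: the job outlives the caller's copy */
} Job;

/* Copy the caller's EpSet into the job; keeping only a pointer here
 * would dangle once the caller's stack frame is gone. */
static void jobInit(Job *job, const EpSet *eps) { job->mgmtEps = *eps; }

int main(void) {
  Job job;
  {
    EpSet local = {.port = 6030};
    strcpy(local.fqdn, "mnode-1");
    jobInit(&job, &local); /* local dies at the end of this block */
  }
  printf("job still sees %s:%d\n", job.mgmtEps.fqdn, job.mgmtEps.port);
  return 0;
}
```
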
@@ -447,7 +447,7 @@ void ctgReleaseVgInfo(SCtgDBCache *dbCache);
 int32_t ctgAcquireVgInfoFromCache(SCatalog* pCtg, const char *dbFName, SCtgDBCache **pCache);
 int32_t ctgTbMetaExistInCache(SCatalog* pCtg, char *dbFName, char* tbName, int32_t *exist);
 int32_t ctgReadTbMetaFromCache(SCatalog* pCtg, SCtgTbMetaCtx* ctx, STableMeta** pTableMeta);
-int32_t ctgReadTbSverFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *sver, int32_t *tbType, uint64_t *suid, char *stbName);
+int32_t ctgReadTbVerFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *sver, int32_t *tver, int32_t *tbType, uint64_t *suid, char *stbName);
 int32_t ctgChkAuthFromCache(SCatalog* pCtg, const char* user, const char* dbFName, AUTH_TYPE type, bool *inCache, bool *pass);
 int32_t ctgPutRmDBToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId);
 int32_t ctgPutRmStbToQueue(SCatalog* pCtg, const char *dbFName, int64_t dbId, const char *stbName, uint64_t suid, bool syncReq);

@@ -540,12 +540,6 @@ int32_t catalogGetHandle(uint64_t clusterId, SCatalog** catalogHandle) {
     CTG_ERR_JRET(TSDB_CODE_CTG_MEM_ERROR);
   }

-  SHashObj *metaCache = taosHashInit(gCtgMgmt.cfg.maxTblCacheNum, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
-  if (NULL == metaCache) {
-    qError("taosHashInit failed, num:%d", gCtgMgmt.cfg.maxTblCacheNum);
-    CTG_ERR_RET(TSDB_CODE_CTG_MEM_ERROR);
-  }
-
   code = taosHashPut(gCtgMgmt.pCluster, &clusterId, sizeof(clusterId), &clusterCtg, POINTER_BYTES);
   if (code) {
     if (HASH_NODE_EXIST(code)) {

@@ -818,6 +812,7 @@ int32_t catalogChkTbMetaVersion(SCatalog* pCtg, void *pTrans, const SEpSet* pMgm

   SName   name;
   int32_t sver = 0;
+  int32_t tver = 0;
   int32_t tbNum = taosArrayGetSize(pTables);
   for (int32_t i = 0; i < tbNum; ++i) {
     STbSVersion* pTb = (STbSVersion*)taosArrayGet(pTables, i);

@@ -834,8 +829,8 @@ int32_t catalogChkTbMetaVersion(SCatalog* pCtg, void *pTrans, const SEpSet* pMgm
     int32_t  tbType = 0;
     uint64_t suid = 0;
     char     stbName[TSDB_TABLE_FNAME_LEN];
-    ctgReadTbSverFromCache(pCtg, &name, &sver, &tbType, &suid, stbName);
-    if (sver >= 0 && sver < pTb->sver) {
+    ctgReadTbVerFromCache(pCtg, &name, &sver, &tver, &tbType, &suid, stbName);
+    if ((sver >= 0 && sver < pTb->sver) || (tver >= 0 && tver < pTb->tver)) {
       switch (tbType) {
         case TSDB_CHILD_TABLE: {
           SName stb = name;

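The staleness test now considers both the column-schema version (sver) and the tag-schema version (tver): a cached entry is out of date if either lags what the vnode reported. The predicate in isolation, as a hedged sketch rather than the catalog's actual code:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical miniature of the cached-vs-reported version check:
 * a cached table entry is stale if either its column-schema version
 * (sver) or its tag-schema version (tver) lags what a vnode reported.
 * Negative cached versions mean "unknown" and never trigger a refresh. */
static bool cacheIsStale(int cachedSver, int cachedTver,
                         int reportedSver, int reportedTver) {
  return (cachedSver >= 0 && cachedSver < reportedSver) ||
         (cachedTver >= 0 && cachedTver < reportedTver);
}

int main(void) {
  printf("%d\n", cacheIsStale(3, 1, 3, 2)); /* 1: tag schema moved on */
  printf("%d\n", cacheIsStale(3, 2, 3, 2)); /* 0: up to date */
  return 0;
}
```
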
@@ -21,173 +21,189 @@
 #include "tref.h"
 
 int32_t ctgInitGetTbMetaTask(SCtgJob *pJob, int32_t taskIdx, SName *name) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_TB_META;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
+  task.type = CTG_TASK_GET_TB_META;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
 
-  pTask->taskCtx = taosMemoryCalloc(1, sizeof(SCtgTbMetaCtx));
-  if (NULL == pTask->taskCtx) {
+  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgTbMetaCtx));
+  if (NULL == task.taskCtx) {
     CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
-  SCtgTbMetaCtx* ctx = pTask->taskCtx;
+  SCtgTbMetaCtx* ctx = task.taskCtx;
   ctx->pName = taosMemoryMalloc(sizeof(*name));
   if (NULL == ctx->pName) {
-    taosMemoryFree(pTask->taskCtx);
+    taosMemoryFree(task.taskCtx);
    CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
   memcpy(ctx->pName, name, sizeof(*name));
   ctx->flag = CTG_FLAG_UNKNOWN_STB;
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized, tableName:%s", pJob->queryId, taskIdx, pTask->type, name->tname);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized, tableName:%s", pJob->queryId, taskIdx, task.type, name->tname);
 
   return TSDB_CODE_SUCCESS;
 }
 
 int32_t ctgInitGetDbVgTask(SCtgJob *pJob, int32_t taskIdx, char *dbFName) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_DB_VGROUP;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
+  task.type = CTG_TASK_GET_DB_VGROUP;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
 
-  pTask->taskCtx = taosMemoryCalloc(1, sizeof(SCtgDbVgCtx));
-  if (NULL == pTask->taskCtx) {
+  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgDbVgCtx));
+  if (NULL == task.taskCtx) {
     CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
-  SCtgDbVgCtx* ctx = pTask->taskCtx;
+  SCtgDbVgCtx* ctx = task.taskCtx;
 
   memcpy(ctx->dbFName, dbFName, sizeof(ctx->dbFName));
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized, dbFName:%s", pJob->queryId, taskIdx, pTask->type, dbFName);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized, dbFName:%s", pJob->queryId, taskIdx, task.type, dbFName);
 
   return TSDB_CODE_SUCCESS;
 }
 
 int32_t ctgInitGetDbCfgTask(SCtgJob *pJob, int32_t taskIdx, char *dbFName) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_DB_CFG;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
+  task.type = CTG_TASK_GET_DB_CFG;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
 
-  pTask->taskCtx = taosMemoryCalloc(1, sizeof(SCtgDbCfgCtx));
-  if (NULL == pTask->taskCtx) {
+  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgDbCfgCtx));
+  if (NULL == task.taskCtx) {
     CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
-  SCtgDbCfgCtx* ctx = pTask->taskCtx;
+  SCtgDbCfgCtx* ctx = task.taskCtx;
 
   memcpy(ctx->dbFName, dbFName, sizeof(ctx->dbFName));
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized, dbFName:%s", pJob->queryId, taskIdx, pTask->type, dbFName);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized, dbFName:%s", pJob->queryId, taskIdx, task.type, dbFName);
 
   return TSDB_CODE_SUCCESS;
 }
 
 int32_t ctgInitGetTbHashTask(SCtgJob *pJob, int32_t taskIdx, SName *name) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_TB_HASH;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
+  task.type = CTG_TASK_GET_TB_HASH;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
 
-  pTask->taskCtx = taosMemoryCalloc(1, sizeof(SCtgTbHashCtx));
-  if (NULL == pTask->taskCtx) {
+  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgTbHashCtx));
+  if (NULL == task.taskCtx) {
     CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
-  SCtgTbHashCtx* ctx = pTask->taskCtx;
+  SCtgTbHashCtx* ctx = task.taskCtx;
   ctx->pName = taosMemoryMalloc(sizeof(*name));
   if (NULL == ctx->pName) {
-    taosMemoryFree(pTask->taskCtx);
+    taosMemoryFree(task.taskCtx);
     CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
   memcpy(ctx->pName, name, sizeof(*name));
   tNameGetFullDbName(ctx->pName, ctx->dbFName);
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized, tableName:%s", pJob->queryId, taskIdx, pTask->type, name->tname);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized, tableName:%s", pJob->queryId, taskIdx, task.type, name->tname);
 
   return TSDB_CODE_SUCCESS;
 }
 
 int32_t ctgInitGetQnodeTask(SCtgJob *pJob, int32_t taskIdx) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_QNODE;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
-  pTask->taskCtx = NULL;
+  task.type = CTG_TASK_GET_QNODE;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
+  task.taskCtx = NULL;
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized", pJob->queryId, taskIdx, pTask->type);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized", pJob->queryId, taskIdx, task.type);
 
   return TSDB_CODE_SUCCESS;
 }
 
 int32_t ctgInitGetIndexTask(SCtgJob *pJob, int32_t taskIdx, char *name) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_INDEX;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
+  task.type = CTG_TASK_GET_INDEX;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
 
-  pTask->taskCtx = taosMemoryCalloc(1, sizeof(SCtgIndexCtx));
-  if (NULL == pTask->taskCtx) {
+  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgIndexCtx));
+  if (NULL == task.taskCtx) {
     CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
-  SCtgIndexCtx* ctx = pTask->taskCtx;
+  SCtgIndexCtx* ctx = task.taskCtx;
 
   strcpy(ctx->indexFName, name);
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized, indexFName:%s", pJob->queryId, taskIdx, pTask->type, name);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized, indexFName:%s", pJob->queryId, taskIdx, task.type, name);
 
   return TSDB_CODE_SUCCESS;
 }
 
 int32_t ctgInitGetUdfTask(SCtgJob *pJob, int32_t taskIdx, char *name) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_UDF;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
+  task.type = CTG_TASK_GET_UDF;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
 
-  pTask->taskCtx = taosMemoryCalloc(1, sizeof(SCtgUdfCtx));
-  if (NULL == pTask->taskCtx) {
+  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgUdfCtx));
+  if (NULL == task.taskCtx) {
     CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
-  SCtgUdfCtx* ctx = pTask->taskCtx;
+  SCtgUdfCtx* ctx = task.taskCtx;
 
   strcpy(ctx->udfName, name);
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized, udfName:%s", pJob->queryId, taskIdx, pTask->type, name);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized, udfName:%s", pJob->queryId, taskIdx, task.type, name);
 
   return TSDB_CODE_SUCCESS;
 }
 
 int32_t ctgInitGetUserTask(SCtgJob *pJob, int32_t taskIdx, SUserAuthInfo *user) {
-  SCtgTask *pTask = taosArrayGet(pJob->pTasks, taskIdx);
+  SCtgTask task = {0};
 
-  pTask->type = CTG_TASK_GET_USER;
-  pTask->taskId = taskIdx;
-  pTask->pJob = pJob;
+  task.type = CTG_TASK_GET_USER;
+  task.taskId = taskIdx;
+  task.pJob = pJob;
 
-  pTask->taskCtx = taosMemoryCalloc(1, sizeof(SCtgUserCtx));
-  if (NULL == pTask->taskCtx) {
+  task.taskCtx = taosMemoryCalloc(1, sizeof(SCtgUserCtx));
+  if (NULL == task.taskCtx) {
    CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
   }
 
-  SCtgUserCtx* ctx = pTask->taskCtx;
+  SCtgUserCtx* ctx = task.taskCtx;
 
   memcpy(&ctx->user, user, sizeof(*user));
 
-  qDebug("QID:%" PRIx64 " task %d type %d initialized, user:%s", pJob->queryId, taskIdx, pTask->type, user->user);
+  taosArrayPush(pJob->pTasks, &task);
+
+  qDebug("QID:%" PRIx64 " task %d type %d initialized, user:%s", pJob->queryId, taskIdx, task.type, user->user);
 
   return TSDB_CODE_SUCCESS;
 }
 
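Reviewer note: every ctgInit*Task above stops writing through a slot pointer from taosArrayGet (the slot may not exist yet on a freshly created array) and instead fills a local SCtgTask and pushes a copy. A sketch of the same pattern with a tiny stand-in array type, assuming the push copies the element by value as SArray does:

#include <stdlib.h>
#include <string.h>

typedef struct {
  void  *data;
  size_t elemSize;
  size_t size, cap;
} Array;  /* tiny stand-in for SArray */

static int arrayPush(Array *a, const void *elem) {
  if (a->size == a->cap) {
    size_t cap = a->cap ? a->cap * 2 : 4;
    void *p = realloc(a->data, cap * a->elemSize);
    if (!p) return -1;
    a->data = p;
    a->cap = cap;
  }
  memcpy((char *)a->data + a->size * a->elemSize, elem, a->elemSize);
  a->size++;
  return 0;
}

typedef struct {
  int   type;
  int   taskId;
  void *taskCtx;
} Task;  /* stand-in for SCtgTask */

/* Safe pattern: fully initialize a local value, then push a copy. */
static int initTask(Array *tasks, int taskIdx, int type) {
  Task task = {0};
  task.type = type;
  task.taskId = taskIdx;
  task.taskCtx = calloc(1, 64);
  if (NULL == task.taskCtx) return -1;
  return arrayPush(tasks, &task);
}

int main(void) {
  Array tasks = {.elemSize = sizeof(Task)};
  return initTask(&tasks, 0, 1);
}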
@@ -222,7 +238,7 @@ int32_t ctgInitJob(CTG_PARAMS, SCtgJob** job, uint64_t reqId, const SCatalogReq*
   pJob->userFp = fp;
   pJob->pCtg = pCtg;
   pJob->pTrans = pTrans;
-  pJob->pMgmtEps = pMgmtEps;
+  pJob->pMgmtEps = *pMgmtEps;
   pJob->userParam = param;
 
   pJob->tbMetaNum = tbMetaNum;
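Reviewer note: ctgInitJob now embeds a copy of the management endpoint set instead of borrowing the caller's pointer, which matters once the job runs asynchronously and may outlive the caller's frame. A minimal sketch, with stand-in types for SEpSet and SCtgJob:

typedef struct {
  int  inUse;
  char fqdn[3][64];
} EpSet;  /* stand-in for SEpSet */

typedef struct {
  EpSet mgmtEps;  /* embedded copy, not a borrowed pointer */
} Job;  /* stand-in for SCtgJob */

static void jobInit(Job *pJob, const EpSet *pMgmtEps) {
  /* Copy by value: the caller may free or reuse its EpSet while the
   * async job is still running. */
  pJob->mgmtEps = *pMgmtEps;
}

int main(void) {
  EpSet eps = {.inUse = 0};
  Job job;
  jobInit(&job, &eps);
  return job.mgmtEps.inUse;
}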
@@ -303,15 +319,13 @@ _return:
 int32_t ctgDumpTbMetaRes(SCtgTask* pTask) {
   SCtgJob* pJob = pTask->pJob;
   if (NULL == pJob->jobRes.pTableMeta) {
-    pJob->jobRes.pTableMeta = taosArrayInit(pJob->tbMetaNum, sizeof(STableMeta));
+    pJob->jobRes.pTableMeta = taosArrayInit(pJob->tbMetaNum, POINTER_BYTES);
     if (NULL == pJob->jobRes.pTableMeta) {
       CTG_ERR_RET(TSDB_CODE_OUT_OF_MEMORY);
     }
   }
 
-  taosArrayPush(pJob->jobRes.pTableMeta, pTask->res);
-
-  taosMemoryFreeClear(pTask->res);
+  taosArrayPush(pJob->jobRes.pTableMeta, &pTask->res);
 
   return TSDB_CODE_SUCCESS;
 }
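Reviewer note: the result array switches from copying STableMeta structs (whose real size varies with the trailing schema array) to storing owned pointers, so the push hands the pointer to the job result instead of shallow-copying and freeing it. A stand-in sketch of that ownership transfer:

#include <stdlib.h>

typedef struct { int vgId; } TableMeta;  /* stand-in for STableMeta */

typedef struct {
  TableMeta **items;  /* element size = sizeof(void*), like POINTER_BYTES */
  int         count, cap;
} MetaPtrArray;

/* The array takes ownership of *ppMeta; the caller must not free it. */
static int pushOwned(MetaPtrArray *a, TableMeta **ppMeta) {
  if (a->count == a->cap) {
    int cap = a->cap ? a->cap * 2 : 4;
    TableMeta **p = realloc(a->items, (size_t)cap * sizeof(*p));
    if (!p) return -1;
    a->items = p;
    a->cap = cap;
  }
  a->items[a->count++] = *ppMeta;
  *ppMeta = NULL;
  return 0;
}

int main(void) {
  MetaPtrArray res = {0};
  TableMeta *meta = calloc(1, sizeof(*meta));
  return pushOwned(&res, &meta);
}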
@@ -340,7 +354,7 @@ int32_t ctgDumpTbHashRes(SCtgTask* pTask) {
     }
   }
 
-  taosArrayPush(pJob->jobRes.pTableHash, &pTask->res);
+  taosArrayPush(pJob->jobRes.pTableHash, pTask->res);
 
   return TSDB_CODE_SUCCESS;
 }
@@ -376,7 +390,7 @@ int32_t ctgDumpDbCfgRes(SCtgTask* pTask) {
     }
   }
 
-  taosArrayPush(pJob->jobRes.pDbCfg, &pTask->res);
+  taosArrayPush(pJob->jobRes.pDbCfg, pTask->res);
 
   return TSDB_CODE_SUCCESS;
 }
@@ -451,7 +465,7 @@ int32_t ctgHandleGetTbMetaRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *
   SCtgTbMetaCtx* ctx = (SCtgTbMetaCtx*)pTask->taskCtx;
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
 
   switch (reqType) {
     case TDMT_MND_USE_DB: {
@@ -529,7 +543,7 @@ int32_t ctgHandleGetTbMetaRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *
 
       taosMemoryFreeClear(pOut->tbMeta);
 
-      CTG_ERR_JRET(ctgGetTbMetaFromMnode(CTG_PARAMS_LIST(), ctx->pName, NULL, pTask));
+      CTG_RET(ctgGetTbMetaFromMnode(CTG_PARAMS_LIST(), ctx->pName, NULL, pTask));
     } else if (CTG_IS_META_BOTH(pOut->metaType)) {
       int32_t exist = 0;
       if (!CTG_FLAG_IS_FORCE_UPDATE(ctx->flag)) {
@@ -538,7 +552,7 @@ int32_t ctgHandleGetTbMetaRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *
 
       if (0 == exist) {
         TSWAP(pTask->msgCtx.lastOut, pTask->msgCtx.out);
-        CTG_ERR_JRET(ctgGetTbMetaFromMnodeImpl(CTG_PARAMS_LIST(), pOut->dbFName, pOut->tbName, NULL, pTask));
+        CTG_RET(ctgGetTbMetaFromMnodeImpl(CTG_PARAMS_LIST(), pOut->dbFName, pOut->tbName, NULL, pTask));
       } else {
         taosMemoryFreeClear(pOut->tbMeta);
 
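Reviewer note: assuming the usual reading of these macros (CTG_ERR_JRET jumps to the _return cleanup label on error, CTG_RET returns the code directly; this is an inference from their use here, not their verbatim definitions), the change makes the nested fetch a proper tail call whose success path no longer falls through the cleanup below _return:

#include <stdint.h>

#define CTG_ERR_JRET(c) do { code = (c); if (code) goto _return; } while (0)
#define CTG_RET(c)      return (c)

static int32_t step1(void)   { return 0; }
static int32_t step2(void)   { return 0; }
static void    cleanup(void) {}

static int32_t handleRsp(void) {
  int32_t code = 0;
  CTG_ERR_JRET(step1());  /* an error here still routes through cleanup */
  CTG_RET(step2());       /* tail call: its code is returned directly */
_return:
  cleanup();
  return code;
}

int main(void) { return handleRsp(); }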
@@ -598,7 +612,7 @@ int32_t ctgHandleGetDbVgRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pM
   SCtgDbVgCtx* ctx = (SCtgDbVgCtx*)pTask->taskCtx;
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
 
   switch (reqType) {
     case TDMT_MND_USE_DB: {
@@ -632,7 +646,7 @@ int32_t ctgHandleGetTbHashRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *
   SCtgTbHashCtx* ctx = (SCtgTbHashCtx*)pTask->taskCtx;
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
 
   switch (reqType) {
     case TDMT_MND_USE_DB: {
@@ -724,7 +738,7 @@ int32_t ctgHandleGetUserRsp(SCtgTask* pTask, int32_t reqType, const SDataBuf *pM
   SCtgUserCtx* ctx = (SCtgUserCtx*)pTask->taskCtx;
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   bool pass = false;
   SGetUserAuthRsp* pOut = (SGetUserAuthRsp*)pTask->msgCtx.out;
 
@@ -756,7 +770,7 @@ _return:
   }
 
   ctgPutUpdateUserToQueue(pCtg, pOut, false);
-  pTask->msgCtx.out = NULL;
+  taosMemoryFreeClear(pTask->msgCtx.out);
 
   ctgHandleTaskEnd(pTask, code);
 
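Reviewer note: the old code only cleared the pointer, which appears to leak the response buffer once the queued user update has copied what it needs; taosMemoryFreeClear frees and NULLs in one step. A stand-in macro with the same shape:

#include <stdlib.h>

/* free + NULL in one step; a second FREE_CLEAR on the same pointer is a
 * harmless no-op, which is what makes the pattern safe in cleanup paths. */
#define FREE_CLEAR(p) \
  do {                \
    free(p);          \
    (p) = NULL;       \
  } while (0)

int main(void) {
  char *out = malloc(16);
  FREE_CLEAR(out);  /* like taosMemoryFreeClear(pTask->msgCtx.out) */
  FREE_CLEAR(out);  /* free(NULL) is defined; the pointer stays NULL */
  return 0;
}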
@@ -766,7 +780,7 @@ _return:
 int32_t ctgAsyncRefreshTbMeta(SCtgTask *pTask) {
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   int32_t code = 0;
   SCtgTbMetaCtx* ctx = (SCtgTbMetaCtx*)pTask->taskCtx;
 
@@ -788,7 +802,7 @@ int32_t ctgAsyncRefreshTbMeta(SCtgTask *pTask) {
   tNameGetFullDbName(ctx->pName, dbFName);
 
   CTG_ERR_RET(ctgAcquireVgInfoFromCache(pCtg, dbFName, &dbCache));
-  if (NULL == dbCache) {
+  if (dbCache) {
     SVgroupInfo vgInfo = {0};
     CTG_ERR_RET(ctgGetVgInfoFromHashValue(pCtg, dbCache->vgInfo, ctx->pName, &vgInfo));
 
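Reviewer note: the condition was inverted; the old branch dereferenced dbCache->vgInfo precisely when the cache lookup returned nothing. A minimal sketch of the corrected guard, with stand-in types:

typedef struct { int vgId; } VgInfo;
typedef struct { VgInfo *vgInfo; } DBCache;

static int lookupVg(DBCache *dbCache, VgInfo *out) {
  if (dbCache) {  /* correct: use the cache only on a hit */
    *out = *dbCache->vgInfo;
    return 0;
  }
  return -1;  /* miss: the caller falls back to fetching from the server */
}

int main(void) {
  VgInfo vg = {0};
  return lookupVg((DBCache *)0, &vg) == -1 ? 0 : 1;
}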
@@ -817,7 +831,7 @@ _return:
 int32_t ctgLaunchGetTbMetaTask(SCtgTask *pTask) {
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
 
   CTG_ERR_RET(ctgGetTbMetaFromCache(CTG_PARAMS_LIST(), (SCtgTbMetaCtx*)pTask->taskCtx, (STableMeta**)&pTask->res));
   if (pTask->res) {
@@ -834,7 +848,7 @@ int32_t ctgLaunchGetDbVgTask(SCtgTask *pTask) {
   int32_t code = 0;
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   SCtgDBCache *dbCache = NULL;
   SCtgDbVgCtx* pCtx = (SCtgDbVgCtx*)pTask->taskCtx;
 
@@ -866,7 +880,7 @@ int32_t ctgLaunchGetTbHashTask(SCtgTask *pTask) {
   int32_t code = 0;
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   SCtgDBCache *dbCache = NULL;
   SCtgTbHashCtx* pCtx = (SCtgTbHashCtx*)pTask->taskCtx;
 
@@ -901,7 +915,7 @@ _return:
 int32_t ctgLaunchGetQnodeTask(SCtgTask *pTask) {
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
 
   CTG_ERR_RET(ctgGetQnodeListFromMnode(CTG_PARAMS_LIST(), NULL, pTask));
 
@@ -911,7 +925,7 @@ int32_t ctgLaunchGetQnodeTask(SCtgTask *pTask) {
 int32_t ctgLaunchGetDbCfgTask(SCtgTask *pTask) {
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   SCtgDbCfgCtx* pCtx = (SCtgDbCfgCtx*)pTask->taskCtx;
 
   CTG_ERR_RET(ctgGetDBCfgFromMnode(CTG_PARAMS_LIST(), pCtx->dbFName, NULL, pTask));
@@ -922,7 +936,7 @@ int32_t ctgLaunchGetDbCfgTask(SCtgTask *pTask) {
 int32_t ctgLaunchGetIndexTask(SCtgTask *pTask) {
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   SCtgIndexCtx* pCtx = (SCtgIndexCtx*)pTask->taskCtx;
 
   CTG_ERR_RET(ctgGetIndexInfoFromMnode(CTG_PARAMS_LIST(), pCtx->indexFName, NULL, pTask));
@@ -933,7 +947,7 @@ int32_t ctgLaunchGetIndexTask(SCtgTask *pTask) {
 int32_t ctgLaunchGetUdfTask(SCtgTask *pTask) {
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   SCtgUdfCtx* pCtx = (SCtgUdfCtx*)pTask->taskCtx;
 
   CTG_ERR_RET(ctgGetUdfInfoFromMnode(CTG_PARAMS_LIST(), pCtx->udfName, NULL, pTask));
@@ -944,7 +958,7 @@ int32_t ctgLaunchGetUdfTask(SCtgTask *pTask) {
 int32_t ctgLaunchGetUserTask(SCtgTask *pTask) {
   SCatalog* pCtg = pTask->pJob->pCtg;
   void *pTrans = pTask->pJob->pTrans;
-  const SEpSet* pMgmtEps = pTask->pJob->pMgmtEps;
+  const SEpSet* pMgmtEps = &pTask->pJob->pMgmtEps;
   SCtgUserCtx* pCtx = (SCtgUserCtx*)pTask->taskCtx;
   bool inCache = false;
   bool pass = false;
 
@@ -248,7 +248,7 @@ int32_t ctgReadTbMetaFromCache(SCatalog* pCtg, SCtgTbMetaCtx* ctx, STableMeta**
 
   ctgAcquireDBCache(pCtg, dbFName, &dbCache);
   if (NULL == dbCache) {
-    ctgDebug("db %s not in cache", ctx->pName->tname);
+    ctgDebug("db %d.%s not in cache", ctx->pName->acctId, ctx->pName->dbname);
     return TSDB_CODE_SUCCESS;
   }
 
@@ -322,9 +322,10 @@ _return:
   CTG_RET(code);
 }
 
-int32_t ctgReadTbSverFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *sver, int32_t *tbType, uint64_t *suid,
+int32_t ctgReadTbVerFromCache(SCatalog *pCtg, const SName *pTableName, int32_t *sver, int32_t *tver, int32_t *tbType, uint64_t *suid,
                                char *stbName) {
   *sver = -1;
+  *tver = -1;
 
   if (NULL == pCtg->dbCache) {
     ctgDebug("empty tbmeta cache, tbName:%s", pTableName->tname);
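Reviewer note: the renamed reader now reports both versions, defaulting each to -1 when the cache has nothing. A sketch of that contract with stand-in types:

#include <stdint.h>

typedef struct {
  int32_t sversion;
  int32_t tversion;
} TbMeta;  /* stand-in for the cached table meta */

static void readTbVerFromCache(const TbMeta *cached, int32_t *sver, int32_t *tver) {
  *sver = -1;  /* -1 means "not available from the cache" */
  *tver = -1;
  if (cached) {
    *sver = cached->sversion;
    *tver = cached->tversion;
  }
}

int main(void) {
  int32_t sver, tver;
  readTbVerFromCache((const TbMeta *)0, &sver, &tver);
  return (sver == -1 && tver == -1) ? 0 : 1;
}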
@@ -348,6 +349,7 @@ int32_t ctgReadTbSverFromCache(SCatalog *pCtg, const SName *pTableName, int32_t
     *suid = tbMeta->suid;
     if (*tbType != TSDB_CHILD_TABLE) {
       *sver = tbMeta->sversion;
+      *tver = tbMeta->tversion;
     }
   }
   CTG_UNLOCK(CTG_READ, &dbCache->tbCache.metaLock);
@@ -359,7 +361,7 @@ int32_t ctgReadTbSverFromCache(SCatalog *pCtg, const SName *pTableName, int32_t
 
   if (*tbType != TSDB_CHILD_TABLE) {
     ctgReleaseDBCache(pCtg, dbCache);
-    ctgDebug("Got sver %d from cache, type:%d, dbFName:%s, tbName:%s", *sver, *tbType, dbFName, pTableName->tname);
+    ctgDebug("Got sver %d tver %d from cache, type:%d, dbFName:%s, tbName:%s", *sver, *tver, *tbType, dbFName, pTableName->tname);
 
     return TSDB_CODE_SUCCESS;
   }
@@ -391,12 +393,13 @@ int32_t ctgReadTbSverFromCache(SCatalog *pCtg, const SName *pTableName, int32_t
   stbName[nameLen] = 0;
 
   *sver = (*stbMeta)->sversion;
+  *tver = (*stbMeta)->tversion;
 
   CTG_UNLOCK(CTG_READ, &dbCache->tbCache.stbLock);
 
   ctgReleaseDBCache(pCtg, dbCache);
 
-  ctgDebug("Got sver %d from cache, type:%d, dbFName:%s, tbName:%s", *sver, *tbType, dbFName, pTableName->tname);
+  ctgDebug("Got sver %d tver %d from cache, type:%d, dbFName:%s, tbName:%s", *sver, *tver, *tbType, dbFName, pTableName->tname);
 
   return TSDB_CODE_SUCCESS;
 }
@@ -715,7 +718,7 @@ int32_t ctgPutUpdateUserToQueue(SCatalog* pCtg, SGetUserAuthRsp *pAuth, bool syn
   action.data = msg;
 
   CTG_ERR_JRET(ctgPushAction(pCtg, &action));
 
   return TSDB_CODE_SUCCESS;
 
 _return:
@@ -1457,10 +1460,15 @@ _return:
   CTG_RET(code);
 }
 
+void ctgUpdateThreadFuncUnexpectedStopped(void) {
+  if (CTG_IS_LOCKED(&gCtgMgmt.lock) > 0) CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
+}
+
 void* ctgUpdateThreadFunc(void* param) {
   setThreadName("catalog");
 
+#ifdef WINDOWS
+  atexit(ctgUpdateThreadFuncUnexpectedStopped);
+#endif
   qInfo("catalog update thread started");
 
   CTG_LOCK(CTG_READ, &gCtgMgmt.lock);
@@ -1494,7 +1502,7 @@ void* ctgUpdateThreadFunc(void* param) {
     ctgdShowClusterCache(pCtg);
   }
 
-  CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
+  if (CTG_IS_LOCKED(&gCtgMgmt.lock)) CTG_UNLOCK(CTG_READ, &gCtgMgmt.lock);
 
   qInfo("catalog update thread stopped");
 
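Reviewer note: the two hunks above cooperate: on Windows an atexit hook releases the management lock if the update thread stops unexpectedly while holding it, and the normal unlock becomes conditional so the two paths never unlock twice. A sketch of the pattern, with a pthread mutex standing in for the CTG_LOCK read-write wrappers:

#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

static pthread_mutex_t gLock   = PTHREAD_MUTEX_INITIALIZER;
static bool            gLocked = false;

static void unlockIfHeld(void) {
  if (gLocked) {  /* mirrors: if (CTG_IS_LOCKED(...)) CTG_UNLOCK(...) */
    gLocked = false;
    pthread_mutex_unlock(&gLock);
  }
}

static void *updateThread(void *arg) {
  (void)arg;
  atexit(unlockIfHeld);  /* runs even when the process exits unexpectedly */
  pthread_mutex_lock(&gLock);
  gLocked = true;
  /* ... drain the update queue ... */
  unlockIfHeld();  /* normal path; idempotent with the atexit hook */
  return NULL;
}

int main(void) {
  pthread_t t;
  pthread_create(&t, NULL, updateThread, NULL);
  pthread_join(t, NULL);
  return 0;
}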
@@ -21,6 +21,179 @@
 extern SCatalogMgmt gCtgMgmt;
 SCtgDebug gCTGDebug = {0};
 
+void ctgdUserCallback(SMetaData* pResult, void* param, int32_t code) {
+  ASSERT(*(int32_t*)param == 1);
+  taosMemoryFree(param);
+
+  qDebug("async call result: %s", tstrerror(code));
+  if (NULL == pResult) {
+    qDebug("empty meta result");
+    return;
+  }
+
+  int32_t num = 0;
+
+  if (pResult->pTableMeta && taosArrayGetSize(pResult->pTableMeta) > 0) {
+    num = taosArrayGetSize(pResult->pTableMeta);
+    for (int32_t i = 0; i < num; ++i) {
+      STableMeta *p = *(STableMeta **)taosArrayGet(pResult->pTableMeta, i);
+      STableComInfo *c = &p->tableInfo;
+
+      if (TSDB_CHILD_TABLE == p->tableType) {
+        qDebug("table meta: type:%d, vgId:%d, uid:%" PRIx64 ",suid:%" PRIx64, p->tableType, p->vgId, p->uid, p->suid);
+      } else {
+        qDebug("table meta: type:%d, vgId:%d, uid:%" PRIx64 ",suid:%" PRIx64 ",sv:%d, tv:%d, tagNum:%d, precision:%d, colNum:%d, rowSize:%d",
+               p->tableType, p->vgId, p->uid, p->suid, p->sversion, p->tversion, c->numOfTags, c->precision, c->numOfColumns, c->rowSize);
+      }
+
+      int32_t colNum = c->numOfColumns + c->numOfTags;
+      for (int32_t j = 0; j < colNum; ++j) {
+        SSchema *s = &p->schema[j];
+        qDebug("[%d] name:%s, type:%d, colId:%d, bytes:%d", j, s->name, s->type, s->colId, s->bytes);
+      }
+    }
+  } else {
+    qDebug("empty table meta");
+  }
+
+  if (pResult->pDbVgroup && taosArrayGetSize(pResult->pDbVgroup) > 0) {
+    num = taosArrayGetSize(pResult->pDbVgroup);
+    for (int32_t i = 0; i < num; ++i) {
+      SArray *pDb = *(SArray**)taosArrayGet(pResult->pDbVgroup, i);
+      int32_t vgNum = taosArrayGetSize(pDb);
+      qDebug("db %d vgInfo:", i);
+      for (int32_t j = 0; j < vgNum; ++j) {
+        SVgroupInfo* pInfo = taosArrayGet(pDb, j);
+        qDebug("vg %d info: vgId:%d", j, pInfo->vgId);
+      }
+    }
+  } else {
+    qDebug("empty db vgroup");
+  }
+
+  if (pResult->pTableHash && taosArrayGetSize(pResult->pTableHash) > 0) {
+    num = taosArrayGetSize(pResult->pTableHash);
+    for (int32_t i = 0; i < num; ++i) {
+      SVgroupInfo* pInfo = taosArrayGet(pResult->pTableHash, i);
+      qDebug("table %d vg info: vgId:%d", i, pInfo->vgId);
+    }
+  } else {
+    qDebug("empty table hash vgroup");
+  }
+
+  if (pResult->pUdfList && taosArrayGetSize(pResult->pUdfList) > 0) {
+    num = taosArrayGetSize(pResult->pUdfList);
+    for (int32_t i = 0; i < num; ++i) {
+      SFuncInfo* pInfo = taosArrayGet(pResult->pUdfList, i);
+      qDebug("udf %d info: name:%s, funcType:%d", i, pInfo->name, pInfo->funcType);
+    }
+  } else {
+    qDebug("empty udf info");
+  }
+
+  if (pResult->pDbCfg && taosArrayGetSize(pResult->pDbCfg) > 0) {
+    num = taosArrayGetSize(pResult->pDbCfg);
+    for (int32_t i = 0; i < num; ++i) {
+      SDbCfgInfo* pInfo = taosArrayGet(pResult->pDbCfg, i);
+      qDebug("db %d info: numOFVgroups:%d, numOfStables:%d", i, pInfo->numOfVgroups, pInfo->numOfStables);
+    }
+  } else {
+    qDebug("empty db cfg info");
+  }
+
+  if (pResult->pUser && taosArrayGetSize(pResult->pUser) > 0) {
+    num = taosArrayGetSize(pResult->pUser);
+    for (int32_t i = 0; i < num; ++i) {
+      bool* auth = taosArrayGet(pResult->pUser, i);
+      qDebug("user auth %d info: %d", i, *auth);
+    }
+  } else {
+    qDebug("empty user auth info");
+  }
+
+  if (pResult->pQnodeList && taosArrayGetSize(pResult->pQnodeList) > 0) {
+    num = taosArrayGetSize(pResult->pQnodeList);
+    for (int32_t i = 0; i < num; ++i) {
+      SQueryNodeAddr* qaddr = taosArrayGet(pResult->pQnodeList, i);
+      qDebug("qnode %d info: id:%d", i, qaddr->nodeId);
+    }
+  } else {
+    qDebug("empty qnode info");
+  }
+}
+
+int32_t ctgdLaunchAsyncCall(SCatalog* pCtg, void *pTrans, const SEpSet* pMgmtEps, uint64_t reqId) {
+  int32_t code = 0;
+  SCatalogReq req = {0};
+  req.pTableMeta = taosArrayInit(2, sizeof(SName));
+  req.pDbVgroup = taosArrayInit(2, TSDB_DB_FNAME_LEN);
+  req.pTableHash = taosArrayInit(2, sizeof(SName));
+  req.pUdf = taosArrayInit(2, TSDB_FUNC_NAME_LEN);
+  req.pDbCfg = taosArrayInit(2, TSDB_DB_FNAME_LEN);
+  req.pIndex = NULL;//taosArrayInit(2, TSDB_INDEX_FNAME_LEN);
+  req.pUser = taosArrayInit(2, sizeof(SUserAuthInfo));
+  req.qNodeRequired = true;
+
+  SName name = {0};
+  char dbFName[TSDB_DB_FNAME_LEN] = {0};
+  char funcName[TSDB_FUNC_NAME_LEN] = {0};
+  SUserAuthInfo user = {0};
+
+  tNameFromString(&name, "1.db1.tb1", T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE);
+  taosArrayPush(req.pTableMeta, &name);
+  taosArrayPush(req.pTableHash, &name);
+  tNameFromString(&name, "1.db1.st1", T_NAME_ACCT | T_NAME_DB | T_NAME_TABLE);
+  taosArrayPush(req.pTableMeta, &name);
+  taosArrayPush(req.pTableHash, &name);
+
+  strcpy(dbFName, "1.db1");
+  taosArrayPush(req.pDbVgroup, dbFName);
+  taosArrayPush(req.pDbCfg, dbFName);
+  strcpy(dbFName, "1.db2");
+  taosArrayPush(req.pDbVgroup, dbFName);
+  taosArrayPush(req.pDbCfg, dbFName);
+
+  strcpy(funcName, "udf1");
+  taosArrayPush(req.pUdf, funcName);
+  strcpy(funcName, "udf2");
+  taosArrayPush(req.pUdf, funcName);
+
+  strcpy(user.user, "root");
+  strcpy(user.dbFName, "1.db1");
+  user.type = AUTH_TYPE_READ;
+  taosArrayPush(req.pUser, &user);
+  user.type = AUTH_TYPE_WRITE;
+  taosArrayPush(req.pUser, &user);
+  user.type = AUTH_TYPE_OTHER;
+  taosArrayPush(req.pUser, &user);
+
+  strcpy(user.user, "user1");
+  strcpy(user.dbFName, "1.db2");
+  user.type = AUTH_TYPE_READ;
+  taosArrayPush(req.pUser, &user);
+  user.type = AUTH_TYPE_WRITE;
+  taosArrayPush(req.pUser, &user);
+  user.type = AUTH_TYPE_OTHER;
+  taosArrayPush(req.pUser, &user);
+
+  int32_t *param = taosMemoryCalloc(1, sizeof(int32_t));
+  *param = 1;
+
+  int64_t jobId = 0;
+  CTG_ERR_JRET(catalogAsyncGetAllMeta(pCtg, pTrans, pMgmtEps, reqId, &req, ctgdUserCallback, param, &jobId));
+
+_return:
+
+  taosArrayDestroy(req.pTableMeta);
+  taosArrayDestroy(req.pDbVgroup);
+  taosArrayDestroy(req.pTableHash);
+  taosArrayDestroy(req.pUdf);
+  taosArrayDestroy(req.pDbCfg);
+  taosArrayDestroy(req.pUser);
+
+  CTG_RET(code);
+}
+
 int32_t ctgdEnableDebug(char *option) {
   if (0 == strcasecmp(option, "lock")) {
     gCTGDebug.lockEnable = true;
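Reviewer note: the new debug helper above exercises catalogAsyncGetAllMeta end to end. Stripped to its essentials, the contract is a completion callback plus an opaque, caller-allocated parameter that the callback owns and frees; a self-contained stand-in:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { int resultCount; } MetaData;  /* stand-in for SMetaData */
typedef void (*MetaCallback)(MetaData *pResult, void *param, int32_t code);

/* The service invokes the callback exactly once, passing `param` back
 * untouched; the callback owns `param` and must free it. */
static void asyncGetMeta(MetaCallback fp, void *param) {
  MetaData md = {.resultCount = 1};
  fp(&md, param, 0);
}

static void onDone(MetaData *pResult, void *param, int32_t code) {
  printf("code:%d results:%d tag:%d\n", (int)code,
         pResult ? pResult->resultCount : 0, *(int *)param);
  free(param);
}

int main(void) {
  int *tag = malloc(sizeof(*tag));
  *tag = 1;  /* the real helper asserts this tag on completion */
  asyncGetMeta(onDone, tag);
  return 0;
}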
@@ -264,10 +264,11 @@ int32_t ctgGetQnodeListFromMnode(CTG_PARAMS, SArray *out, SCtgTask* pTask) {
   char *msg = NULL;
   int32_t msgLen = 0;
   int32_t reqType = TDMT_MND_QNODE_LIST;
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get qnode list from mnode, mgmtEpInUse:%d", pMgmtEps->inUse);
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](NULL, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](NULL, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build qnode list msg failed, error:%s", tstrerror(code));
     CTG_ERR_RET(code);
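Reviewer note: this and the hunks below thread a malloc-like function pointer into queryBuildMsg, so task-driven (async) calls allocate the message with the heap allocator while the sync path keeps the RPC buffer allocator. A stand-in sketch of the selection (both allocators here are plain malloc; rpcAlloc stands in for rpcMallocCont):

#include <stdint.h>
#include <stdlib.h>

typedef void *(*MallocFp)(int32_t);

static void *heapAlloc(int32_t size) { return malloc((size_t)size); }
static void *rpcAlloc(int32_t size)  { return malloc((size_t)size); }

/* The builder no longer hard-codes its allocator. */
static int32_t buildMsg(void **msg, int32_t len, MallocFp mallocFp) {
  *msg = mallocFp(len);
  return *msg ? 0 : -1;
}

static int32_t getFromMnode(int async) {
  /* mirrors: pTask ? taosMemoryMalloc : rpcMallocCont */
  MallocFp mallocFp = async ? heapAlloc : rpcAlloc;
  void *msg = NULL;
  int32_t code = buildMsg(&msg, 128, mallocFp);
  free(msg);
  return code;
}

int main(void) { return getFromMnode(1); }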
@@ -301,10 +302,11 @@ int32_t ctgGetDBVgInfoFromMnode(CTG_PARAMS, SBuildUseDBInput *input, SUseDbOutpu
   char *msg = NULL;
   int32_t msgLen = 0;
   int32_t reqType = TDMT_MND_USE_DB;
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get db vgInfo from mnode, dbFName:%s", input->db);
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](input, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](input, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build use db msg failed, code:%x, db:%s", code, input->db);
     CTG_ERR_RET(code);
@@ -338,10 +340,11 @@ int32_t ctgGetDBCfgFromMnode(CTG_PARAMS, const char *dbFName, SDbCfgInfo *out, S
   char *msg = NULL;
   int32_t msgLen = 0;
   int32_t reqType = TDMT_MND_GET_DB_CFG;
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get db cfg from mnode, dbFName:%s", dbFName);
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)dbFName, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)dbFName, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build get db cfg msg failed, code:%x, db:%s", code, dbFName);
     CTG_ERR_RET(code);
@@ -375,10 +378,11 @@ int32_t ctgGetIndexInfoFromMnode(CTG_PARAMS, const char *indexName, SIndexInfo *
   char *msg = NULL;
   int32_t msgLen = 0;
   int32_t reqType = TDMT_MND_GET_INDEX;
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get index from mnode, indexName:%s", indexName);
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)indexName, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)indexName, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build get index msg failed, code:%x, db:%s", code, indexName);
     CTG_ERR_RET(code);
@@ -412,10 +416,11 @@ int32_t ctgGetUdfInfoFromMnode(CTG_PARAMS, const char *funcName, SFuncInfo *out,
   char *msg = NULL;
   int32_t msgLen = 0;
   int32_t reqType = TDMT_MND_RETRIEVE_FUNC;
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get udf info from mnode, funcName:%s", funcName);
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)funcName, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)funcName, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build get udf msg failed, code:%x, db:%s", code, funcName);
     CTG_ERR_RET(code);
@@ -449,10 +454,11 @@ int32_t ctgGetUserDbAuthFromMnode(CTG_PARAMS, const char *user, SGetUserAuthRsp
   char *msg = NULL;
   int32_t msgLen = 0;
   int32_t reqType = TDMT_MND_GET_USER_AUTH;
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get user auth from mnode, user:%s", user);
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)user, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)]((void *)user, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build get user auth msg failed, code:%x, db:%s", code, user);
     CTG_ERR_RET(code);
@@ -491,10 +497,11 @@ int32_t ctgGetTbMetaFromMnodeImpl(CTG_PARAMS, char *dbFName, char* tbName, STabl
   int32_t reqType = TDMT_MND_TABLE_META;
   char tbFName[TSDB_TABLE_FNAME_LEN];
   sprintf(tbFName, "%s.%s", dbFName, tbName);
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get table meta from mnode, tbFName:%s", tbFName);
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](&bInput, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](&bInput, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build mnode stablemeta msg failed, code:%x", code);
     CTG_ERR_RET(code);
@@ -537,6 +544,7 @@ int32_t ctgGetTbMetaFromVnode(CTG_PARAMS, const SName* pTableName, SVgroupInfo *
   int32_t reqType = TDMT_VND_TABLE_META;
   char tbFName[TSDB_TABLE_FNAME_LEN];
   sprintf(tbFName, "%s.%s", dbFName, pTableName->tname);
+  void*(*mallocFp)(int32_t) = pTask ? taosMemoryMalloc : rpcMallocCont;
 
   ctgDebug("try to get table meta from vnode, vgId:%d, tbFName:%s", vgroupInfo->vgId, tbFName);
 
@@ -544,7 +552,7 @@ int32_t ctgGetTbMetaFromVnode(CTG_PARAMS, const SName* pTableName, SVgroupInfo *
   char *msg = NULL;
   int32_t msgLen = 0;
 
-  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](&bInput, &msg, 0, &msgLen);
+  int32_t code = queryBuildMsg[TMSG_INDEX(reqType)](&bInput, &msg, 0, &msgLen, mallocFp);
   if (code) {
     ctgError("Build vnode tablemeta msg failed, code:%x, tbFName:%s", code, tbFName);
     CTG_ERR_RET(code);