Merge branch '3.0' into feature/TD-14481-3.0

commit fa1e38b6cc
@@ -52,7 +52,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6,

 :::info

-- To improve write efficiency, write in batches. The more records written in one batch, the higher the insert efficiency. However, a single record cannot exceed 16K, and the total length of one SQL statement cannot exceed 1M.
+- To improve write efficiency, write in batches. The more records written in one batch, the higher the insert efficiency. However, a single record cannot exceed 48K, and the total length of one SQL statement cannot exceed 1M.
 - TDengine supports concurrent multi-threaded writes. To further improve write speed, a client should open more than 20 threads writing at the same time. Once the thread count reaches a certain level, however, throughput can no longer be raised and may even fall, because frequent thread switching brings extra overhead.

 :::
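To make the batching advice above concrete, here is a minimal C sketch that sends several rows in one `INSERT` through the native client. It is illustrative only: the host, credentials, database, and table are placeholders, and it assumes the TDengine C client library (`taos.h`) and a reachable server.

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
  // Placeholder connection parameters; adjust for your deployment.
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "power", 6030);
  if (taos == NULL) {
    fprintf(stderr, "failed to connect\n");
    return 1;
  }
  // One statement carrying several rows: batching amortizes per-request
  // overhead. Keep each row under the per-row limit and the whole
  // statement under the 1M SQL-length limit.
  TAOS_RES *res = taos_query(taos,
      "INSERT INTO d1001 VALUES"
      " (1538548685000, 10.3, 219, 0.31)"
      " (1538548695000, 12.6, 218, 0.33)");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "insert failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```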
@@ -12,7 +12,7 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam

 1. The first column of a table must be of type TIMESTAMP, and the system automatically sets it as the primary key;
 2. The maximum length of a table name is 192;
-3. Each row of a table cannot exceed 16k characters; (note: each BINARY/NCHAR column also takes up an extra 2 bytes of storage)
+3. Each row of a table cannot exceed 48KB; (note: each BINARY/NCHAR column also takes up an extra 2 bytes of storage)
 4. A subtable name may only consist of letters, digits, and underscores, must not start with a digit, and is case-insensitive
 5. When using the binary or nchar data types, the maximum number of bytes must be specified, e.g. binary(20) means 20 bytes;
 6. To support more forms of table names, TDengine introduces the escape character "\`", which keeps table names from clashing with keywords and exempts them from the validity checks above. The length limit still applies. When the escape character is used, the content between the escape characters is no longer case-normalized.
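The escape-character rule in item 6 can be exercised from the C client as well; a hedged sketch (the table name and connection details are made up for illustration):

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 6030);
  if (taos == NULL) return 1;
  // A backtick-escaped name may start with a digit or match a keyword,
  // and its case is preserved; the table-name length limit still applies.
  TAOS_RES *res = taos_query(taos,
      "CREATE TABLE IF NOT EXISTS `1st_Device` (ts TIMESTAMP, v INT)");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "create failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```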
@@ -86,7 +86,7 @@ ALTER STABLE stb_name MODIFY COLUMN field_name data_type(length);
 ALTER STABLE stb_name ADD TAG new_tag_name tag_type;
 ```

-Adds a new tag to an STable and specifies the new tag's type. The total number of tags cannot exceed 128, and their total length cannot exceed 16k characters.
+Adds a new tag to an STable and specifies the new tag's type. The total number of tags cannot exceed 128, and their total length cannot exceed 16KB.

 ### Drop a Tag
@@ -7,9 +7,9 @@ title: Boundary Limits

 - The maximum length of a database name is 32.
 - The maximum length of a table name is 192, excluding the database name prefix and the separator
-- The maximum length of each row of data is 16k characters; since version 2.1.7.0, the maximum is 48k characters (note: each BINARY/NCHAR column in a data row also takes up an extra 2 bytes of storage).
+- The maximum length of each row of data is 48KB (note: each BINARY/NCHAR column in a data row also takes up an extra 2 bytes of storage).
 - The maximum length of a column name is 64; a table may have at most 4096 columns and needs at least 2, and the first column must be a timestamp. Note: versions before 2.1.7.0 (exclusive) allowed at most 1024 columns
-- The maximum length of a tag name is 64; at most 128 tags are allowed and at least 1 is required; the total length of tag values in a table cannot exceed 16k characters.
+- The maximum length of a tag name is 64; at most 128 tags are allowed and at least 1 is required; the total length of tag values in a table cannot exceed 16KB.
 - The maximum length of a SQL statement is 1048576 characters; it can be changed with the client configuration parameter maxSQLLength, within the range 65480 ~ 1048576.
 - A SELECT statement may return at most 4096 columns (function calls in the statement may also take up some column slots); when the limit is exceeded, explicitly specify fewer return columns to avoid execution errors. Note: before version 2.1.7.0 (exclusive) the maximum was 1024 columns
 - The number of databases, supertables, and tables is not limited by the system, only by system resources.
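The row-length rule is easy to check by hand: sum the fixed column widths plus the declared size of each BINARY/NCHAR column plus its 2-byte overhead. A small sketch with a hypothetical schema (the 48KB figure comes from the limits above):

```c
#include <stdio.h>

int main(void) {
  // Hypothetical schema: ts TIMESTAMP, v DOUBLE, note BINARY(1000).
  const int kMaxRowBytes = 48 * 1024; // 48KB row limit
  int row = 8                         // TIMESTAMP: 8 bytes
          + 8                         // DOUBLE: 8 bytes
          + (1000 + 2);               // BINARY(1000) + 2-byte overhead
  printf("row uses %d of %d bytes -> %s\n", row, kMaxRowBytes,
         row <= kMaxRowBytes ? "ok" : "too wide");
  return 0;
}
```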
@@ -23,17 +23,17 @@ title: TDengine Parameter Limits and Reserved Keywords
 The following characters are removed: `` ‘“`\ `` (single and double quotation marks, apostrophe, backslash, space)

 - Database name: must not contain "." or special characters, and must not exceed 32 characters
-- Table name: must not contain "." or special characters; together with its database name it must not exceed 192 characters; the maximum length of each data row is 16k characters
+- Table name: must not contain "." or special characters; together with its database name it must not exceed 192 bytes; the maximum length of each data row is 48KB
-- Table column names: must not contain special characters and must not exceed 64 characters
+- Table column names: must not contain special characters and must not exceed 64 bytes
 - Database, table, and column names must not start with a digit; the legal character set is "English letters, digits, and underscore"
 - Number of table columns: no more than 1024, at least 2, and the first column must be a timestamp (since version 2.1.7.0, up to 4096 columns are supported)
-- Maximum record length: including the 8-byte timestamp, must not exceed 16KB (each BINARY/NCHAR column takes up an extra 2 bytes of storage)
+- Maximum record length: including the 8-byte timestamp, must not exceed 48KB (each BINARY/NCHAR column takes up an extra 2 bytes of storage)
-- Default maximum string length of a single SQL statement: 1048576 byte; configurable via the system parameter maxSQLLength, range 65480 ~ 1048576 byte
+- Default maximum string length of a single SQL statement: 1048576 bytes; configurable via the system parameter maxSQLLength, range 65480 ~ 1048576 bytes
 - Number of database replicas: no more than 3
-- Username: no more than 23 byte
+- Username: no more than 23 bytes
-- User password: no more than 15 byte
+- User password: no more than 15 bytes
 - Number of tags: no more than 128, may be 0
-- Total length of tags: no more than 16K byte
+- Total length of tags: no more than 16KB
 - Number of records: limited only by storage space
 - Number of tables: limited only by the number of nodes
 - Number of databases: limited only by the number of nodes
@@ -14,7 +14,6 @@ import NodeInfluxLine from "../../07-develop/03-insert-data/_js_line.mdx";
 import NodeOpenTSDBTelnet from "../../07-develop/03-insert-data/_js_opts_telnet.mdx";
 import NodeOpenTSDBJson from "../../07-develop/03-insert-data/_js_opts_json.mdx";
 import NodeQuery from "../../07-develop/04-query-data/_js.mdx";
-import NodeAsyncQuery from "../../07-develop/04-query-data/_js_async.mdx";

 `td2.0-connector` and `td2.0-rest-connector` are TDengine's official Node.js connectors. Node.js developers can use them to develop applications that access data in TDengine clusters.
@@ -189,14 +188,8 @@ let cursor = conn.cursor();

 ### Query Data

-#### Synchronous Queries
-
 <NodeQuery />

-#### Asynchronous Queries
-
-<NodeAsyncQuery />
-
 ## More Sample Programs

 | Sample Program | Description |
@@ -82,7 +82,7 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000

 :::tip
 All schemaless processing logic still follows TDengine's underlying restrictions on data structures; for example, the total length of each row of data cannot exceed
-16k bytes. For the specific constraints in this area, see [TAOS SQL Boundary Limits](/taos-sql/limit)
+48KB. For the specific constraints in this area, see [TAOS SQL Boundary Limits](/taos-sql/limit)

 :::
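For reference, the same row limit applies when writing through the schemaless C API; a minimal sketch (connection details are placeholders, and it assumes a client version that ships `taos_schemaless_insert`):

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 6030);
  if (taos == NULL) return 1;
  // One InfluxDB line-protocol row (nanosecond timestamp); each row must
  // stay within the same row-length limit as SQL inserts.
  char *lines[] = {
      "st,t1=3,t2=4 c1=3i64,c2=false,c3=\"passit\" 1626006833639000000"};
  TAOS_RES *res = taos_schemaless_insert(taos, lines, 1,
                                         TSDB_SML_LINE_PROTOCOL,
                                         TSDB_SML_TIMESTAMP_NANO_SECONDS);
  if (taos_errno(res) != 0) {
    fprintf(stderr, "schemaless insert failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(taos);
  return 0;
}
```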
@@ -52,7 +52,7 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).

 :::info

-- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 16K bytes and each SQL statement can't exceed 1MB.
+- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 48K bytes and each SQL statement can't exceed 1MB.
 - Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number.

 :::
@@ -14,7 +14,7 @@ CREATE TABLE [IF NOT EXISTS] tb_name (timestamp_field_name TIMESTAMP, field1_nam

 1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
 2. The maximum length of the table name is 192 bytes.
-3. The maximum length of each row is 16k bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
+3. The maximum length of each row is 48k bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
 4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
 5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
 6. Escape character "\`" can be used to avoid the conflict between table names and reserved keywords, above rules will be bypassed when using escape character on table names, but the upper limit for the name length is still valid. The table names specified using escape character are case sensitive. Only ASCII visible characters can be used with escape character.
@@ -179,9 +179,9 @@ namespace TDengineExample

 1. "Unable to establish connection", "Unable to resolve FQDN"

-Usually, it cause by the FQDN configuration is incorrect, you can refer to [How to understand TDengine's FQDN (Chinese)](https://www.taosdata.com/blog/2021/07/29/2741.html) to solve it. 2.
+Usually, this is caused by an incorrect FQDN configuration. You can refer to [How to understand TDengine's FQDN (Chinese)](https://www.taosdata.com/blog/2021/07/29/2741.html) to solve it.

-Unhandled exception. System.DllNotFoundException: Unable to load DLL 'taos' or one of its dependencies: The specified module cannot be found.
+2. Unhandled exception. System.DllNotFoundException: Unable to load DLL 'taos' or one of its dependencies: The specified module cannot be found.

 This is usually because the program did not find the dependent client driver. The solution is to copy `C:\TDengine\driver\taos.dll` to the `C:\Windows\System32\` directory on Windows, or to create the following soft link on Linux: `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`.
@@ -15,9 +15,9 @@ import GoOpenTSDBTelnet from "../../07-develop/03-insert-data/_go_opts_telnet.md
 import GoOpenTSDBJson from "../../07-develop/03-insert-data/_go_opts_json.mdx"
 import GoQuery from "../../07-develop/04-query-data/_go.mdx"

-`driver-go` is the official Go language connector for TDengine, which implements the interface to the Go language [database/sql](https://golang.org/pkg/database/sql/) package. Go developers can use it to develop applications that access TDengine cluster data.
+`driver-go` is the official Go language connector for TDengine. It implements the [database/sql](https://golang.org/pkg/database/sql/) package, the generic Go language interface to SQL databases. Go developers can use it to develop applications that access TDengine cluster data.

-`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. The other is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from the native connection.
+`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. The other is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from those implemented by the native connection.

 This article describes how to install `driver-go` and connect to TDengine clusters and perform basic operations such as data query and data writing through `driver-go`.
@@ -213,7 +213,7 @@ func main() {

 Since the REST interface is stateless, the `use db` syntax will not work. You need to put the db name into the SQL command, e.g. change `create table if not exists tb1 (ts timestamp, a int)` to `create table if not exists test.tb1 (ts timestamp, a int)`; otherwise it will report the error `[0x217] Database not specified or available`.

-You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. This method is supported by taosAdapter in TDengine 2.4.0.5. is supported since TDengine 2.4.0.5. Executing the `create database` statement when the specified db does not exist will not report an error while executing other queries or writing against that db will report an error.
+You can also put the db name in the DSN by changing `root:taosdata@http(localhost:6041)/` to `root:taosdata@http(localhost:6041)/test`. This method is supported by taosAdapter since TDengine 2.4.0.5. Executing the `create database` statement when the specified db does not exist will not report an error, while executing other queries or writes against that db will report an error.

 The complete example is as follows.
@@ -289,7 +289,7 @@ func main() {

 6. `readBufferSize` parameter has no significant effect after being increased

-If you increase `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is smaller, modifying this parameter will not improve significantly. If you increase the parameter value too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value according to the actual situation to achieve the best query result.
+Increasing `readBufferSize` will reduce the number of `syscall` calls when fetching results. If the query result is small, modifying this parameter will not improve performance significantly. If you increase the parameter value too much, the bottleneck will be parsing JSON data. If you need to optimize the query speed, you must adjust the value based on the actual situation to achieve the best query performance.

 7. Query efficiency is reduced when the `disableCompression` parameter is set to `false`
@@ -9,19 +9,19 @@ description: TDengine Java based on JDBC API and provide both native and REST co

 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';

-'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. And the other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). REST connections implement has a slight differences to compare the set of features implemented and native connections.
+'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. And the other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). The implementations of the REST connection and the native connection have slight differences in features.

 

 The preceding diagram shows two ways for a Java app to access TDengine via connector:

 - JDBC native connection: Java applications use TSDBDriver on physical node 1 (pnode1) to call the client driver (`libtaos.so` or `taos.dll`) APIs directly, sending writing and query requests to taosd instances located on physical node 2 (pnode2).
-- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver, sends it to the REST server of physical node 2 (taosAdapter), requests TDengine server through the REST server, and returns the result.
+- JDBC REST connection: The Java application encapsulates the SQL as a REST request via RestfulDriver, sends it to the REST server (taosAdapter) on physical node 2. taosAdapter forwards the request to TDengine server and returns the result.

-Using REST connection, which does not rely on TDengine client drivers.It can be cross-platform more convenient and flexible but introduce about 30% lower performance than native connection.
+The REST connection, which does not rely on TDengine client drivers, is more convenient and flexible, in addition to being cross-platform. However, the performance is about 30% lower than that of the native connection.

 :::info
-TDengine's JDBC driver implementation is as consistent as possible with the relational database driver. Still, there are differences in the use scenarios and technical characteristics of TDengine and relational object databases, so 'taos-jdbcdriver' also has some differences from traditional JDBC drivers. You need to pay attention to the following points when using:
+TDengine's JDBC driver implementation is as consistent as possible with the relational database driver. Still, there are differences in the use scenarios and technical characteristics of TDengine and relational object databases. So 'taos-jdbcdriver' also has some differences from traditional JDBC drivers. It is important to keep the following points in mind:

 - TDengine does not currently support delete operations for individual data records.
 - Transactional operations are not currently supported.
@@ -88,7 +88,7 @@ Add following dependency in the `pom.xml` file of your Maven project:

 </TabItem>
 <TabItem value="source" label="Build from source code">

-You can build Java connector from source code after clone TDengine project:
+You can build the Java connector from source code after cloning the TDengine project:

 ```shell
 git clone https://github.com/taosdata/TDengine.git
@@ -96,7 +96,7 @@ cd TDengine/src/connector/jdbc

 mvn clean install -Dmaven.test.skip=true
 ```

-After compilation, a jar package of taos-jdbcdriver-2.0.XX-dist .jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.
+After compilation, a jar package named taos-jdbcdriver-2.0.XX-dist.jar is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.

 </TabItem>
 </Tabs>
@@ -186,7 +186,7 @@ Connection conn = DriverManager.getConnection(jdbcUrl);

 In the above example, a RestfulDriver with a JDBC REST connection is used to establish a connection to a database named `test` with hostname `taosdemo.com` on port `6041`. The URL specifies the user name as `root` and the password as `taosdata`.

-There is no dependency on the client driver when Using a JDBC REST connection. Compared to a JDBC native connection, only the following are required: 1.
+There is no dependency on the client driver when using a JDBC REST connection. Compared to a JDBC native connection, only the following are required:

 1. driverClass specified as "com.taosdata.jdbc.rs.RestfulDriver".
 2. jdbcUrl starting with "jdbc:TAOS-RS://".
@@ -209,7 +209,7 @@ The configuration parameters in the URL are as follows.

 INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFrancisco') VALUES(now, 24.6);
 ```

-- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections will use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then the SQL can be executed: insert into t1 using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);
+- Starting from taos-jdbcdriver-2.0.36 and TDengine 2.2.0.0, if dbname is specified in the URL, JDBC REST connections will use `/rest/sql/dbname` as the URL for REST requests by default, and there is no need to specify dbname in SQL. For example, if the URL is `jdbc:TAOS-RS://127.0.0.1:6041/test`, then the SQL can be executed: insert into test using weather(ts, temperature) tags('California.SanFrancisco') values(now, 24.6);

 :::
@@ -271,7 +271,7 @@ If the configuration parameters are duplicated in the URL, Properties, or client

 2. Properties connProps
 3. the configuration file taos.cfg of the TDengine client driver when using a native connection

-For example, if you specify the password as `taosdata` in the URL and specify the password as `taosdemo` in the Properties simultaneously. In this case, JDBC will use the password in the URL to establish the connection.
+For example, if you specify the password as `taosdata` in the URL and specify the password as `taosdemo` in the Properties simultaneously, JDBC will use the password in the URL to establish the connection.

 ## Usage examples
@@ -323,7 +323,7 @@ while(resultSet.next()){

 }
 ```

-> The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, starting from 1, it is recommended to use the field names to get them.
+> The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, you have to start from 1. However, we recommend using the field names to get the values of the fields in the result set.

 ### Handling exceptions
@@ -623,7 +623,7 @@ public void setNString(int columnIndex, ArrayList<String> list, int size) throws

 ### Schemaless Writing

-Starting with version 2.2.0.0, TDengine has added the ability to schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.
+Starting with version 2.2.0.0, TDengine has added the ability to perform schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. See [schemaless writing](/reference/schemaless/) for details.

 **Note**:
@@ -666,16 +666,16 @@ The TDengine Java Connector supports subscription functionality with the followi

 #### Create subscriptions

 ```java
-TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topic", "select * from meters", false);
+TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topicname", "select * from meters", false);
 ```

 The three parameters of the `subscribe()` method have the following meanings.

-- topic: the subscribed topic (i.e., name). This parameter is the unique identifier of the subscription
+- topicname: the name of the subscribed topic. This parameter is the unique identifier of the subscription.
-- sql: the query statement of the subscription, this statement can only be `select` statement, only the original data should be queried, and you can query only the data in the positive time order
+- sql: the query statement of the subscription. This statement can only be a `select` statement. Only original data can be queried, and you can query the data only in temporal order.
 - restart: if the subscription already exists, whether to restart or continue the previous subscription

-The above example will use the SQL command `select * from meters` to create a subscription named `topic`. If the subscription exists, it will continue the progress of the previous query instead of consuming all the data from the beginning.
+The above example will use the SQL command `select * from meters` to create a subscription named `topicname`. If the subscription exists, it will continue the progress of the previous query instead of consuming all the data from the beginning.

 #### Subscribe to consume data
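The same three concepts (topic name, query, restart flag) also appear in the C client's subscription API, which the JDBC driver wraps. A hedged C sketch, assuming a running server and the 2.x client; all names are placeholders:

```c
#include <stdio.h>
#include <taos.h>

int main(void) {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 6030);
  if (taos == NULL) return 1;
  // restart = 0 continues a previous subscription's progress, mirroring
  // the `restart` parameter of the JDBC subscribe() described above.
  TAOS_SUB *sub = taos_subscribe(taos, 0, "topicname",
                                 "select * from meters", NULL, NULL, 1000);
  if (sub != NULL) {
    TAOS_RES *res = taos_consume(sub); // poll once for newly arrived rows
    TAOS_ROW row;
    while (res != NULL && (row = taos_fetch_row(res)) != NULL) {
      printf("got a row\n"); // process the row here
    }
    taos_unsubscribe(sub, 1); // keepProgress = 1: resume later from here
  }
  taos_close(taos);
  return 0;
}
```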
@@ -14,7 +14,6 @@ import NodeInfluxLine from "../../07-develop/03-insert-data/_js_line.mdx";
 import NodeOpenTSDBTelnet from "../../07-develop/03-insert-data/_js_opts_telnet.mdx";
 import NodeOpenTSDBJson from "../../07-develop/03-insert-data/_js_opts_json.mdx";
 import NodeQuery from "../../07-develop/04-query-data/_js.mdx";
-import NodeAsyncQuery from "../../07-develop/04-query-data/_js_async.mdx";

 `td2.0-connector` and `td2.0-rest-connector` are the official Node.js language connectors for TDengine. Node.js developers can develop applications to access TDengine instance data.
@@ -189,14 +188,8 @@ let cursor = conn.cursor();

 ### Query data

-#### Synchronous queries
-
 <NodeQuery />

-#### asynchronous query
-
-<NodeAsyncQuery />
-
 ## More Sample Programs

 | Sample Programs | Sample Program Description |
@@ -11,18 +11,18 @@ import TabItem from "@theme/TabItem";

 `taospy` is the official Python connector for TDengine. `taospy` provides a rich set of APIs that makes it easy for Python applications to access TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
 In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conform to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).

-The connection to the server directly using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".
+The direct connection to the server using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".

 The source code for the Python connector is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).

 ## Supported Platforms

-- The native connection [supported platforms](/reference/connector/#supported-platforms) is the same as the one supported by the TDengine client.
+- The [supported platforms](/reference/connector/#supported-platforms) for the native connection are the same as the ones supported by the TDengine client.
 - REST connections are supported on all platforms that can run Python.

 ## Version selection

-We recommend using the latest version of `taospy`, regardless what the version of TDengine is.
+We recommend using the latest version of `taospy`, regardless of the version of TDengine.

 ## Supported features
@@ -139,7 +139,7 @@ The FQDN above can be the FQDN of any dnode in the cluster, and the PORT is the

 </TabItem>
 <TabItem value="rest" label="REST connection" groupId="connect">

-For REST connections and making sure the cluster is up, make sure the taosAdapter component is up. This can be tested using the following `curl` command.
+For REST connections, make sure the cluster and the taosAdapter component are running. This can be tested using the following `curl` command.

 ```
 curl -u root:taosdata http://<FQDN>:<PORT>/rest/sql -d "select server_version()"
@@ -312,7 +312,7 @@ For a more detailed description of the `sql()` method, please refer to [RestClie

 ### Exception handling

-All database operations will be thrown directly if an exception occurs. The application is responsible for exception handling. For example:
+All errors from database operations are thrown directly as exceptions and the error message from the database is passed up the exception stack. The application is responsible for exception handling. For example:

 ```python
 {{#include docs-examples/python/handle_exception.py}}
@@ -30,7 +30,7 @@ REST connections are supported on all platforms that can run Rust.

 Please refer to [version support list](/reference/connector#version-support).

-The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. Recommend to use TDengine version 2.4 or higher to avoid known issues.
+The Rust Connector is still under rapid development and is not guaranteed to be backward compatible before 1.0. We recommend using TDengine version 2.4 or higher to avoid known issues.

 ## Installation
@@ -206,7 +206,7 @@ let conn: Taos = cfg.connect();

 ### Connection pooling

-In complex applications, recommand to enable connection pool. Connection pool for [libtaos] is implemented using [r2d2].
+In complex applications, we recommend enabling connection pools. The connection pool for [libtaos] is implemented using [r2d2].

 A connection pool with default parameters can be created as follows.
@@ -269,7 +269,7 @@ The [Taos] structure is the connection manager in [libtaos] and provides two mai

 Note that Rust asynchronous functions and an asynchronous runtime are required.

-[Taos] provides partial Rust methodization of SQL to reduce the frequency of `format!` code blocks.
+[Taos] provides a few Rust methods that encapsulate SQL to reduce the frequency of `format!` code blocks.

 - `.describe(table: &str)`: Executes `DESCRIBE` and returns a Rust data structure.
 - `.create_database(database: &str)`: Executes the `CREATE DATABASE` statement.
@@ -279,7 +279,7 @@ In addition, this structure is also the entry point for [Parameter Binding](#Par

 ### Bind Interface

-Similar to the C interface, Rust provides the bind interface's wraping. First, create a bind object [Stmt] for a SQL command from the [Taos] object.
+Similar to the C interface, Rust provides a wrapping of the bind interface. First, create a bind object [Stmt] for a SQL command from the [Taos] object.

 ```rust
 let mut stmt: Stmt = taos.stmt("insert into ? values(?, ?)")?;
@@ -77,7 +77,7 @@ If the subtable obtained by the parse line protocol does not exist, Schemaless c

 :::tip
 All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures, such as the total length of each row of data cannot exceed
-16k bytes. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
+48k bytes. See [TAOS SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
 :::

 ## Time resolution recognition
@@ -55,9 +55,9 @@ int32_t qParseSql(SParseContext* pCxt, SQuery** pQuery);
 bool qIsInsertSql(const char* pStr, size_t length);

 // for async mode
-int32_t qSyntaxParseSql(SParseContext* pCxt, SQuery** pQuery, struct SCatalogReq* pCatalogReq);
+int32_t qParseSqlSyntax(SParseContext* pCxt, SQuery** pQuery, struct SCatalogReq* pCatalogReq);
-int32_t qSemanticAnalysisSql(SParseContext* pCxt, const struct SCatalogReq* pCatalogReq,
+int32_t qAnalyseSqlSemantic(SParseContext* pCxt, const struct SCatalogReq* pCatalogReq,
                             const struct SMetaData* pMetaData, SQuery* pQuery);

 void qDestroyQuery(SQuery* pQueryNode);
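The rename reflects the two-phase flow used in async mode: first a syntax-only parse that also produces a catalog request, then, once the metadata has been fetched, a semantic analysis pass. Below is a toy mock of that control flow; the stub types and functions are invented for illustration and are not the real parser API:

```c
#include <stdio.h>

// Stub stand-ins for SParseContext / SCatalogReq / SMetaData / SQuery.
typedef struct { const char *sql; } Ctx;
typedef struct { int needTableMeta; } CatalogReq;
typedef struct { int tableMetaReady; } MetaData;
typedef struct { int analysed; } Query;

// Phase 1: syntax-only parse; records which metadata will be needed.
static int parseSqlSyntax(Ctx *c, Query *q, CatalogReq *req) {
  (void)c;
  q->analysed = 0;
  req->needTableMeta = 1;
  return 0;
}

// Phase 2: semantic analysis once the catalog data has arrived.
static int analyseSqlSemantic(Ctx *c, const CatalogReq *req,
                              const MetaData *meta, Query *q) {
  (void)c;
  if (req->needTableMeta && !meta->tableMetaReady) return -1;
  q->analysed = 1;
  return 0;
}

int main(void) {
  Ctx c = {"select * from t"};
  Query q; CatalogReq req; MetaData meta = {0};
  parseSqlSyntax(&c, &q, &req);            // async step 1
  meta.tableMetaReady = 1;                 // catalog fetch happens in between
  analyseSqlSemantic(&c, &req, &meta, &q); // async step 2
  printf("analysed=%d\n", q.analysed);
  return 0;
}
```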
@@ -160,7 +160,7 @@ struct SOperatorInfo;
 //struct SOptrBasicInfo;

 typedef int32_t (*__optr_encode_fn_t)(struct SOperatorInfo* pOperator, char** result, int32_t* length);
-typedef int32_t (*__optr_decode_fn_t)(struct SOperatorInfo* pOperator, char* result, int32_t length);
+typedef int32_t (*__optr_decode_fn_t)(struct SOperatorInfo* pOperator, char* result);

 typedef int32_t (*__optr_open_fn_t)(struct SOperatorInfo* pOptr);
 typedef SSDataBlock* (*__optr_fn_t)(struct SOperatorInfo* pOptr);
@@ -821,7 +821,7 @@ int32_t createExecTaskInfoImpl(SSubplan* pPlan, SExecTaskInfo** pTaskInfo, SRead
 int32_t getOperatorExplainExecInfo(SOperatorInfo* operatorInfo, SExplainExecInfo** pRes, int32_t* capacity,
                                    int32_t* resNum);

-int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result, int32_t length);
+int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result);
 int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* length);

 STimeWindow getActiveTimeWindow(SDiskbasedBuf* pBuf, SResultRowInfo* pResultRowInfo, int64_t ts,
@@ -3448,14 +3448,14 @@ static int32_t doOpenAggregateOptr(SOperatorInfo* pOperator) {
   }

 #if 0 // test for encode/decode result info
-  if(pOperator->encodeResultRow){
+  if(pOperator->fpSet.encodeResultRow){
     char *result = NULL;
     int32_t length = 0;
-    SAggSupporter *pSup = &pAggInfo->aggSup;
-    pOperator->encodeResultRow(pOperator, pSup, pInfo, &result, &length);
+    pOperator->fpSet.encodeResultRow(pOperator, &result, &length);
+    SAggSupporter* pSup = &pAggInfo->aggSup;
     taosHashClear(pSup->pResultRowHashTable);
     pInfo->resultRowInfo.size = 0;
-    pOperator->decodeResultRow(pOperator, pSup, pInfo, result, length);
+    pOperator->fpSet.decodeResultRow(pOperator, result);
     if(result){
       taosMemoryFree(result);
     }
@@ -3567,17 +3567,20 @@ int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* len
   return TDB_CODE_SUCCESS;
 }

-int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result, int32_t length) {
-  if(result == NULL || length <= 0){
+int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result) {
+  if(result == NULL){
     return TSDB_CODE_TSC_INVALID_INPUT;
   }
   SOptrBasicInfo* pInfo = (SOptrBasicInfo*)(pOperator->info);
   SAggSupporter* pSup = (SAggSupporter*)POINTER_SHIFT(pOperator->info, sizeof(SOptrBasicInfo));

   // int32_t size = taosHashGetSize(pSup->pResultRowHashTable);
-  int32_t count = *(int32_t*)(result);
+  int32_t length = *(int32_t*)(result);
   int32_t offset = sizeof(int32_t);

+  int32_t count = *(int32_t*)(result + offset);
+  offset += sizeof(int32_t);
+
   while (count-- > 0 && length > offset) {
     int32_t keyLen = *(int32_t*)(result + offset);
     offset += sizeof(int32_t);
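After this change the function no longer takes a separate length argument: the buffer is self-describing, with the total length and the entry count at its head. A standalone sketch of walking that layout, with a simplified entry body (the real payload after each key is omitted; this only illustrates the framing implied by the diff):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Layout as the patched aggDecodeResultRow reads it:
// [int32 total length][int32 entry count][int32 keyLen][key bytes]...
int main(void) {
  char buf[64];
  const char *key = "grp1";
  int32_t keyLen = (int32_t)strlen(key);
  int32_t count = 1;
  int32_t offset = 0;

  int32_t total = (int32_t)(3 * sizeof(int32_t) + keyLen);
  memcpy(buf, &total, sizeof(total));            offset += sizeof(int32_t);
  memcpy(buf + offset, &count, sizeof(count));   offset += sizeof(int32_t);
  memcpy(buf + offset, &keyLen, sizeof(keyLen)); offset += sizeof(int32_t);
  memcpy(buf + offset, key, keyLen);

  // Decode, mirroring the loop in the patched function.
  int32_t length = *(int32_t *)buf;
  offset = sizeof(int32_t);
  int32_t n = *(int32_t *)(buf + offset);
  offset += sizeof(int32_t);
  while (n-- > 0 && length > offset) {
    int32_t kl = *(int32_t *)(buf + offset);
    offset += sizeof(int32_t);
    printf("entry key: %.*s\n", (int)kl, buf + offset);
    offset += kl;
  }
  return 0;
}
```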
@@ -5048,17 +5051,19 @@ int32_t encodeOperator(SOperatorInfo* ops, char** result, int32_t *length){
 int32_t decodeOperator(SOperatorInfo* ops, char* result, int32_t length){
   int32_t code = TDB_CODE_SUCCESS;
   if(ops->fpSet.decodeResultRow){
-    if(result == NULL || length <= 0){
+    if(result == NULL){
       return TSDB_CODE_TSC_INVALID_INPUT;
     }
-    char* data = result + 2 * sizeof(int32_t);
-    int32_t dataLength = *(int32_t*)(result + sizeof(int32_t));
-    code = ops->fpSet.decodeResultRow(ops, data, dataLength - sizeof(int32_t));
+    ASSERT(length == *(int32_t*)result);
+    char* data = result + sizeof(int32_t);
+    code = ops->fpSet.decodeResultRow(ops, data);
     if(code != TDB_CODE_SUCCESS){
       return code;
     }

     int32_t totalLength = *(int32_t*)result;
+    int32_t dataLength = *(int32_t*)data;

     if(totalLength == dataLength + sizeof(int32_t)) { // the last data
       result = NULL;
       length = 0;
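Correspondingly, `decodeOperator` now treats the buffer as self-describing: the first int32 is the total length it asserts against, and the block that follows starts with its own data length. A minimal sketch of that framing check (the payload size is made up for illustration):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Framing as the patched decodeOperator reads it:
// result = [int32 totalLength][int32 dataLength][payload...]
// When totalLength == dataLength + sizeof(int32_t), the block is the last one.
int main(void) {
  char buf[32] = {0};
  int32_t dataLength = 8; // one 8-byte payload (assumption for the sketch)
  int32_t totalLength = dataLength + (int32_t)sizeof(int32_t);
  memcpy(buf, &totalLength, sizeof(totalLength));
  memcpy(buf + sizeof(int32_t), &dataLength, sizeof(dataLength));

  char *data = buf + sizeof(int32_t); // as in the patched code
  int32_t dl = *(int32_t *)data;
  printf("last block? %s\n",
         (*(int32_t *)buf == dl + (int32_t)sizeof(int32_t)) ? "yes" : "no");
  return 0;
}
```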
@@ -318,7 +318,20 @@ static SSDataBlock* hashGroupbyAggregate(SOperatorInfo* pOperator) {
     //     updateNumOfRowsInResultRows(pInfo->binfo.pCtx, pOperator->numOfExprs, &pInfo->binfo.resultRowInfo,
     //                                 pInfo->binfo.rowCellInfoOffset);
     //   }
+#if 0
+    if(pOperator->fpSet.encodeResultRow){
+      char *result = NULL;
+      int32_t length = 0;
+      pOperator->fpSet.encodeResultRow(pOperator, &result, &length);
+      SAggSupporter* pSup = &pInfo->aggSup;
+      taosHashClear(pSup->pResultRowHashTable);
+      pInfo->binfo.resultRowInfo.size = 0;
+      pOperator->fpSet.decodeResultRow(pOperator, result);
+      if(result){
+        taosMemoryFree(result);
+      }
+    }
+#endif
     blockDataEnsureCapacity(pRes, pOperator->resultInfo.capacity);
     initGroupedResultInfo(&pInfo->groupResInfo, pInfo->aggSup.pResultRowHashTable, 0);
@@ -880,14 +880,14 @@ static int32_t doOpenIntervalAgg(SOperatorInfo* pOperator) {
     hashIntervalAgg(pOperator, &pInfo->binfo.resultRowInfo, pBlock, pBlock->info.groupId, NULL);

 #if 0 // test for encode/decode result info
-    if(pOperator->encodeResultRow){
+    if(pOperator->fpSet.encodeResultRow){
       char *result = NULL;
       int32_t length = 0;
       SAggSupporter *pSup = &pInfo->aggSup;
-      pOperator->encodeResultRow(pOperator, pSup, &pInfo->binfo, &result, &length);
+      pOperator->fpSet.encodeResultRow(pOperator, &result, &length);
       taosHashClear(pSup->pResultRowHashTable);
       pInfo->binfo.resultRowInfo.size = 0;
-      pOperator->decodeResultRow(pOperator, pSup, &pInfo->binfo, result, length);
+      pOperator->fpSet.decodeResultRow(pOperator, result);
       if(result){
         taosMemoryFree(result);
       }
@@ -445,6 +445,11 @@ static int32_t translateStateCount(SFunctionNode* pFunc, char* pErrBuf, int32_t
   }

   // param0
+  SNode* pParaNode0 = nodesListGetNode(pFunc->pParameterList, 0);
+  if (QUERY_NODE_COLUMN != nodeType(pParaNode0)) {
+    return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
+                           "The input parameter of STATECOUNT function can only be column");
+  }
   uint8_t colType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
   if (!IS_NUMERIC_TYPE(colType)) {
     return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
@@ -480,6 +485,11 @@ static int32_t translateStateDuration(SFunctionNode* pFunc, char* pErrBuf, int32
   }

   // param0
+  SNode* pParaNode0 = nodesListGetNode(pFunc->pParameterList, 0);
+  if (QUERY_NODE_COLUMN != nodeType(pParaNode0)) {
+    return buildFuncErrMsg(pErrBuf, len, TSDB_CODE_FUNC_FUNTION_ERROR,
+                           "The input parameter of STATEDURATION function can only be column");
+  }
   uint8_t colType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
   if (!IS_NUMERIC_TYPE(colType)) {
     return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
@@ -1181,7 +1191,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .finalizeFunc = functionFinalize
   },
   {
-    .name = "state_count",
+    .name = "statecount",
     .type = FUNCTION_TYPE_STATE_COUNT,
     .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC,
     .translateFunc = translateStateCount,
@@ -1191,7 +1201,7 @@ const SBuiltinFuncDefinition funcMgtBuiltins[] = {
     .finalizeFunc = NULL
   },
   {
-    .name = "state_duration",
+    .name = "stateduration",
     .type = FUNCTION_TYPE_STATE_DURATION,
     .classification = FUNC_MGT_INDEFINITE_ROWS_FUNC | FUNC_MGT_TIMELINE_FUNC,
     .translateFunc = translateStateDuration,
@@ -3776,6 +3776,7 @@ static void tailAssignResult(STailItem* pItem, char *data, int32_t colBytes, TSK
   if (isNull) {
     pItem->isNull = true;
   } else {
+    pItem->isNull = false;
     memcpy(pItem->data, data, colBytes);
   }
 }
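The added `pItem->isNull = false;` matters because result items are reused across rows: without clearing the flag, a slot that once held a NULL keeps reporting NULL even after a real value is copied in. A standalone illustration with a simplified stand-in struct (not the real `STailItem`):

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct { bool isNull; char data[8]; } Item; // simplified stand-in

static void assign(Item *it, const char *src, bool isNull) {
  if (isNull) {
    it->isNull = true;
  } else {
    it->isNull = false;           // the added line: clear the stale flag
    memcpy(it->data, src, 8);
  }
}

int main(void) {
  Item it = {0};
  assign(&it, NULL, true);          // slot holds a NULL first
  assign(&it, "12345678", false);   // reused for a real value
  printf("isNull=%d\n", it.isNull); // 0 with the fix; 1 without it
  return 0;
}
```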
@@ -24,6 +24,7 @@ extern "C" {
 #include "parUtil.h"
 #include "parser.h"

+int32_t parseInsertSyntax(SParseContext* pContext, SQuery** pQuery);
 int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery);
 int32_t parse(SParseContext* pParseCxt, SQuery** pQuery);
 int32_t collectMetaKey(SParseContext* pParseCxt, SQuery* pQuery);
@@ -65,12 +65,15 @@ int32_t trimString(const char* src, int32_t len, char* dst, int32_t dlen);
 int32_t buildCatalogReq(const SParseMetaCache* pMetaCache, SCatalogReq* pCatalogReq);
 int32_t putMetaDataToCache(const SCatalogReq* pCatalogReq, const SMetaData* pMetaData, SParseMetaCache* pMetaCache);
 int32_t reserveTableMetaInCache(int32_t acctId, const char* pDb, const char* pTable, SParseMetaCache* pMetaCache);
+int32_t reserveTableMetaInCacheExt(const SName* pName, SParseMetaCache* pMetaCache);
 int32_t reserveDbVgInfoInCache(int32_t acctId, const char* pDb, SParseMetaCache* pMetaCache);
 int32_t reserveTableVgroupInCache(int32_t acctId, const char* pDb, const char* pTable, SParseMetaCache* pMetaCache);
+int32_t reserveTableVgroupInCacheExt(const SName* pName, SParseMetaCache* pMetaCache);
 int32_t reserveDbVgVersionInCache(int32_t acctId, const char* pDb, SParseMetaCache* pMetaCache);
 int32_t reserveDbCfgInCache(int32_t acctId, const char* pDb, SParseMetaCache* pMetaCache);
 int32_t reserveUserAuthInCache(int32_t acctId, const char* pUser, const char* pDb, AUTH_TYPE type,
                                SParseMetaCache* pMetaCache);
+int32_t reserveUserAuthInCacheExt(const char* pUser, const SName* pName, AUTH_TYPE type, SParseMetaCache* pMetaCache);
 int32_t reserveUdfInCache(const char* pFunc, SParseMetaCache* pMetaCache);
 int32_t getTableMetaFromCache(SParseMetaCache* pMetaCache, const SName* pName, STableMeta** pMeta);
 int32_t getDbVgInfoFromCache(SParseMetaCache* pMetaCache, const char* pDbFName, SArray** pVgInfo);
@@ -78,7 +81,7 @@ int32_t getTableVgroupFromCache(SParseMetaCache* pMetaCache, const SName* pName,
 int32_t getDbVgVersionFromCache(SParseMetaCache* pMetaCache, const char* pDbFName, int32_t* pVersion, int64_t* pDbId,
                                 int32_t* pTableNum);
 int32_t getDbCfgFromCache(SParseMetaCache* pMetaCache, const char* pDbFName, SDbCfgInfo* pInfo);
-int32_t getUserAuthFromCache(SParseMetaCache* pMetaCache, const char* pUser, const char* pDb, AUTH_TYPE type,
+int32_t getUserAuthFromCache(SParseMetaCache* pMetaCache, const char* pUser, const char* pDbFName, AUTH_TYPE type,
                              bool* pPass);
 int32_t getUdfInfoFromCache(SParseMetaCache* pMetaCache, const char* pFunc, SFuncInfo* pInfo);
@@ -333,68 +333,22 @@ static int32_t collectMetaKeyFromQuery(SCollectMetaKeyCxt* pCxt, SNode* pStmt) {
       return collectMetaKeyFromSetOperator(pCxt, (SSetOperator*)pStmt);
     case QUERY_NODE_SELECT_STMT:
       return collectMetaKeyFromSelect(pCxt, (SSelectStmt*)pStmt);
-    case QUERY_NODE_VNODE_MODIF_STMT:
-    case QUERY_NODE_CREATE_DATABASE_STMT:
-    case QUERY_NODE_DROP_DATABASE_STMT:
-    case QUERY_NODE_ALTER_DATABASE_STMT:
-      break;
     case QUERY_NODE_CREATE_TABLE_STMT:
       return collectMetaKeyFromCreateTable(pCxt, (SCreateTableStmt*)pStmt);
-    case QUERY_NODE_CREATE_SUBTABLE_CLAUSE:
-      break;
     case QUERY_NODE_CREATE_MULTI_TABLE_STMT:
       return collectMetaKeyFromCreateMultiTable(pCxt, (SCreateMultiTableStmt*)pStmt);
-    case QUERY_NODE_DROP_TABLE_CLAUSE:
-    case QUERY_NODE_DROP_TABLE_STMT:
-    case QUERY_NODE_DROP_SUPER_TABLE_STMT:
-      break;
     case QUERY_NODE_ALTER_TABLE_STMT:
       return collectMetaKeyFromAlterTable(pCxt, (SAlterTableStmt*)pStmt);
-    case QUERY_NODE_CREATE_USER_STMT:
-    case QUERY_NODE_ALTER_USER_STMT:
-    case QUERY_NODE_DROP_USER_STMT:
-      break;
     case QUERY_NODE_USE_DATABASE_STMT:
       return collectMetaKeyFromUseDatabase(pCxt, (SUseDatabaseStmt*)pStmt);
-    case QUERY_NODE_CREATE_DNODE_STMT:
-    case QUERY_NODE_DROP_DNODE_STMT:
-    case QUERY_NODE_ALTER_DNODE_STMT:
-      break;
     case QUERY_NODE_CREATE_INDEX_STMT:
       return collectMetaKeyFromCreateIndex(pCxt, (SCreateIndexStmt*)pStmt);
-    case QUERY_NODE_DROP_INDEX_STMT:
-    case QUERY_NODE_CREATE_QNODE_STMT:
-    case QUERY_NODE_DROP_QNODE_STMT:
-    case QUERY_NODE_CREATE_BNODE_STMT:
-    case QUERY_NODE_DROP_BNODE_STMT:
-    case QUERY_NODE_CREATE_SNODE_STMT:
-    case QUERY_NODE_DROP_SNODE_STMT:
-    case QUERY_NODE_CREATE_MNODE_STMT:
-    case QUERY_NODE_DROP_MNODE_STMT:
-      break;
     case QUERY_NODE_CREATE_TOPIC_STMT:
       return collectMetaKeyFromCreateTopic(pCxt, (SCreateTopicStmt*)pStmt);
-    case QUERY_NODE_DROP_TOPIC_STMT:
-    case QUERY_NODE_DROP_CGROUP_STMT:
-    case QUERY_NODE_ALTER_LOCAL_STMT:
-      break;
     case QUERY_NODE_EXPLAIN_STMT:
       return collectMetaKeyFromExplain(pCxt, (SExplainStmt*)pStmt);
-    case QUERY_NODE_DESCRIBE_STMT:
-    case QUERY_NODE_RESET_QUERY_CACHE_STMT:
-    case QUERY_NODE_COMPACT_STMT:
-    case QUERY_NODE_CREATE_FUNCTION_STMT:
-    case QUERY_NODE_DROP_FUNCTION_STMT:
-      break;
     case QUERY_NODE_CREATE_STREAM_STMT:
       return collectMetaKeyFromCreateStream(pCxt, (SCreateStreamStmt*)pStmt);
-    case QUERY_NODE_DROP_STREAM_STMT:
-    case QUERY_NODE_MERGE_VGROUP_STMT:
-    case QUERY_NODE_REDISTRIBUTE_VGROUP_STMT:
-    case QUERY_NODE_SPLIT_VGROUP_STMT:
-    case QUERY_NODE_SYNCDB_STMT:
-    case QUERY_NODE_GRANT_STMT:
-    case QUERY_NODE_REVOKE_STMT:
     case QUERY_NODE_SHOW_DNODES_STMT:
       return collectMetaKeyFromShowDnodes(pCxt, (SShowStmt*)pStmt);
     case QUERY_NODE_SHOW_MNODES_STMT:
@@ -407,8 +361,6 @@ static int32_t collectMetaKeyFromQuery(SCollectMetaKeyCxt* pCxt, SNode* pStmt) {
       return collectMetaKeyFromShowSnodes(pCxt, (SShowStmt*)pStmt);
     case QUERY_NODE_SHOW_BNODES_STMT:
       return collectMetaKeyFromShowBnodes(pCxt, (SShowStmt*)pStmt);
-    case QUERY_NODE_SHOW_CLUSTER_STMT:
-      break;
     case QUERY_NODE_SHOW_DATABASES_STMT:
       return collectMetaKeyFromShowDatabases(pCxt, (SShowStmt*)pStmt);
     case QUERY_NODE_SHOW_FUNCTIONS_STMT:
@@ -429,25 +381,8 @@ static int32_t collectMetaKeyFromQuery(SCollectMetaKeyCxt* pCxt, SNode* pStmt) {
       return collectMetaKeyFromShowVgroups(pCxt, (SShowStmt*)pStmt);
     case QUERY_NODE_SHOW_TOPICS_STMT:
       return collectMetaKeyFromShowTopics(pCxt, (SShowStmt*)pStmt);
-    case QUERY_NODE_SHOW_CONSUMERS_STMT:
-    case QUERY_NODE_SHOW_SUBSCRIBES_STMT:
-    case QUERY_NODE_SHOW_SMAS_STMT:
-    case QUERY_NODE_SHOW_CONFIGS_STMT:
-    case QUERY_NODE_SHOW_CONNECTIONS_STMT:
-    case QUERY_NODE_SHOW_QUERIES_STMT:
-    case QUERY_NODE_SHOW_VNODES_STMT:
-    case QUERY_NODE_SHOW_APPS_STMT:
-    case QUERY_NODE_SHOW_SCORES_STMT:
-    case QUERY_NODE_SHOW_VARIABLE_STMT:
-    case QUERY_NODE_SHOW_CREATE_DATABASE_STMT:
-    case QUERY_NODE_SHOW_CREATE_TABLE_STMT:
-    case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
-      break;
     case QUERY_NODE_SHOW_TRANSACTIONS_STMT:
       return collectMetaKeyFromShowTransactions(pCxt, (SShowStmt*)pStmt);
-    case QUERY_NODE_KILL_CONNECTION_STMT:
-    case QUERY_NODE_KILL_QUERY_STMT:
-    case QUERY_NODE_KILL_TRANSACTION_STMT:
     default:
       break;
   }
@@ -64,6 +64,7 @@ typedef struct SInsertParseContext {
   int32_t            totalNum;
   SVnodeModifOpStmt* pOutput;
   SStmtCallback*     pStmtCb;
+  SParseMetaCache*   pMetaCache;
 } SInsertParseContext;

 typedef int32_t (*_row_append_fn_t)(SMsgBuf* pMsgBuf, const void* value, int32_t len, void* param);

@@ -92,15 +93,15 @@ typedef struct SMemParam {
   } \
 } while (0)

-static int32_t skipInsertInto(SInsertParseContext* pCxt) {
+static int32_t skipInsertInto(char** pSql, SMsgBuf* pMsg) {
   SToken sToken;
-  NEXT_TOKEN(pCxt->pSql, sToken);
+  NEXT_TOKEN(*pSql, sToken);
   if (TK_INSERT != sToken.type) {
-    return buildSyntaxErrMsg(&pCxt->msg, "keyword INSERT is expected", sToken.z);
+    return buildSyntaxErrMsg(pMsg, "keyword INSERT is expected", sToken.z);
   }
-  NEXT_TOKEN(pCxt->pSql, sToken);
+  NEXT_TOKEN(*pSql, sToken);
   if (TK_INTO != sToken.type) {
-    return buildSyntaxErrMsg(&pCxt->msg, "keyword INTO is expected", sToken.z);
+    return buildSyntaxErrMsg(pMsg, "keyword INTO is expected", sToken.z);
   }
   return TSDB_CODE_SUCCESS;
 }
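Note: `skipInsertInto` now takes the raw SQL cursor and message buffer directly, so both `parseInsertSql` and the new `parseInsertSyntax` pass (which uses a different context struct) can share it. A minimal sketch of the new calling convention; the local buffers are illustrative and the function is file-static in the parser, so this only compiles inside that translation unit:

    char    errBuf[256] = {0};
    SMsgBuf msg = {.buf = errBuf, .len = sizeof(errBuf)};
    char    sql[] = "INSERT INTO d1001 VALUES (now, 10.3)";
    char*   pSql = sql;
    if (TSDB_CODE_SUCCESS == skipInsertInto(&pSql, &msg)) {
      // pSql now points at the first token after "INSERT INTO" (the table name)
    }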
@@ -212,7 +213,7 @@ static int32_t createSName(SName* pName, SToken* pTableName, int32_t acctId, con
       return buildInvalidOperationMsg(pMsgBuf, msg4);
     }

     char tbname[TSDB_TABLE_FNAME_LEN] = {0};
     strncpy(tbname, p + 1, tbLen);
     /*tbLen = */ strdequote(tbname);
@@ -250,25 +251,46 @@ static int32_t createSName(SName* pName, SToken* pTableName, int32_t acctId, con
   return code;
 }

-static int32_t getTableMetaImpl(SInsertParseContext* pCxt, SName* name, char* dbFname, bool isStb) {
+static int32_t checkAuth(SInsertParseContext* pCxt, char* pDbFname, bool* pPass) {
   SParseContext* pBasicCtx = pCxt->pComCxt;
+  if (NULL != pCxt->pMetaCache) {
+    return getUserAuthFromCache(pCxt->pMetaCache, pBasicCtx->pUser, pDbFname, AUTH_TYPE_WRITE, pPass);
+  }
+  return catalogChkAuth(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, pBasicCtx->pUser, pDbFname,
+                        AUTH_TYPE_WRITE, pPass);
+}
+
+static int32_t getTableSchema(SInsertParseContext* pCxt, SName* pTbName, bool isStb, STableMeta** pTableMeta) {
+  SParseContext* pBasicCtx = pCxt->pComCxt;
+  if (NULL != pCxt->pMetaCache) {
+    return getTableMetaFromCache(pCxt->pMetaCache, pTbName, pTableMeta);
+  }
+  if (isStb) {
+    return catalogGetSTableMeta(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, pTbName,
+                                pTableMeta);
+  }
+  return catalogGetTableMeta(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, pTbName, pTableMeta);
+}
+
+static int32_t getTableVgroup(SInsertParseContext* pCxt, SName* pTbName, SVgroupInfo* pVg) {
+  SParseContext* pBasicCtx = pCxt->pComCxt;
+  if (NULL != pCxt->pMetaCache) {
+    return getTableVgroupFromCache(pCxt->pMetaCache, pTbName, pVg);
+  }
+  return catalogGetTableHashVgroup(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, pTbName, pVg);
+}
+
+static int32_t getTableMetaImpl(SInsertParseContext* pCxt, SName* name, char* dbFname, bool isStb) {
   bool pass = false;
-  CHECK_CODE(catalogChkAuth(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, pBasicCtx->pUser,
-                            dbFname, AUTH_TYPE_WRITE, &pass));
+  CHECK_CODE(checkAuth(pCxt, dbFname, &pass));
   if (!pass) {
     return TSDB_CODE_PAR_PERMISSION_DENIED;
   }
-  if (isStb) {
-    CHECK_CODE(catalogGetSTableMeta(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, name,
-                                    &pCxt->pTableMeta));
-  } else {
-    CHECK_CODE(catalogGetTableMeta(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, name,
-                                   &pCxt->pTableMeta));
-    ASSERT(pCxt->pTableMeta->tableInfo.rowSize > 0);
+  CHECK_CODE(getTableSchema(pCxt, name, isStb, &pCxt->pTableMeta));
+  if (!isStb) {
     SVgroupInfo vg;
-    CHECK_CODE(
-        catalogGetTableHashVgroup(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, name, &vg));
+    CHECK_CODE(getTableVgroup(pCxt, name, &vg));
     CHECK_CODE(taosHashPut(pCxt->pVgroupsHashObj, (const char*)&vg.vgId, sizeof(vg.vgId), (char*)&vg, sizeof(vg)));
   }
   return TSDB_CODE_SUCCESS;
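Note: the three helpers extracted here (checkAuth, getTableSchema, getTableVgroup) share one shape: if `pCxt->pMetaCache` is set (the asynchronous path, where metadata was prefetched), answer from the cache; otherwise fall back to a blocking catalog RPC. A standalone toy version of the pattern, with illustrative types standing in for the parser's:

    #include <stddef.h>

    typedef struct Cache { int value; } Cache;

    static int fetchFromSource(void) { return 42; }  /* stands in for a catalog RPC */

    static int lookup(const Cache* pCache) {
      if (NULL != pCache) {
        return pCache->value;    /* async pass: answer was prefetched into the cache */
      }
      return fetchFromSource();  /* sync pass: blocking round trip to the source */
    }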
@@ -777,7 +799,7 @@ static int32_t KvRowAppend(SMsgBuf* pMsgBuf, const void* value, int32_t len, voi
       if (errno == E2BIG) {
         return generateSyntaxErrMsg(pMsgBuf, TSDB_CODE_PAR_VALUE_TOO_LONG, pa->schema->name);
       }

       char buf[512] = {0};
       snprintf(buf, tListLen(buf), " taosMbsToUcs4 error:%s", strerror(errno));
       return buildSyntaxErrMsg(pMsgBuf, buf, value);
@@ -857,10 +879,8 @@ static int32_t cloneTableMeta(STableMeta* pSrc, STableMeta** pDst) {

 static int32_t storeTableMeta(SInsertParseContext* pCxt, SHashObj* pHash, SName* pTableName, const char* pName,
                               int32_t len, STableMeta* pMeta) {
   SVgroupInfo vg;
-  SParseContext* pBasicCtx = pCxt->pComCxt;
-  CHECK_CODE(
-      catalogGetTableHashVgroup(pBasicCtx->pCatalog, pBasicCtx->pTransporter, &pBasicCtx->mgmtEpSet, pTableName, &vg));
+  CHECK_CODE(getTableVgroup(pCxt, pTableName, &vg));
   CHECK_CODE(taosHashPut(pCxt->pVgroupsHashObj, (const char*)&vg.vgId, sizeof(vg.vgId), (char*)&vg, sizeof(vg)));

   pMeta->uid = 0;
@@ -1082,9 +1102,9 @@ static void destroyInsertParseContext(SInsertParseContext* pCxt) {
 //   VALUES (field1_value, ...) [(field1_value2, ...) ...] | FILE csv_file_path
 //   [...];
 static int32_t parseInsertBody(SInsertParseContext* pCxt) {
   int32_t tbNum = 0;
   char    tbFName[TSDB_TABLE_FNAME_LEN];
   bool    autoCreateTbl = false;

   // for each table
   while (1) {
@@ -1186,8 +1206,8 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt) {
       return TSDB_CODE_TSC_OUT_OF_MEMORY;
     }
     memcpy(tags, &pCxt->tags, sizeof(pCxt->tags));
-    (*pCxt->pStmtCb->setInfoFn)(pCxt->pStmtCb->pStmt, pCxt->pTableMeta, tags, tbFName, autoCreateTbl, pCxt->pVgroupsHashObj,
-                                pCxt->pTableBlockHashObj);
+    (*pCxt->pStmtCb->setInfoFn)(pCxt->pStmtCb->pStmt, pCxt->pTableMeta, tags, tbFName, autoCreateTbl,
+                                pCxt->pVgroupsHashObj, pCxt->pTableBlockHashObj);

     memset(&pCxt->tags, 0, sizeof(pCxt->tags));
     pCxt->pVgroupsHashObj = NULL;
@@ -1245,12 +1265,11 @@ int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery) {
     if (NULL == *pQuery) {
       return TSDB_CODE_OUT_OF_MEMORY;
     }
-    (*pQuery)->execMode = QUERY_EXEC_MODE_SCHEDULE;
-    (*pQuery)->haveResultSet = false;
-    (*pQuery)->msgType = TDMT_VND_SUBMIT;
-    (*pQuery)->pRoot = (SNode*)context.pOutput;
   }
+  (*pQuery)->execMode = QUERY_EXEC_MODE_SCHEDULE;
+  (*pQuery)->haveResultSet = false;
+  (*pQuery)->msgType = TDMT_VND_SUBMIT;
+  (*pQuery)->pRoot = (SNode*)context.pOutput;

   if (NULL == (*pQuery)->pTableList) {
     (*pQuery)->pTableList = taosArrayInit(taosHashGetSize(context.pTableNameHashObj), sizeof(SName));
@@ -1261,7 +1280,7 @@ int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery) {

   context.pOutput->payloadType = PAYLOAD_TYPE_KV;

-  int32_t code = skipInsertInto(&context);
+  int32_t code = skipInsertInto(&context.pSql, &context.msg);
   if (TSDB_CODE_SUCCESS == code) {
     code = parseInsertBody(&context);
   }
@@ -1276,6 +1295,171 @@ int32_t parseInsertSql(SParseContext* pContext, SQuery** pQuery) {
   return code;
 }

+typedef struct SInsertParseSyntaxCxt {
+  SParseContext*   pComCxt;
+  char*            pSql;
+  SMsgBuf          msg;
+  SParseMetaCache* pMetaCache;
+} SInsertParseSyntaxCxt;
+
+static int32_t skipParentheses(SInsertParseSyntaxCxt* pCxt) {
+  SToken sToken;
+  while (1) {
+    NEXT_TOKEN(pCxt->pSql, sToken);
+    if (TK_NK_RP == sToken.type) {
+      break;
+    }
+    if (0 == sToken.n) {
+      return buildSyntaxErrMsg(&pCxt->msg, ") expected", NULL);
+    }
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
+static int32_t skipBoundColumns(SInsertParseSyntaxCxt* pCxt) { return skipParentheses(pCxt); }
+
+// pSql -> (field1_value, ...) [(field1_value2, ...) ...]
+static int32_t skipValuesClause(SInsertParseSyntaxCxt* pCxt) {
+  int32_t numOfRows = 0;
+  SToken  sToken;
+  while (1) {
+    int32_t index = 0;
+    NEXT_TOKEN_KEEP_SQL(pCxt->pSql, sToken, index);
+    if (TK_NK_LP != sToken.type) {
+      break;
+    }
+    pCxt->pSql += index;
+
+    CHECK_CODE(skipParentheses(pCxt));
+    ++numOfRows;
+  }
+  if (0 == numOfRows) {
+    return buildSyntaxErrMsg(&pCxt->msg, "no any data points", NULL);
+  }
+  return TSDB_CODE_SUCCESS;
+}
+
+static int32_t skipTagsClause(SInsertParseSyntaxCxt* pCxt) { return skipParentheses(pCxt); }
+
+// pSql -> [(tag1_name, ...)] TAGS (tag1_value, ...)
+static int32_t skipUsingClause(SInsertParseSyntaxCxt* pCxt) {
+  SToken sToken;
+  NEXT_TOKEN(pCxt->pSql, sToken);
+  if (TK_NK_LP == sToken.type) {
+    CHECK_CODE(skipBoundColumns(pCxt));
+    NEXT_TOKEN(pCxt->pSql, sToken);
+  }
+
+  if (TK_TAGS != sToken.type) {
+    return buildSyntaxErrMsg(&pCxt->msg, "TAGS is expected", sToken.z);
+  }
+  // pSql -> (tag1_value, ...)
+  NEXT_TOKEN(pCxt->pSql, sToken);
+  if (TK_NK_LP != sToken.type) {
+    return buildSyntaxErrMsg(&pCxt->msg, "( is expected", sToken.z);
+  }
+  CHECK_CODE(skipTagsClause(pCxt));
+
+  return TSDB_CODE_SUCCESS;
+}
+
+static int32_t collectTableMetaKey(SInsertParseSyntaxCxt* pCxt, SToken* pTbToken) {
+  SName name;
+  CHECK_CODE(createSName(&name, pTbToken, pCxt->pComCxt->acctId, pCxt->pComCxt->db, &pCxt->msg));
+  CHECK_CODE(reserveUserAuthInCacheExt(pCxt->pComCxt->pUser, &name, AUTH_TYPE_WRITE, pCxt->pMetaCache));
+  CHECK_CODE(reserveTableMetaInCacheExt(&name, pCxt->pMetaCache));
+  CHECK_CODE(reserveTableVgroupInCacheExt(&name, pCxt->pMetaCache));
+  return TSDB_CODE_SUCCESS;
+}
+
+static int32_t parseInsertBodySyntax(SInsertParseSyntaxCxt* pCxt) {
+  bool hasData = false;
+  // for each table
+  while (1) {
+    SToken sToken;
+
+    // pSql -> tb_name ...
+    NEXT_TOKEN(pCxt->pSql, sToken);
+
+    // no data in the sql string anymore.
+    if (sToken.n == 0) {
+      if (sToken.type && pCxt->pSql[0]) {
+        return buildSyntaxErrMsg(&pCxt->msg, "invalid charactor in SQL", sToken.z);
+      }
+
+      if (!hasData) {
+        return buildInvalidOperationMsg(&pCxt->msg, "no data in sql");
+      }
+      break;
+    }
+
+    hasData = false;
+
+    SToken tbnameToken = sToken;
+    NEXT_TOKEN(pCxt->pSql, sToken);
+
+    // USING clause
+    if (TK_USING == sToken.type) {
+      NEXT_TOKEN(pCxt->pSql, sToken);
+      CHECK_CODE(collectTableMetaKey(pCxt, &sToken));
+      CHECK_CODE(skipUsingClause(pCxt));
+      NEXT_TOKEN(pCxt->pSql, sToken);
+    } else {
+      CHECK_CODE(collectTableMetaKey(pCxt, &tbnameToken));
+    }
+
+    if (TK_NK_LP == sToken.type) {
+      // pSql -> field1_name, ...)
+      CHECK_CODE(skipBoundColumns(pCxt));
+      NEXT_TOKEN(pCxt->pSql, sToken);
+    }
+
+    if (TK_VALUES == sToken.type) {
+      // pSql -> (field1_value, ...) [(field1_value2, ...) ...]
+      CHECK_CODE(skipValuesClause(pCxt));
+      hasData = true;
+      continue;
+    }
+
+    // FILE csv_file_path
+    if (TK_FILE == sToken.type) {
+      // pSql -> csv_file_path
+      NEXT_TOKEN(pCxt->pSql, sToken);
+      if (0 == sToken.n || (TK_NK_STRING != sToken.type && TK_NK_ID != sToken.type)) {
+        return buildSyntaxErrMsg(&pCxt->msg, "file path is required following keyword FILE", sToken.z);
+      }
+      hasData = true;
+      continue;
+    }
+
+    return buildSyntaxErrMsg(&pCxt->msg, "keyword VALUES or FILE is expected", sToken.z);
+  }
+
+  return TSDB_CODE_SUCCESS;
+}
+
+int32_t parseInsertSyntax(SParseContext* pContext, SQuery** pQuery) {
+  SInsertParseSyntaxCxt context = {.pComCxt = pContext,
+                                   .pSql = (char*)pContext->pSql,
+                                   .msg = {.buf = pContext->pMsg, .len = pContext->msgLen},
+                                   .pMetaCache = taosMemoryCalloc(1, sizeof(SParseMetaCache))};
+  if (NULL == context.pMetaCache) {
+    return TSDB_CODE_OUT_OF_MEMORY;
+  }
+  int32_t code = skipInsertInto(&context.pSql, &context.msg);
+  if (TSDB_CODE_SUCCESS == code) {
+    code = parseInsertBodySyntax(&context);
+  }
+  if (TSDB_CODE_SUCCESS == code) {
+    *pQuery = taosMemoryCalloc(1, sizeof(SQuery));
+    if (NULL == *pQuery) {
+      return TSDB_CODE_OUT_OF_MEMORY;
+    }
+    TSWAP((*pQuery)->pMetaCache, context.pMetaCache);
+  }
+  return code;
+}
+
 int32_t qCreateSName(SName* pName, const char* pTableName, int32_t acctId, char* dbName, char* msgBuf,
                      int32_t msgBufLen) {
   SMsgBuf msg = {.buf = msgBuf, .len = msgBufLen};
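Note: `parseInsertSyntax` is the first half of splitting INSERT parsing in two. It only tokenizes far enough to learn which tables, vgroups, and write permissions the statement touches, recording them via the `reserve*InCacheExt` calls; actual row parsing happens later in `parseInsertSql` once the catalog data has been cached. A sketch of the intended driver sequence (error handling elided); it mirrors the `runAsync()` helper in the parser tests further down, with the catalog lookup between the two passes supplied by the caller:

    SQuery*     pQuery = NULL;
    SCatalogReq req = {0};
    SMetaData   meta = {0};

    parseInsertSyntax(pCxt, &pQuery);           // pass 1: reserve meta keys
    buildCatalogReq(pQuery->pMetaCache, &req);  // describe what to fetch
    /* ... caller resolves 'req' into 'meta' via the catalog ... */
    putMetaDataToCache(&req, &meta, pQuery->pMetaCache);
    parseInsertSql(pCxt, &pQuery);              // pass 2: full parse, cache-backed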
@@ -671,22 +671,32 @@ int32_t putMetaDataToCache(const SCatalogReq* pCatalogReq, const SMetaData* pMet
   return code;
 }

-static int32_t reserveTableReqInCache(int32_t acctId, const char* pDb, const char* pTable, SHashObj** pTables) {
+static int32_t reserveTableReqInCacheImpl(const char* pTbFName, int32_t len, SHashObj** pTables) {
   if (NULL == *pTables) {
     *pTables = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
     if (NULL == *pTables) {
       return TSDB_CODE_OUT_OF_MEMORY;
     }
   }
+  return taosHashPut(*pTables, pTbFName, len, &pTables, POINTER_BYTES);
+}
+
+static int32_t reserveTableReqInCache(int32_t acctId, const char* pDb, const char* pTable, SHashObj** pTables) {
   char    fullName[TSDB_TABLE_FNAME_LEN];
   int32_t len = snprintf(fullName, sizeof(fullName), "%d.%s.%s", acctId, pDb, pTable);
-  return taosHashPut(*pTables, fullName, len, &pTables, POINTER_BYTES);
+  return reserveTableReqInCacheImpl(fullName, len, pTables);
 }

 int32_t reserveTableMetaInCache(int32_t acctId, const char* pDb, const char* pTable, SParseMetaCache* pMetaCache) {
   return reserveTableReqInCache(acctId, pDb, pTable, &pMetaCache->pTableMeta);
 }

+int32_t reserveTableMetaInCacheExt(const SName* pName, SParseMetaCache* pMetaCache) {
+  char fullName[TSDB_TABLE_FNAME_LEN];
+  tNameExtractFullName(pName, fullName);
+  return reserveTableReqInCacheImpl(fullName, strlen(fullName), &pMetaCache->pTableMeta);
+}
+
 int32_t getTableMetaFromCache(SParseMetaCache* pMetaCache, const SName* pName, STableMeta** pMeta) {
   char fullName[TSDB_TABLE_FNAME_LEN];
   tNameExtractFullName(pName, fullName);
@@ -736,6 +746,12 @@ int32_t reserveTableVgroupInCache(int32_t acctId, const char* pDb, const char* p
   return reserveTableReqInCache(acctId, pDb, pTable, &pMetaCache->pTableVgroup);
 }

+int32_t reserveTableVgroupInCacheExt(const SName* pName, SParseMetaCache* pMetaCache) {
+  char fullName[TSDB_TABLE_FNAME_LEN];
+  tNameExtractFullName(pName, fullName);
+  return reserveTableReqInCacheImpl(fullName, strlen(fullName), &pMetaCache->pTableVgroup);
+}
+
 int32_t getTableVgroupFromCache(SParseMetaCache* pMetaCache, const SName* pName, SVgroupInfo* pVgroup) {
   char fullName[TSDB_TABLE_FNAME_LEN];
   tNameExtractFullName(pName, fullName);
@@ -776,18 +792,30 @@ int32_t getDbCfgFromCache(SParseMetaCache* pMetaCache, const char* pDbFName, SDb
   return TSDB_CODE_SUCCESS;
 }

-int32_t reserveUserAuthInCache(int32_t acctId, const char* pUser, const char* pDb, AUTH_TYPE type,
-                               SParseMetaCache* pMetaCache) {
+static int32_t reserveUserAuthInCacheImpl(const char* pKey, int32_t len, SParseMetaCache* pMetaCache) {
   if (NULL == pMetaCache->pUserAuth) {
     pMetaCache->pUserAuth = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
     if (NULL == pMetaCache->pUserAuth) {
       return TSDB_CODE_OUT_OF_MEMORY;
     }
   }
+  bool pass = false;
+  return taosHashPut(pMetaCache->pUserAuth, pKey, len, &pass, sizeof(pass));
+}
+
+int32_t reserveUserAuthInCache(int32_t acctId, const char* pUser, const char* pDb, AUTH_TYPE type,
+                               SParseMetaCache* pMetaCache) {
   char    key[USER_AUTH_KEY_MAX_LEN] = {0};
   int32_t len = userAuthToString(acctId, pUser, pDb, type, key);
-  bool pass = false;
-  return taosHashPut(pMetaCache->pUserAuth, key, len, &pass, sizeof(pass));
+  return reserveUserAuthInCacheImpl(key, len, pMetaCache);
 }

+int32_t reserveUserAuthInCacheExt(const char* pUser, const SName* pName, AUTH_TYPE type, SParseMetaCache* pMetaCache) {
+  char dbFName[TSDB_DB_FNAME_LEN] = {0};
+  tNameGetFullDbName(pName, dbFName);
+  char    key[USER_AUTH_KEY_MAX_LEN] = {0};
+  int32_t len = userAuthToStringExt(pUser, dbFName, type, key);
+  return reserveUserAuthInCacheImpl(key, len, pMetaCache);
+}
+
 int32_t getUserAuthFromCache(SParseMetaCache* pMetaCache, const char* pUser, const char* pDbFName, AUTH_TYPE type,
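Note: the reserve call stores a `false` placeholder under the user/db/auth-type key; the catalog round trip later overwrites it with the real verdict, which `getUserAuthFromCache` then reads. A fragment (not standalone) showing the reserve/get pair around one round trip, using only names from this diff; `name` is the table's SName produced by createSName, as in collectTableMetaKey:

    CHECK_CODE(reserveUserAuthInCacheExt(pUser, &name, AUTH_TYPE_WRITE, pMetaCache));
    /* ... buildCatalogReq / catalog fetch / putMetaDataToCache ... */
    char dbFName[TSDB_DB_FNAME_LEN] = {0};
    tNameGetFullDbName(&name, dbFName);
    bool pass = false;
    CHECK_CODE(getUserAuthFromCache(pMetaCache, pUser, dbFName, AUTH_TYPE_WRITE, &pass));
    if (!pass) {
      return TSDB_CODE_PAR_PERMISSION_DENIED;
    }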
@@ -34,7 +34,7 @@ bool qIsInsertSql(const char* pStr, size_t length) {
   } while (1);
 }

-static int32_t semanticAnalysis(SParseContext* pCxt, SQuery* pQuery) {
+static int32_t analyseSemantic(SParseContext* pCxt, SQuery* pQuery) {
   int32_t code = authenticate(pCxt, pQuery);

   if (TSDB_CODE_SUCCESS == code && pQuery->placeholderNum > 0) {
@@ -54,12 +54,12 @@ static int32_t semanticAnalysis(SParseContext* pCxt, SQuery* pQuery) {
 static int32_t parseSqlIntoAst(SParseContext* pCxt, SQuery** pQuery) {
   int32_t code = parse(pCxt, pQuery);
   if (TSDB_CODE_SUCCESS == code) {
-    code = semanticAnalysis(pCxt, *pQuery);
+    code = analyseSemantic(pCxt, *pQuery);
   }
   return code;
 }

-static int32_t syntaxParseSql(SParseContext* pCxt, SQuery** pQuery) {
+static int32_t parseSqlSyntax(SParseContext* pCxt, SQuery** pQuery) {
   int32_t code = parse(pCxt, pQuery);
   if (TSDB_CODE_SUCCESS == code) {
     code = collectMetaKey(pCxt, *pQuery);
@@ -192,12 +192,12 @@ int32_t qParseSql(SParseContext* pCxt, SQuery** pQuery) {
   return code;
 }

-int32_t qSyntaxParseSql(SParseContext* pCxt, SQuery** pQuery, struct SCatalogReq* pCatalogReq) {
+int32_t qParseSqlSyntax(SParseContext* pCxt, SQuery** pQuery, struct SCatalogReq* pCatalogReq) {
   int32_t code = TSDB_CODE_SUCCESS;
   if (qIsInsertSql(pCxt->pSql, pCxt->sqlLen)) {
-    // todo insert sql
+    code = parseInsertSyntax(pCxt, pQuery);
   } else {
     code = parseSqlSyntax(pCxt, pQuery);
   }
   if (TSDB_CODE_SUCCESS == code) {
     code = buildCatalogReq((*pQuery)->pMetaCache, pCatalogReq);
@@ -206,13 +206,13 @@ int32_t qSyntaxParseSql(SParseContext* pCxt, SQuery** pQuery, struct SCatalogReq
   return code;
 }

-int32_t qSemanticAnalysisSql(SParseContext* pCxt, const struct SCatalogReq* pCatalogReq,
+int32_t qAnalyseSqlSemantic(SParseContext* pCxt, const struct SCatalogReq* pCatalogReq,
                              const struct SMetaData* pMetaData, SQuery* pQuery) {
   int32_t code = putMetaDataToCache(pCatalogReq, pMetaData, pQuery->pMetaCache);
   if (NULL == pQuery->pRoot) {
-    // todo insert sql
+    return parseInsertSql(pCxt, &pQuery);
   }
-  return semanticAnalysis(pCxt, pQuery);
+  return analyseSemantic(pCxt, pQuery);
 }

 void qDestroyQuery(SQuery* pQueryNode) { nodesDestroyNode(pQueryNode); }
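Note: with INSERT wired in, the renamed pair forms the public async parsing API. A sketch of how a caller is expected to drive it (the driver code itself is an assumption; only the two entry-point signatures come from this diff):

    SQuery*     pQuery = NULL;
    SCatalogReq req = {0};
    int32_t     code = qParseSqlSyntax(pCxt, &pQuery, &req);  // syntax pass + meta-key collection
    if (TSDB_CODE_SUCCESS == code) {
      SMetaData meta = {0};
      /* ... fetch everything listed in 'req' from the catalog into 'meta' ... */
      code = qAnalyseSqlSemantic(pCxt, &req, &meta, pQuery);  // semantic pass; INSERT takes the parseInsertSql branch
    }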
@@ -26,9 +26,7 @@ if(${BUILD_WINGETOPT})
   target_link_libraries(parserTest PUBLIC wingetopt)
 endif()

-if(NOT TD_WINDOWS)
-  add_test(
-    NAME parserTest
-    COMMAND parserTest
-  )
-endif(NOT TD_WINDOWS)
+add_test(
+  NAME parserTest
+  COMMAND parserTest
+)
@@ -242,6 +242,8 @@ class MockCatalogServiceImpl {
     info->outputType = outputType;
     info->outputLen = outputLen;
     info->bufSize = bufSize;
+    info->pCode = nullptr;
+    info->pComment = nullptr;
     udf_.insert(std::make_pair(func, info));
   }
@@ -15,6 +15,7 @@

 #include <gtest/gtest.h>

+#include "mockCatalogService.h"
 #include "os.h"
 #include "parInt.h"
@@ -57,6 +58,38 @@ class InsertTest : public Test {
     return code_;
   }

+  int32_t runAsync() {
+    code_ = parseInsertSyntax(&cxt_, &res_);
+    if (code_ != TSDB_CODE_SUCCESS) {
+      cout << "parseInsertSyntax code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
+      return code_;
+    }
+
+    SCatalogReq catalogReq = {0};
+    code_ = buildCatalogReq(res_->pMetaCache, &catalogReq);
+    if (code_ != TSDB_CODE_SUCCESS) {
+      cout << "buildCatalogReq code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
+      return code_;
+    }
+
+    SMetaData metaData = {0};
+    g_mockCatalogService->catalogGetAllMeta(&catalogReq, &metaData);
+
+    code_ = putMetaDataToCache(&catalogReq, &metaData, res_->pMetaCache);
+    if (code_ != TSDB_CODE_SUCCESS) {
+      cout << "putMetaDataToCache code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
+      return code_;
+    }
+
+    code_ = parseInsertSql(&cxt_, &res_);
+    if (code_ != TSDB_CODE_SUCCESS) {
+      cout << "parseInsertSql code:" << toString(code_) << ", msg:" << errMagBuf_ << endl;
+      return code_;
+    }
+
+    return code_;
+  }
+
   void dumpReslut() {
     SVnodeModifOpStmt* pStmt = getVnodeModifStmt(res_);
     size_t             num = taosArrayGetSize(pStmt->pDataBlocks);
@@ -125,7 +158,7 @@ class InsertTest : public Test {
   SQuery* res_;
 };

-// INSERT INTO tb_name VALUES (field1_value, ...)
+// INSERT INTO tb_name [(field1_name, ...)] VALUES (field1_value, ...)
 TEST_F(InsertTest, singleTableSingleRowTest) {
   setDatabase("root", "test");
@@ -133,6 +166,17 @@ TEST_F(InsertTest, singleTableSingleRowTest) {
   ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
   dumpReslut();
   checkReslut(1, 1);
+
+  bind("insert into t1 (ts, c1, c2, c3, c4, c5) values (now, 1, 'beijing', 3, 4, 5)");
+  ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
+
+  bind("insert into t1 values (now, 1, 'beijing', 3, 4, 5)");
+  ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
+  dumpReslut();
+  checkReslut(1, 1);
+
+  bind("insert into t1 (ts, c1, c2, c3, c4, c5) values (now, 1, 'beijing', 3, 4, 5)");
+  ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
 }

 // INSERT INTO tb_name VALUES (field1_value, ...)(field1_value, ...)
@@ -140,11 +184,16 @@ TEST_F(InsertTest, singleTableMultiRowTest) {
   setDatabase("root", "test");

   bind(
-      "insert into t1 values (now, 1, 'beijing', 3, 4, 5)(now+1s, 2, 'shanghai', 6, 7, 8)(now+2s, 3, 'guangzhou', 9, "
-      "10, 11)");
+      "insert into t1 values (now, 1, 'beijing', 3, 4, 5)(now+1s, 2, 'shanghai', 6, 7, 8)"
+      "(now+2s, 3, 'guangzhou', 9, 10, 11)");
   ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
   dumpReslut();
   checkReslut(1, 3);
+
+  bind(
+      "insert into t1 values (now, 1, 'beijing', 3, 4, 5)(now+1s, 2, 'shanghai', 6, 7, 8)"
+      "(now+2s, 3, 'guangzhou', 9, 10, 11)");
+  ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
 }

 // INSERT INTO tb1_name VALUES (field1_value, ...) tb2_name VALUES (field1_value, ...)
@@ -155,6 +204,9 @@ TEST_F(InsertTest, multiTableSingleRowTest) {
   ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
   dumpReslut();
   checkReslut(2, 1);
+
+  bind("insert into st1s1 values (now, 1, \"beijing\") st1s2 values (now, 10, \"131028\")");
+  ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
 }

 // INSERT INTO tb1_name VALUES (field1_value, ...) tb2_name VALUES (field1_value, ...)
@@ -167,6 +219,11 @@ TEST_F(InsertTest, multiTableMultiRowTest) {
   ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
   dumpReslut();
   checkReslut(2, 3, 2);
+
+  bind(
+      "insert into st1s1 values (now, 1, \"beijing\")(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")"
+      " st1s2 values (now, 10, \"131028\")(now+1s, 20, \"132028\")");
+  ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
 }

 // INSERT INTO
@@ -181,6 +238,21 @@ TEST_F(InsertTest, autoCreateTableTest) {
   ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
   dumpReslut();
   checkReslut(1, 3);
+
+  bind(
+      "insert into st1s1 using st1 (tag1, tag2) tags(1, 'wxy') values (now, 1, \"beijing\")"
+      "(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")");
+  ASSERT_EQ(run(), TSDB_CODE_SUCCESS);
+
+  bind(
+      "insert into st1s1 using st1 tags(1, 'wxy') values (now, 1, \"beijing\")(now+1s, 2, \"shanghai\")(now+2s, 3, "
+      "\"guangzhou\")");
+  ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
+
+  bind(
+      "insert into st1s1 using st1 (tag1, tag2) tags(1, 'wxy') values (now, 1, \"beijing\")"
+      "(now+1s, 2, \"shanghai\")(now+2s, 3, \"guangzhou\")");
+  ASSERT_EQ(runAsync(), TSDB_CODE_SUCCESS);
 }

 TEST_F(InsertTest, toleranceTest) {
@@ -190,4 +262,9 @@ TEST_F(InsertTest, toleranceTest) {
   ASSERT_NE(run(), TSDB_CODE_SUCCESS);
   bind("insert into t");
   ASSERT_NE(run(), TSDB_CODE_SUCCESS);
+
+  bind("insert into");
+  ASSERT_NE(runAsync(), TSDB_CODE_SUCCESS);
+  bind("insert into t");
+  ASSERT_NE(runAsync(), TSDB_CODE_SUCCESS);
 }
@@ -23,20 +23,18 @@ target_link_libraries(
   PUBLIC os
 )

-if(NOT TD_WINDOWS)
-  add_executable(sdbDump sdbDump.c)
-  target_link_libraries(
-    sdbDump
-    PUBLIC dnode
-    PUBLIC mnode
-    PUBLIC sdb
-    PUBLIC os
-  )
-  target_include_directories(
-    sdbDump
-    PUBLIC "${TD_SOURCE_DIR}/include/dnode/mnode"
-    PRIVATE "${TD_SOURCE_DIR}/source/dnode/mnode/impl/inc"
-    PRIVATE "${TD_SOURCE_DIR}/source/dnode/mnode/sdb/inc"
-    PRIVATE "${TD_SOURCE_DIR}/source/dnode/mgmt/node_mgmt/inc"
-  )
-ENDIF ()
+add_executable(sdbDump sdbDump.c)
+target_link_libraries(
+  sdbDump
+  PUBLIC dnode
+  PUBLIC mnode
+  PUBLIC sdb
+  PUBLIC os
+)
+target_include_directories(
+  sdbDump
+  PUBLIC "${TD_SOURCE_DIR}/include/dnode/mnode"
+  PRIVATE "${TD_SOURCE_DIR}/source/dnode/mnode/impl/inc"
+  PRIVATE "${TD_SOURCE_DIR}/source/dnode/mnode/sdb/inc"
+  PRIVATE "${TD_SOURCE_DIR}/source/dnode/mgmt/node_mgmt/inc"
+)
@@ -21,12 +21,12 @@
 #include "tjson.h"

 #define TMP_DNODE_DIR           TD_TMP_DIR_PATH "dumpsdb"
-#define TMP_MNODE_DIR           TD_TMP_DIR_PATH "dumpsdb/mnode"
-#define TMP_SDB_DATA_DIR        TD_TMP_DIR_PATH "dumpsdb/mnode/data"
-#define TMP_SDB_SYNC_DIR        TD_TMP_DIR_PATH "dumpsdb/mnode/sync"
-#define TMP_SDB_DATA_FILE       TD_TMP_DIR_PATH "dumpsdb/mnode/data/sdb.data"
-#define TMP_SDB_RAFT_CFG_FILE   TD_TMP_DIR_PATH "dumpsdb/mnode/sync/raft_config.json"
-#define TMP_SDB_RAFT_STORE_FILE TD_TMP_DIR_PATH "dumpsdb/mnode/sync/raft_store.json"
+#define TMP_MNODE_DIR           TD_TMP_DIR_PATH "dumpsdb" TD_DIRSEP "mnode"
+#define TMP_SDB_DATA_DIR        TD_TMP_DIR_PATH "dumpsdb" TD_DIRSEP "mnode" TD_DIRSEP "data"
+#define TMP_SDB_SYNC_DIR        TD_TMP_DIR_PATH "dumpsdb" TD_DIRSEP "mnode" TD_DIRSEP "sync"
+#define TMP_SDB_DATA_FILE       TD_TMP_DIR_PATH "dumpsdb" TD_DIRSEP "mnode" TD_DIRSEP "data" TD_DIRSEP "sdb.data"
+#define TMP_SDB_RAFT_CFG_FILE   TD_TMP_DIR_PATH "dumpsdb" TD_DIRSEP "mnode" TD_DIRSEP "sync" TD_DIRSEP "raft_config.json"
+#define TMP_SDB_RAFT_STORE_FILE TD_TMP_DIR_PATH "dumpsdb" TD_DIRSEP "mnode" TD_DIRSEP "sync" TD_DIRSEP "raft_store.json"

 void reportStartup(const char *name, const char *desc) {}
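Note: these macros rely on C's adjacent-string-literal concatenation, so the separator can differ per platform; `TD_DIRSEP` is assumed to expand to the platform separator string ("\\" on Windows, "/" elsewhere), as its name suggests. A standalone demonstration of the mechanism:

    #include <stdio.h>

    #define DIRSEP "/"  /* stand-in for TD_DIRSEP, which would be "\\" on Windows */
    #define TMP_MNODE_DIR "dumpsdb" DIRSEP "mnode"  /* adjacent literals concatenate */

    int main(void) {
      printf("%s\n", TMP_MNODE_DIR);  /* prints dumpsdb/mnode */
      return 0;
    }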
@@ -412,13 +412,23 @@ int32_t parseArgs(int32_t argc, char *argv[]) {
     char dataFile[PATH_MAX] = {0};
     char raftCfgFile[PATH_MAX] = {0};
     char raftStoreFile[PATH_MAX] = {0};
-    snprintf(dataFile, PATH_MAX, "%s/mnode/data/sdb.data", tsDataDir);
-    snprintf(raftCfgFile, PATH_MAX, "%s/mnode/sync/raft_config.json", tsDataDir);
-    snprintf(raftStoreFile, PATH_MAX, "%s/mnode/sync/raft_store.json", tsDataDir);
+    snprintf(dataFile, PATH_MAX, "%s" TD_DIRSEP "mnode" TD_DIRSEP "data" TD_DIRSEP "sdb.data", tsDataDir);
+    snprintf(raftCfgFile, PATH_MAX, "%s" TD_DIRSEP "mnode" TD_DIRSEP "sync" TD_DIRSEP "raft_config.json", tsDataDir);
+    snprintf(raftStoreFile, PATH_MAX, "%s" TD_DIRSEP "mnode" TD_DIRSEP "sync" TD_DIRSEP "raft_store.json", tsDataDir);

     char cmd[PATH_MAX * 2] = {0};
     snprintf(cmd, sizeof(cmd), "rm -rf %s", TMP_DNODE_DIR);
     system(cmd);
+#ifdef WINDOWS
+    taosMulMkDir(TMP_SDB_DATA_DIR);
+    taosMulMkDir(TMP_SDB_SYNC_DIR);
+    snprintf(cmd, sizeof(cmd), "cp %s %s 2>nul", dataFile, TMP_SDB_DATA_FILE);
+    system(cmd);
+    snprintf(cmd, sizeof(cmd), "cp %s %s 2>nul", raftCfgFile, TMP_SDB_RAFT_CFG_FILE);
+    system(cmd);
+    snprintf(cmd, sizeof(cmd), "cp %s %s 2>nul", raftStoreFile, TMP_SDB_RAFT_STORE_FILE);
+    system(cmd);
+#else
     snprintf(cmd, sizeof(cmd), "mkdir -p %s", TMP_SDB_DATA_DIR);
     system(cmd);
     snprintf(cmd, sizeof(cmd), "mkdir -p %s", TMP_SDB_SYNC_DIR);
@@ -429,6 +439,7 @@ int32_t parseArgs(int32_t argc, char *argv[]) {
     system(cmd);
     snprintf(cmd, sizeof(cmd), "cp %s %s 2>/dev/null", raftStoreFile, TMP_SDB_RAFT_STORE_FILE);
     system(cmd);
+#endif

     strcpy(tsDataDir, TMP_DNODE_DIR);
     return 0;