Merge branch 'main' of https://github.com/taosdata/TDengine into fix/TD-33600

wangmm0220 2025-01-23 14:59:14 +08:00
commit c3c2dcf522
80 changed files with 1290 additions and 443 deletions

View File

@ -26,7 +26,8 @@ Flink Connector supports all platforms that can run Flink 1.19 and above
| Flink Connector Version | Major Changes | TDengine Version|
|-------------------------| ------------------------------------ | ---------------- |
| 2.0.0 | 1. Supports SQL queries on data in TDengine databases. <br/> 2. Supports CDC subscription to data in TDengine databases.<br/> 3. Supports reading from and writing to TDengine databases using Table SQL. | 3.3.5.0 and higher|
| 2.0.1 | Sink supports writing data of all types that implement RowData.| - |
| 2.0.0 | 1. Supports SQL queries on data in TDengine databases. <br/> 2. Supports CDC subscription to data in TDengine databases.<br/> 3. Supports reading from and writing to TDengine databases using Table SQL. | 3.3.5.1 and higher|
| 1.0.0 | Supports the Sink function for writing data from other sources to TDengine.| 3.3.2.0 and higher|
## Exception and error codes
@ -114,7 +115,7 @@ If using Maven to manage a project, simply add the following dependencies in pom
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.0</version>
<version>2.0.1</version>
</dependency>
```

View File

@ -33,6 +33,7 @@ The JDBC driver implementation for TDengine strives to be consistent with relati
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------ |
| 3.5.3 | Support unsigned data types in WebSocket connections. | - |
| 3.5.2 | Fixed a bug in freeing WebSocket result sets. | - |
| 3.5.1 | Fixed the getObject issue in data subscription. | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
@ -128,24 +129,27 @@ Please refer to the specific error codes:
TDengine currently supports timestamp, numeric, character, and boolean types; the corresponding Java type conversions are as follows:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType | Remark |
| ----------------- | -------------------- | --------------------------------------- |
| TIMESTAMP | java.sql.Timestamp | |
| BOOL | java.lang.Boolean | |
| TINYINT | java.lang.Byte | |
| TINYINT UNSIGNED | java.lang.Short | only supported in WebSocket connections |
| SMALLINT | java.lang.Short | |
| SMALLINT UNSIGNED | java.lang.Integer | only supported in WebSocket connections |
| INT | java.lang.Integer | |
| INT UNSIGNED | java.lang.Long | only supported in WebSocket connections |
| BIGINT | java.lang.Long | |
| BIGINT UNSIGNED | java.math.BigInteger | only supported in WebSocket connections |
| FLOAT | java.lang.Float | |
| DOUBLE | java.lang.Double | |
| BINARY | byte array | |
| NCHAR | java.lang.String | |
| JSON | java.lang.String | only supported in tags |
| VARBINARY | byte[] | |
| GEOMETRY | byte[] | |
**Note**: JSON type is only supported in tags.
Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
**Note**: Due to historical reasons, the BINARY type in TDengine is not truly binary data and is no longer recommended. Please use VARBINARY type instead.
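Putting the mapping above into practice, here is a minimal sketch of reading unsigned columns over a WebSocket connection with plain JDBC getters (the database, table, and column names are illustrative assumptions, not part of this change):

```java
import java.math.BigInteger;
import java.sql.*;

public class UnsignedReadSketch {
    public static void main(String[] args) throws SQLException {
        // unsigned types require a WebSocket connection (see the table above)
        String url = "jdbc:TAOS-WS://localhost:6041/power?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT utinyint_col, uint_col, ubigint_col FROM meters")) {
            while (rs.next()) {
                short utiny = rs.getShort(1);                   // TINYINT UNSIGNED -> java.lang.Short
                long uint = rs.getLong(2);                      // INT UNSIGNED -> java.lang.Long
                BigInteger ubig = (BigInteger) rs.getObject(3); // BIGINT UNSIGNED -> java.math.BigInteger
                System.out.printf("%d %d %s%n", utiny, uint, ubig);
            }
        }
    }
}
```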
The GEOMETRY type is binary data in little-endian byte order, complying with the WKB standard. For more details, please refer to [Data Types](../../sql-manual/data-types/)
For the WKB standard, please refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
For the Java connector, you can use the jts library to conveniently create GEOMETRY type objects, serialize them, and write to TDengine. Here is an example [Geometry Example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)
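As a quick sketch of the jts step alone (the coordinates and class name are illustrative; the linked GeometryDemo is the full example), a little-endian WKB buffer suitable for a GEOMETRY column or tag can be built like this:

```java
import org.locationtech.jts.geom.Coordinate;
import org.locationtech.jts.geom.GeometryFactory;
import org.locationtech.jts.geom.Point;
import org.locationtech.jts.io.ByteOrderValues;
import org.locationtech.jts.io.WKBWriter;

public class WkbSketch {
    public static void main(String[] args) {
        GeometryFactory factory = new GeometryFactory();
        Point point = factory.createPoint(new Coordinate(3.0, 6.0));
        // TDengine expects WKB in little-endian byte order (see the note above)
        WKBWriter writer = new WKBWriter(2, ByteOrderValues.LITTLE_ENDIAN);
        byte[] wkb = writer.write(point);
        // the resulting byte[] can then be bound to a GEOMETRY column or tag,
        // e.g. via setTagGeometry as shown in the parameter-binding demo in this commit
        System.out.println("WKB length: " + wkb.length);
    }
}
```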

View File

@ -19,7 +19,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>org.locationtech.jts</groupId>

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
</dependencies>

View File

@ -18,7 +18,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<!-- druid -->
<dependency>

View File

@ -17,7 +17,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>
<groupId>com.google.guava</groupId>

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -70,7 +70,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<dependency>

View File

@ -67,7 +67,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
<!-- <scope>system</scope>-->
<!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
</dependency>

View File

@ -22,7 +22,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
<!-- ANCHOR_END: dep-->

View File

@ -2,6 +2,7 @@ package com.taos.example;
import com.taosdata.jdbc.ws.TSWSPreparedStatement;
import java.math.BigInteger;
import java.sql.*;
import java.util.Random;
@ -26,7 +27,12 @@ public class WSParameterBindingFullDemo {
"binary_col BINARY(100), " +
"nchar_col NCHAR(100), " +
"varbinary_col VARBINARY(100), " +
"geometry_col GEOMETRY(100)) " +
"geometry_col GEOMETRY(100)," +
"utinyint_col tinyint unsigned," +
"usmallint_col smallint unsigned," +
"uint_col int unsigned," +
"ubigint_col bigint unsigned" +
") " +
"tags (" +
"int_tag INT, " +
"double_tag DOUBLE, " +
@ -34,7 +40,12 @@ public class WSParameterBindingFullDemo {
"binary_tag BINARY(100), " +
"nchar_tag NCHAR(100), " +
"varbinary_tag VARBINARY(100), " +
"geometry_tag GEOMETRY(100))"
"geometry_tag GEOMETRY(100)," +
"utinyint_tag tinyint unsigned," +
"usmallint_tag smallint unsigned," +
"uint_tag int unsigned," +
"ubigint_tag bigint unsigned" +
")"
};
private static final int numOfSubTable = 10, numOfRow = 10;
@ -79,7 +90,7 @@ public class WSParameterBindingFullDemo {
// set table name
pstmt.setTableName("ntb_json_" + i);
// set tags
pstmt.setTagJson(1, "{\"device\":\"device_" + i + "\"}");
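// note: tag setter indexes are 0-based, while column parameter indexes remain 1-based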
pstmt.setTagJson(0, "{\"device\":\"device_" + i + "\"}");
// set columns
long current = System.currentTimeMillis();
for (int j = 0; j < numOfRow; j++) {
@ -94,25 +105,29 @@ public class WSParameterBindingFullDemo {
}
private static void stmtAll(Connection conn) throws SQLException {
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?)";
String sql = "INSERT INTO ? using stb tags(?,?,?,?,?,?,?,?,?,?,?) VALUES (?,?,?,?,?,?,?,?,?,?,?,?)";
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
// set table name
pstmt.setTableName("ntb");
// set tags
pstmt.setTagInt(1, 1);
pstmt.setTagDouble(2, 1.1);
pstmt.setTagBoolean(3, true);
pstmt.setTagString(4, "binary_value");
pstmt.setTagNString(5, "nchar_value");
pstmt.setTagVarbinary(6, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(7, new byte[] {
pstmt.setTagInt(0, 1);
pstmt.setTagDouble(1, 1.1);
pstmt.setTagBoolean(2, true);
pstmt.setTagString(3, "binary_value");
pstmt.setTagNString(4, "nchar_value");
pstmt.setTagVarbinary(5, new byte[] { (byte) 0x98, (byte) 0xf4, 0x6e });
pstmt.setTagGeometry(6, new byte[] {
0x01, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setTagShort(7, (short)255);
pstmt.setTagInt(8, 65535);
pstmt.setTagLong(9, 4294967295L);
pstmt.setTagBigInteger(10, new BigInteger("18446744073709551615"));
long current = System.currentTimeMillis();
@ -129,6 +144,10 @@ public class WSParameterBindingFullDemo {
0x00, 0x00, 0x00, 0x59,
0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x59, 0x40 });
pstmt.setShort(9, (short)255);
pstmt.setInt(10, 65535);
pstmt.setLong(11, 4294967295L);
pstmt.setObject(12, new BigInteger("18446744073709551615"));
pstmt.addBatch();
pstmt.executeBatch();
System.out.println("Successfully inserted rows to example_all_type_stmt.ntb");

View File

@ -89,7 +89,7 @@ TDengine provides a rich set of application development interfaces; to help users quickly
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.5.2</version>
<version>3.5.3</version>
</dependency>
```

View File

@ -19,7 +19,8 @@ TDengine is designed for a wide range of write scenarios, and in many of them TDengine's storage
```SQL
COMPACT DATABASE db_name [start with 'XXXX'] [end with 'YYYY']
COMPACT [db_name.]VGROUPS IN (vgroup_id1, vgroup_id2, ...) [start with 'XXXX'] [end with 'YYYY']
SHOW COMPACTS [compact_id]
SHOW COMPACTS
SHOW COMPACT compact_id
KILL COMPACT compact_id
```
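As an illustrative sketch (the connection URL and the db_name placeholder are assumptions), these statements can be issued from the Java connector like any other SQL:

```java
import java.sql.*;

public class CompactSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS-WS://localhost:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // kick off a compaction of the whole database
            stmt.execute("COMPACT DATABASE db_name");
            // list compaction tasks; each row carries a compact_id
            try (ResultSet rs = stmt.executeQuery("SHOW COMPACTS")) {
                while (rs.next()) {
                    System.out.println("compact id: " + rs.getString(1));
                }
            }
        }
    }
}
```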

View File

@ -69,7 +69,7 @@ On the taosExplorer service page, go to "System Management - Backup"; in the "
1. Database: the name of the database to back up. One backup plan can back up only one database/supertable.
2. Supertable: the name of the supertable to back up. If left empty, the entire database is backed up.
3. Next execution time: the date and time at which the backup task first runs.
4. Backup cycle: the interval between backup points. Note: the backup cycle must be less than the database's WAL_RETENTION_PERIOD parameter value.
5. Error retry count: for errors that can be resolved by retrying, the system retries this many times.
6. Error retry interval: the time between retries.
7. Directory: the directory in which backup files are stored.
@ -152,4 +152,4 @@ Caused by:
```sql
alter
database test wal_retention_period 3600;
```
```

View File

@ -24,7 +24,8 @@ The Flink Connector supports all platforms that can run Flink 1.19 and above.
## Version History
| Flink Connector Version | Major Changes | TDengine Version |
| ------------------| ------------------------------------ | ---------------- |
| 2.0.0 | 1. Supports SQL queries on data in TDengine databases<br/> 2. Supports CDC subscription to data in TDengine databases<br/> 3. Supports reading from and writing to TDengine databases via Table SQL | 3.3.5.0 and higher |
| 2.0.1 | Sink supports writing data of all types that implement RowData | - |
| 2.0.0 | 1. Supports SQL queries on data in TDengine databases<br/> 2. Supports CDC subscription to data in TDengine databases<br/> 3. Supports reading from and writing to TDengine databases via Table SQL | 3.3.5.1 and higher |
| 1.0.0 | Supports the Sink function for writing data from other sources to TDengine | 3.3.2.0 and higher |
## Exceptions and Error Codes
@ -111,7 +112,7 @@ env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.AT_LEAST_ONCE);
<dependency>
<groupId>com.taosdata.flink</groupId>
<artifactId>flink-connector-tdengine</artifactId>
<version>2.0.0</version>
<version>2.0.1</version>
</dependency>
```

View File

@ -0,0 +1,35 @@
---
sidebar_label: Tableau
title: Integration with Tableau
---
Tableau is a well-known business intelligence tool that supports a wide range of data sources and makes it easy to connect, import, and integrate data. Through an intuitive interface, users can create rich, varied visualizations backed by powerful analysis and filtering capabilities, providing strong support for data-driven decisions.
## Prerequisites
Prepare the following environment:
- A TDengine cluster of version 3.3.5.4 or above, deployed and running normally (both Enterprise and Community editions work)
- taosAdapter running normally; see the [taosAdapter user manual](../../../reference/components/taosadapter) for details
- Tableau Desktop installed and running (if not installed, download and install the 32/64-bit [Tableau Desktop](https://www.tableau.com/products/desktop/download) for Windows; for installation, refer to the [official documentation](https://www.tableau.com))
- ODBC driver installed successfully; see [Install ODBC Driver](../../../reference/connector/odbc/#安装) for details
- ODBC data source configured successfully; see [Configure ODBC Data Source](../../../reference/connector/odbc/#配置数据源) for details
## Loading and Analyzing TDengine Data
**Step 1**: Launch Tableau on Windows, then search for "ODBC" on its connection page and select "Other Databases (ODBC)".
**Step 2**: Click the DSN radio button, select the configured data source (MyTDengine), and click the Connect button. After the connection succeeds, delete the appended part of the connection string and click the Sign In button.
![tableau-odbc](./tableau/tableau-odbc.jpg)
**Step 3**: On the workbook page that opens, the connected data source is displayed. Click the database drop-down list to show the databases available for analysis, then click the search button under the table options to list all tables in that database. Drag the tables to analyze into the area on the right to display their structure.
![tableau-workbook](./tableau/tableau-table.jpg)
**Step 4**: Click the "Update Now" button at the bottom to display the data in the table.
![tableau-workbook](./tableau/tableau-data.jpg)
**Step 5**: Click "Worksheet" at the bottom of the window to open the data analysis view, which shows all fields of the table; drag fields onto rows and columns to build charts.
![tableau-workbook](./tableau/tableau-analysis.jpg)

Binary file not shown.

After

Width:  |  Height:  |  Size: 235 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 238 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 237 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 102 KiB

View File

@ -306,7 +306,7 @@ The valid value of charset is UTF-8.
|compressor | |Dynamically modifiable; takes effect after restart |Internal parameter for lossy compression settings|
**Additional Notes**
1. After 3.4.0.0, all configuration parameters are persisted to local storage; after the database service restarts, the persisted parameter list is used by default. If you want to keep using the parameters configured in the config file, set forceReadConfig to 1.
1. After 3.3.5.0, all configuration parameters are persisted to local storage; after the database service restarts, the persisted parameter list is used by default. If you want to keep using the parameters configured in the config file, set forceReadConfig to 1.
2. Effective in versions 3.2.0.0 through 3.3.0.0 (exclusive); once this parameter is enabled, you cannot roll back to the pre-upgrade version.
3. The TSZ compression algorithm compresses via data-prediction techniques, so it is better suited to data that changes in regular patterns.
4. TSZ compression takes longer; it is a suitable choice when your server has plenty of idle CPU and limited storage space.

View File

@ -33,6 +33,7 @@ The JDBC driver implementation for TDengine strives to be as consistent as possible with relational database drivers
| taos-jdbcdriver Version | Major Changes | TDengine Version |
| ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| 3.5.3 | Support unsigned data types in WebSocket connections | - |
| 3.5.2 | Fixed a bug in freeing WebSocket query result sets | - |
| 3.5.1 | Fixed the timestamp object type issue in data subscription | - |
| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data <br/> 2. Optimized the performance of small queries over WebSocket connections <br/> 3. Added support for setting time zone and app info on WebSocket connections | 3.3.5.0 and higher |
@ -128,24 +129,27 @@ The JDBC connector may report four kinds of error codes:
TDengine currently supports timestamp, numeric, character, and boolean types; the corresponding Java type conversions are as follows:
| TDengine DataType | JDBCType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |
| JSON | java.lang.String |
| VARBINARY | byte[] |
| GEOMETRY | byte[] |
| TDengine DataType | JDBCType | Remark |
| ----------------- | -------------------- |-------------------- |
| TIMESTAMP | java.sql.Timestamp ||
| BOOL | java.lang.Boolean ||
| TINYINT | java.lang.Byte ||
| TINYINT UNSIGNED | java.lang.Short |only supported in WebSocket connections|
| SMALLINT | java.lang.Short ||
| SMALLINT UNSIGNED | java.lang.Integer |only supported in WebSocket connections|
| INT | java.lang.Integer ||
| INT UNSIGNED | java.lang.Long |only supported in WebSocket connections|
| BIGINT | java.lang.Long ||
| BIGINT UNSIGNED | java.math.BigInteger |only supported in WebSocket connections|
| FLOAT | java.lang.Float ||
| DOUBLE | java.lang.Double ||
| BINARY | byte array ||
| NCHAR | java.lang.String ||
| JSON | java.lang.String |only supported in tags|
| VARBINARY | byte[] ||
| GEOMETRY | byte[] ||
**注意**JSON 类型仅在 tag 中支持。
由于历史原因TDengine中的BINARY底层不是真正的二进制数据已不建议使用。请用VARBINARY类型代替。
**注意**由于历史原因TDengine中的BINARY底层不是真正的二进制数据已不建议使用。请用VARBINARY类型代替。
GEOMETRY类型是little endian字节序的二进制数据符合WKB规范。详细信息请参考 [数据类型](../../taos-sql/data-type/#数据类型)
WKB规范请参考[Well-Known Binary (WKB)](https://libgeos.org/specifications/wkb/)
对于java连接器可以使用jts库来方便的创建GEOMETRY类型对象序列化后写入TDengine这里有一个样例[Geometry示例](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/GeometryDemo.java)

View File

@ -191,7 +191,7 @@ The data stored by TDengine includes collected time-series data as well as database- and table-related metadata
When managing massive amounts of data, sharding and partitioning strategies are usually adopted to achieve horizontal scaling. TDengine implements data sharding through vnodes and partitions time-series data by splitting data files by time range.
A vnode not only handles time-series data writes, queries, and computation, but also plays an important role in load balancing, data recovery, and support for heterogeneous environments. To achieve these goals, TDengine splits a dnode into multiple vnodes according to its compute and storage resources. The management of these vnodes is completely transparent to applications and is handled automatically by TDengine.
For a single data collection point, no matter how large its data volume is, one vnode has sufficient compute and storage resources to handle it (for example, even at one 16 B record per second, the raw data generated in a year is still less than 0.5 GB). TDengine therefore stores all data of one table (that is, one data collection point) in a single vnode, avoiding spreading data from the same collection point across two or more dnodes. Meanwhile, one vnode can store data for multiple data collection points (tables), with an upper limit of 1 million tables. By design, all tables in a vnode belong to the same database.
@ -371,4 +371,4 @@ alter dnode 1 "/mnt/disk2/taos 1";
3. Multi-level storage does not currently support removing a disk that has already been mounted.
4. Level-0 storage must have at least one mount point with disable_create_new_file set to 0; level-1 and level-2 storage have no such restriction.
:::
:::

View File

@ -34,6 +34,9 @@ extern "C" {
#define GLOBAL_CONFIG_FILE_VERSION 1
#define LOCAL_CONFIG_FILE_VERSION 1
#define RPC_MEMORY_USAGE_RATIO 0.1
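// share of the rpc memory budget (RPC_MEMORY_USAGE_RATIO) given to the rpc queue; the apply queue gets the remaining 1 - QUEUE_MEMORY_USAGE_RATIO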
#define QUEUE_MEMORY_USAGE_RATIO 0.6
typedef enum {
DND_CA_SM4 = 1,
} EEncryptAlgor;
@ -110,6 +113,7 @@ extern int32_t tsNumOfQnodeFetchThreads;
extern int32_t tsNumOfSnodeStreamThreads;
extern int32_t tsNumOfSnodeWriteThreads;
extern int64_t tsQueueMemoryAllowed;
extern int64_t tsApplyMemoryAllowed;
extern int32_t tsRetentionSpeedLimitMB;
extern int32_t tsNumOfCompactThreads;

View File

@ -268,7 +268,7 @@ typedef struct SStoreMeta {
const void* (*extractTagVal)(const void* tag, int16_t type, STagVal* tagVal); // todo remove it
int32_t (*getTableUidByName)(void* pVnode, char* tbName, uint64_t* uid);
int32_t (*getTableTypeByName)(void* pVnode, char* tbName, ETableType* tbType);
int32_t (*getTableTypeSuidByName)(void* pVnode, char* tbName, ETableType* tbType, uint64_t* suid);
int32_t (*getTableNameByUid)(void* pVnode, uint64_t uid, char* tbName);
bool (*isTableExisted)(void* pVnode, tb_uid_t uid);

View File

@ -313,9 +313,9 @@ typedef struct SOrderByExprNode {
} SOrderByExprNode;
typedef struct SLimitNode {
ENodeType type; // QUERY_NODE_LIMIT
int64_t limit;
int64_t offset;
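// limit/offset become value nodes so that LIMIT ? / OFFSET ? can be bound as statement parameters (see the stmt2 limit test below)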
ENodeType type; // QUERY_NODE_LIMIT
SValueNode* limit;
SValueNode* offset;
} SLimitNode;
typedef struct SStateWindowNode {
@ -681,6 +681,7 @@ int32_t nodesValueNodeToVariant(const SValueNode* pNode, SVariant* pVal);
int32_t nodesMakeValueNodeFromString(char* literal, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromBool(bool b, SValueNode** ppValNode);
int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode);
int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode);
char* nodesGetFillModeString(EFillMode mode);
int32_t nodesMergeConds(SNode** pDst, SNodeList** pSrc);

View File

@ -55,6 +55,7 @@ typedef struct {
typedef enum {
DEF_QITEM = 0,
RPC_QITEM = 1,
APPLY_QITEM = 2,
} EQItype;
typedef void (*FItem)(SQueueInfo *pInfo, void *pItem);

View File

@ -131,6 +131,8 @@ typedef struct SStmtQueue {
SStmtQNode* head;
SStmtQNode* tail;
uint64_t qRemainNum;
TdThreadMutex mutex;
TdThreadCond waitCond;
} SStmtQueue;
typedef struct STscStmt {

View File

@ -39,31 +39,39 @@ static FORCE_INLINE int32_t stmtAllocQNodeFromBuf(STableBufInfo* pTblBuf, void**
}
bool stmtDequeue(STscStmt* pStmt, SStmtQNode** param) {
while (0 == atomic_load_64(&pStmt->queue.qRemainNum)) {
taosUsleep(1);
return false;
(void)taosThreadMutexLock(&pStmt->queue.mutex);
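// block until a bind request is queued, waking up early if the queue is being stopped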
while (0 == atomic_load_64((int64_t*)&pStmt->queue.qRemainNum)) {
(void)taosThreadCondWait(&pStmt->queue.waitCond, &pStmt->queue.mutex);
if (atomic_load_8((int8_t*)&pStmt->queue.stopQueue)) {
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
return false;
}
}
SStmtQNode* orig = pStmt->queue.head;
SStmtQNode* node = pStmt->queue.head->next;
pStmt->queue.head = pStmt->queue.head->next;
// taosMemoryFreeClear(orig);
*param = node;
(void)atomic_sub_fetch_64(&pStmt->queue.qRemainNum, 1);
(void)atomic_sub_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1);
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
*param = node;
return true;
}
void stmtEnqueue(STscStmt* pStmt, SStmtQNode* param) {
(void)taosThreadMutexLock(&pStmt->queue.mutex);
pStmt->queue.tail->next = param;
pStmt->queue.tail = param;
pStmt->stat.bindDataNum++;
(void)atomic_add_fetch_64(&pStmt->queue.qRemainNum, 1);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
}
static int32_t stmtCreateRequest(STscStmt* pStmt) {
@ -415,9 +423,11 @@ void stmtResetQueueTableBuf(STableBufInfo* pTblBuf, SStmtQueue* pQueue) {
pTblBuf->buffIdx = 1;
pTblBuf->buffOffset = sizeof(*pQueue->head);
(void)taosThreadMutexLock(&pQueue->mutex);
pQueue->head = pQueue->tail = pTblBuf->pCurBuff;
pQueue->qRemainNum = 0;
pQueue->head->next = NULL;
(void)taosThreadMutexUnlock(&pQueue->mutex);
}
int32_t stmtCleanExecInfo(STscStmt* pStmt, bool keepTable, bool deepClean) {
@ -809,6 +819,8 @@ int32_t stmtStartBindThread(STscStmt* pStmt) {
}
int32_t stmtInitQueue(STscStmt* pStmt) {
(void)taosThreadCondInit(&pStmt->queue.waitCond, NULL);
(void)taosThreadMutexInit(&pStmt->queue.mutex, NULL);
STMT_ERR_RET(stmtAllocQNodeFromBuf(&pStmt->sql.siInfo.tbBuf, (void**)&pStmt->queue.head));
pStmt->queue.tail = pStmt->queue.head;
@ -1619,11 +1631,18 @@ int stmtClose(TAOS_STMT* stmt) {
pStmt->queue.stopQueue = true;
(void)taosThreadMutexLock(&pStmt->queue.mutex);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
if (pStmt->bindThreadInUse) {
(void)taosThreadJoin(pStmt->bindThread, NULL);
pStmt->bindThreadInUse = false;
}
(void)taosThreadCondDestroy(&pStmt->queue.waitCond);
(void)taosThreadMutexDestroy(&pStmt->queue.mutex);
STMT_DLOG("stmt %p closed, stbInterlaceMode: %d, statInfo: ctgGetTbMetaNum=>%" PRId64 ", getCacheTbInfo=>%" PRId64
", parseSqlNum=>%" PRId64 ", pStmt->stat.bindDataNum=>%" PRId64
", settbnameAPI:%u, bindAPI:%u, addbatchAPI:%u, execAPI:%u"
@ -1757,7 +1776,9 @@ _return:
}
int stmtGetParamNum(TAOS_STMT* stmt, int* nums) {
int code = 0;
STscStmt* pStmt = (STscStmt*)stmt;
int32_t preCode = pStmt->errCode;
STMT_DLOG_E("start to get param num");
@ -1765,7 +1786,7 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) {
return pStmt->errCode;
}
STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&
STMT_TYPE_MULTI_INSERT != pStmt->sql.type) {
@ -1777,23 +1798,29 @@ int stmtGetParamNum(TAOS_STMT* stmt, int* nums) {
pStmt->exec.pRequest = NULL;
}
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
if (pStmt->bInfo.needParse) {
STMT_ERR_RET(stmtParseSql(pStmt));
STMT_ERRI_JRET(stmtParseSql(pStmt));
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
*nums = taosArrayGetSize(pStmt->sql.pQuery->pPlaceholderValues);
} else {
STMT_ERR_RET(stmtFetchColFields(stmt, nums, NULL));
STMT_ERRI_JRET(stmtFetchColFields(stmt, nums, NULL));
}
return TSDB_CODE_SUCCESS;
_return:
pStmt->errCode = preCode;
return code;
}
int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) {
int code = 0;
STscStmt* pStmt = (STscStmt*)stmt;
int32_t preCode = pStmt->errCode;
STMT_DLOG_E("start to get param");
@ -1802,10 +1829,10 @@ int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) {
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
STMT_RET(TSDB_CODE_TSC_STMT_API_ERROR);
STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR);
}
STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&
STMT_TYPE_MULTI_INSERT != pStmt->sql.type) {
@ -1817,27 +1844,29 @@ int stmtGetParam(TAOS_STMT* stmt, int idx, int* type, int* bytes) {
pStmt->exec.pRequest = NULL;
}
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
if (pStmt->bInfo.needParse) {
STMT_ERR_RET(stmtParseSql(pStmt));
STMT_ERRI_JRET(stmtParseSql(pStmt));
}
int32_t nums = 0;
TAOS_FIELD_E* pField = NULL;
STMT_ERR_RET(stmtFetchColFields(stmt, &nums, &pField));
STMT_ERRI_JRET(stmtFetchColFields(stmt, &nums, &pField));
if (idx >= nums) {
tscError("idx %d is too big", idx);
taosMemoryFree(pField);
STMT_ERR_RET(TSDB_CODE_INVALID_PARA);
STMT_ERRI_JRET(TSDB_CODE_INVALID_PARA);
}
*type = pField[idx].type;
*bytes = pField[idx].bytes;
taosMemoryFree(pField);
_return:
return TSDB_CODE_SUCCESS;
taosMemoryFree(pField);
pStmt->errCode = preCode;
return code;
}
TAOS_RES* stmtUseResult(TAOS_STMT* stmt) {

View File

@ -39,31 +39,35 @@ static FORCE_INLINE int32_t stmtAllocQNodeFromBuf(STableBufInfo* pTblBuf, void**
}
static bool stmtDequeue(STscStmt2* pStmt, SStmtQNode** param) {
(void)taosThreadMutexLock(&pStmt->queue.mutex);
while (0 == atomic_load_64((int64_t*)&pStmt->queue.qRemainNum)) {
taosUsleep(1);
return false;
(void)taosThreadCondWait(&pStmt->queue.waitCond, &pStmt->queue.mutex);
if (atomic_load_8((int8_t*)&pStmt->queue.stopQueue)) {
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
return false;
}
}
SStmtQNode* orig = pStmt->queue.head;
SStmtQNode* node = pStmt->queue.head->next;
pStmt->queue.head = pStmt->queue.head->next;
// taosMemoryFreeClear(orig);
*param = node;
(void)atomic_sub_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1);
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
return true;
}
static void stmtEnqueue(STscStmt2* pStmt, SStmtQNode* param) {
(void)taosThreadMutexLock(&pStmt->queue.mutex);
pStmt->queue.tail->next = param;
pStmt->queue.tail = param;
pStmt->stat.bindDataNum++;
(void)atomic_add_fetch_64((int64_t*)&pStmt->queue.qRemainNum, 1);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
}
static int32_t stmtCreateRequest(STscStmt2* pStmt) {
@ -339,9 +343,11 @@ static void stmtResetQueueTableBuf(STableBufInfo* pTblBuf, SStmtQueue* pQueue) {
pTblBuf->buffIdx = 1;
pTblBuf->buffOffset = sizeof(*pQueue->head);
(void)taosThreadMutexLock(&pQueue->mutex);
pQueue->head = pQueue->tail = pTblBuf->pCurBuff;
pQueue->qRemainNum = 0;
pQueue->head->next = NULL;
(void)taosThreadMutexUnlock(&pQueue->mutex);
}
static int32_t stmtCleanExecInfo(STscStmt2* pStmt, bool keepTable, bool deepClean) {
@ -735,6 +741,8 @@ static int32_t stmtStartBindThread(STscStmt2* pStmt) {
}
static int32_t stmtInitQueue(STscStmt2* pStmt) {
(void)taosThreadCondInit(&pStmt->queue.waitCond, NULL);
(void)taosThreadMutexInit(&pStmt->queue.mutex, NULL);
STMT_ERR_RET(stmtAllocQNodeFromBuf(&pStmt->sql.siInfo.tbBuf, (void**)&pStmt->queue.head));
pStmt->queue.tail = pStmt->queue.head;
@ -1066,13 +1074,16 @@ static int stmtFetchColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIELD_E
}
static int stmtFetchStbColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIELD_ALL** fields) {
int32_t code = 0;
int32_t preCode = pStmt->errCode;
if (pStmt->errCode != TSDB_CODE_SUCCESS) {
return pStmt->errCode;
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
tscError("invalid operation to get query column fileds");
STMT_ERR_RET(TSDB_CODE_TSC_STMT_API_ERROR);
STMT_ERRI_JRET(TSDB_CODE_TSC_STMT_API_ERROR);
}
STableDataCxt** pDataBlock = NULL;
@ -1084,21 +1095,25 @@ static int stmtFetchStbColFields2(STscStmt2* pStmt, int32_t* fieldNum, TAOS_FIEL
(STableDataCxt**)taosHashGet(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName));
if (NULL == pDataBlock) {
tscError("table %s not found in exec blockHash", pStmt->bInfo.tbFName);
STMT_ERR_RET(TSDB_CODE_APP_ERROR);
STMT_ERRI_JRET(TSDB_CODE_APP_ERROR);
}
}
STMT_ERR_RET(qBuildStmtStbColFields(*pDataBlock, pStmt->bInfo.boundTags, pStmt->bInfo.preCtbname, fieldNum, fields));
STMT_ERRI_JRET(qBuildStmtStbColFields(*pDataBlock, pStmt->bInfo.boundTags, pStmt->bInfo.preCtbname, fieldNum, fields));
if (pStmt->bInfo.tbType == TSDB_SUPER_TABLE) {
pStmt->bInfo.needParse = true;
qDestroyStmtDataBlock(*pDataBlock);
if (taosHashRemove(pStmt->exec.pBlockHash, pStmt->bInfo.tbFName, strlen(pStmt->bInfo.tbFName)) != 0) {
tscError("get fileds %s remove exec blockHash fail", pStmt->bInfo.tbFName);
STMT_ERR_RET(TSDB_CODE_APP_ERROR);
STMT_ERRI_JRET(TSDB_CODE_APP_ERROR);
}
}
return TSDB_CODE_SUCCESS;
_return:
pStmt->errCode = preCode;
return code;
}
/*
SArray* stmtGetFreeCol(STscStmt2* pStmt, int32_t* idx) {
@ -1748,11 +1763,18 @@ int stmtClose2(TAOS_STMT2* stmt) {
pStmt->queue.stopQueue = true;
(void)taosThreadMutexLock(&pStmt->queue.mutex);
(void)taosThreadCondSignal(&(pStmt->queue.waitCond));
(void)taosThreadMutexUnlock(&pStmt->queue.mutex);
if (pStmt->bindThreadInUse) {
(void)taosThreadJoin(pStmt->bindThread, NULL);
pStmt->bindThreadInUse = false;
}
(void)taosThreadCondDestroy(&pStmt->queue.waitCond);
(void)taosThreadMutexDestroy(&pStmt->queue.mutex);
if (pStmt->options.asyncExecFn && !pStmt->semWaited) {
if (tsem_wait(&pStmt->asyncQuerySem) != 0) {
tscError("failed to wait asyncQuerySem");
@ -1824,7 +1846,7 @@ int stmtParseColFields2(TAOS_STMT2* stmt) {
if (pStmt->exec.pRequest && STMT_TYPE_QUERY == pStmt->sql.type && pStmt->sql.runTimes) {
taos_free_result(pStmt->exec.pRequest);
pStmt->exec.pRequest = NULL;
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
}
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
@ -1850,7 +1872,9 @@ int stmtGetStbColFields2(TAOS_STMT2* stmt, int* nums, TAOS_FIELD_ALL** fields) {
}
int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) {
int32_t code = 0;
STscStmt2* pStmt = (STscStmt2*)stmt;
int32_t preCode = pStmt->errCode;
STMT_DLOG_E("start to get param num");
@ -1858,7 +1882,7 @@ int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) {
return pStmt->errCode;
}
STMT_ERR_RET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
STMT_ERRI_JRET(stmtSwitchStatus(pStmt, STMT_FETCH_FIELDS));
if (pStmt->bInfo.needParse && pStmt->sql.runTimes && pStmt->sql.type > 0 &&
STMT_TYPE_MULTI_INSERT != pStmt->sql.type) {
@ -1870,19 +1894,23 @@ int stmtGetParamNum2(TAOS_STMT2* stmt, int* nums) {
pStmt->exec.pRequest = NULL;
}
STMT_ERR_RET(stmtCreateRequest(pStmt));
STMT_ERRI_JRET(stmtCreateRequest(pStmt));
if (pStmt->bInfo.needParse) {
STMT_ERR_RET(stmtParseSql(pStmt));
STMT_ERRI_JRET(stmtParseSql(pStmt));
}
if (STMT_TYPE_QUERY == pStmt->sql.type) {
*nums = taosArrayGetSize(pStmt->sql.pQuery->pPlaceholderValues);
} else {
STMT_ERR_RET(stmtFetchColFields2(stmt, nums, NULL));
STMT_ERRI_JRET(stmtFetchColFields2(stmt, nums, NULL));
}
return TSDB_CODE_SUCCESS;
_return:
pStmt->errCode = preCode;
return code;
}
TAOS_RES* stmtUseResult2(TAOS_STMT2* stmt) {

View File

@ -237,6 +237,63 @@ int main(int argc, char** argv) {
return RUN_ALL_TESTS();
}
TEST(stmt2Case, stmt2_test_limit) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", "", 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "drop database if exists stmt2_testdb_7");
do_query(taos, "create database IF NOT EXISTS stmt2_testdb_7");
do_query(taos, "create stable stmt2_testdb_7.stb (ts timestamp, b binary(10)) tags(t1 int, t2 binary(10))");
do_query(taos,
"insert into stmt2_testdb_7.tb2 using stmt2_testdb_7.stb tags(2,'xyz') values(1591060628000, "
"'abc'),(1591060628001,'def'),(1591060628004, 'hij')");
do_query(taos, "use stmt2_testdb_7");
TAOS_STMT2_OPTION option = {0, true, true, NULL, NULL};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
const char* sql = "select * from stmt2_testdb_7.tb2 where ts > ? and ts < ? limit ?";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int t64_len[1] = {sizeof(int64_t)};
int b_len[1] = {3};
int x = 2;
int x_len = sizeof(int);
int64_t ts[2] = {1591060627000, 1591060628005};
TAOS_STMT2_BIND params[3] = {{TSDB_DATA_TYPE_TIMESTAMP, &ts[0], t64_len, NULL, 1},
{TSDB_DATA_TYPE_TIMESTAMP, &ts[1], t64_len, NULL, 1},
{TSDB_DATA_TYPE_INT, &x, &x_len, NULL, 1}};
TAOS_STMT2_BIND* paramv = &params[0];
TAOS_STMT2_BINDV bindv = {1, NULL, NULL, &paramv};
code = taos_stmt2_bind_param(stmt, &bindv, -1);
checkError(stmt, code);
code = taos_stmt2_exec(stmt, NULL);
checkError(stmt, code);
TAOS_RES* pRes = taos_stmt2_result(stmt);
ASSERT_NE(pRes, nullptr);
int getRecordCounts = 0;
while ((taos_fetch_row(pRes))) {
getRecordCounts++;
}
ASSERT_EQ(getRecordCounts, 2);
taos_stmt2_close(stmt);
do_query(taos, "drop database if exists stmt2_testdb_7");
taos_close(taos);
}
TEST(stmt2Case, insert_stb_get_fields_Test) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
@ -735,7 +792,7 @@ TEST(stmt2Case, insert_ntb_get_fields_Test) {
{
const char* sql = "insert into stmt2_testdb_4.? values(?,?)";
printf("case 2 : %s\n", sql);
getFieldsError(taos, sql, TSDB_CODE_PAR_TABLE_NOT_EXIST);
getFieldsError(taos, sql, TSDB_CODE_TSC_STMT_TBNAME_ERROR);
}
// case 3 : wrong para nums
@ -1496,8 +1553,51 @@ TEST(stmt2Case, geometry) {
checkError(stmt, code);
ASSERT_EQ(affected_rows, 3);
// test wrong wkb input
unsigned char wkb2[3][61] = {
{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0xF0, 0x3F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40,
},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40}};
params[1].buffer = wkb2;
code = taos_stmt2_bind_param(stmt, &bindv, -1);
ASSERT_EQ(code, TSDB_CODE_FUNC_FUNTION_PARA_VALUE);
taos_stmt2_close(stmt);
do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_13");
taos_close(taos);
}
// TD-33582
TEST(stmt2Case, errcode) {
TAOS* taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "DROP DATABASE IF EXISTS stmt2_testdb_14");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt2_testdb_14");
do_query(taos, "use stmt2_testdb_14");
TAOS_STMT2_OPTION option = {0};
TAOS_STMT2* stmt = taos_stmt2_init(taos, &option);
ASSERT_NE(stmt, nullptr);
char* sql = "select * from t where ts > ? and name = ? foo = ?";
int code = taos_stmt2_prepare(stmt, sql, 0);
checkError(stmt, code);
int fieldNum = 0;
TAOS_FIELD_ALL* pFields = NULL;
code = taos_stmt2_get_fields(stmt, &fieldNum, &pFields);
ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR);
// a failed get must not affect the next stmt prepare
sql = "nsert into ? (ts, name) values (?, ?)";
code = taos_stmt_prepare(stmt, sql, 0);
checkError(stmt, code);
}
#pragma GCC diagnostic pop

View File

@ -212,15 +212,6 @@ void insertData(TAOS *taos, TAOS_STMT_OPTIONS *option, const char *sql, int CTB_
void getFields(TAOS *taos, const char *sql, int expectedALLFieldNum, TAOS_FIELD_E *expectedTagFields,
int expectedTagFieldNum, TAOS_FIELD_E *expectedColFields, int expectedColFieldNum) {
// create database and table
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_3");
do_query(taos, "USE stmt_testdb_3");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
int code = taos_stmt_prepare(stmt, sql, 0);
@ -267,6 +258,24 @@ void getFields(TAOS *taos, const char *sql, int expectedALLFieldNum, TAOS_FIELD_
taos_stmt_close(stmt);
}
void getFieldsError(TAOS *taos, const char *sql, int expectedErrocode) {
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
STscStmt *pStmt = (STscStmt *)stmt;
int code = taos_stmt_prepare(stmt, sql, 0);
int fieldNum = 0;
TAOS_FIELD_E *pFields = NULL;
code = taos_stmt_get_tag_fields(stmt, &fieldNum, &pFields);
ASSERT_EQ(code, expectedErrocode);
ASSERT_EQ(pStmt->errCode, TSDB_CODE_SUCCESS);
taosMemoryFree(pFields);
taos_stmt_close(stmt);
}
} // namespace
int main(int argc, char **argv) {
@ -298,6 +307,15 @@ TEST(stmtCase, get_fields) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
// create database and table
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_3");
do_query(taos, "USE stmt_testdb_3");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_3.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
// normal test
{
TAOS_FIELD_E tagFields[2] = {{"groupid", TSDB_DATA_TYPE_INT, 0, 0, sizeof(int)},
{"location", TSDB_DATA_TYPE_BINARY, 0, 0, 24}};
@ -307,6 +325,12 @@ TEST(stmtCase, get_fields) {
{"phase", TSDB_DATA_TYPE_FLOAT, 0, 0, sizeof(float)}};
getFields(taos, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", 7, &tagFields[0], 2, &colFields[0], 4);
}
// error case [TD-33570]
{ getFieldsError(taos, "INSERT INTO ? VALUES (?,?,?,?)", TSDB_CODE_TSC_STMT_TBNAME_ERROR); }
{ getFieldsError(taos, "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)", TSDB_CODE_TSC_STMT_TBNAME_ERROR); }
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_3");
taos_close(taos);
}
@ -520,9 +544,6 @@ TEST(stmtCase, geometry) {
int code = taos_stmt_prepare(stmt, stmt_sql, 0);
checkError(stmt, code);
// code = taos_stmt_set_tbname(stmt, "tb1");
// checkError(stmt, code);
code = taos_stmt_bind_param_batch(stmt, params);
checkError(stmt, code);
@ -532,11 +553,58 @@ TEST(stmtCase, geometry) {
code = taos_stmt_execute(stmt);
checkError(stmt, code);
//test wrong wkb input
unsigned char wkb2[3][61] = {
{
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0xF0, 0x3F, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40,
},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f},
{0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0, 0x3f, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40}};
params[1].buffer = wkb2;
code = taos_stmt_bind_param_batch(stmt, params);
ASSERT_EQ(code, TSDB_CODE_FUNC_FUNTION_PARA_VALUE);
taosMemoryFree(t64_len);
taosMemoryFree(wkb_len);
taos_stmt_close(stmt);
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_5");
taos_close(taos);
}
//TD-33582
TEST(stmtCase, errcode) {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(taos, nullptr);
do_query(taos, "DROP DATABASE IF EXISTS stmt_testdb_4");
do_query(taos, "CREATE DATABASE IF NOT EXISTS stmt_testdb_4");
do_query(taos, "USE stmt_testdb_4");
do_query(
taos,
"CREATE STABLE IF NOT EXISTS stmt_testdb_4.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS "
"(groupId INT, location BINARY(24))");
TAOS_STMT *stmt = taos_stmt_init(taos);
ASSERT_NE(stmt, nullptr);
char *sql = "select * from t where ts > ? and name = ? foo = ?";
int code = taos_stmt_prepare(stmt, sql, 0);
checkError(stmt, code);
int fieldNum = 0;
TAOS_FIELD_E *pFields = NULL;
code = stmtGetParamNum(stmt, &fieldNum);
ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR);
code = taos_stmt_get_tag_fields(stmt, &fieldNum, &pFields);
ASSERT_EQ(code, TSDB_CODE_PAR_SYNTAX_ERROR);
// a failed get must not affect the next stmt prepare
sql = "nsert into ? (ts, name) values (?, ?)";
code = taos_stmt_prepare(stmt, sql, 0);
checkError(stmt, code);
}
#pragma GCC diagnostic pop

View File

@ -14,12 +14,12 @@
*/
#define _DEFAULT_SOURCE
#include "tglobal.h"
#include "cJSON.h"
#include "defines.h"
#include "os.h"
#include "osString.h"
#include "tconfig.h"
#include "tglobal.h"
#include "tgrant.h"
#include "tjson.h"
#include "tlog.h"
@ -500,7 +500,9 @@ int32_t taosSetS3Cfg(SConfig *pCfg) {
TAOS_RETURN(TSDB_CODE_SUCCESS);
}
struct SConfig *taosGetCfg() { return tsCfg; }
struct SConfig *taosGetCfg() {
return tsCfg;
}
static int32_t taosLoadCfg(SConfig *pCfg, const char **envCmd, const char *inputCfgDir, const char *envFile,
char *apolloUrl) {
@ -818,8 +820,13 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfSnodeWriteThreads = tsNumOfCores / 4;
tsNumOfSnodeWriteThreads = TRANGE(tsNumOfSnodeWriteThreads, 2, 4);
tsQueueMemoryAllowed = tsTotalMemoryKB * 1024 * 0.1;
tsQueueMemoryAllowed = TRANGE(tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10LL, TSDB_MAX_MSG_SIZE * 10000LL);
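// split the overall rpc memory budget between the rpc queue and the apply queue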
tsQueueMemoryAllowed = tsTotalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * QUEUE_MEMORY_USAGE_RATIO;
tsQueueMemoryAllowed = TRANGE(tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * QUEUE_MEMORY_USAGE_RATIO * 10LL,
TSDB_MAX_MSG_SIZE * QUEUE_MEMORY_USAGE_RATIO * 10000LL);
tsApplyMemoryAllowed = tsTotalMemoryKB * 1024 * RPC_MEMORY_USAGE_RATIO * (1 - QUEUE_MEMORY_USAGE_RATIO);
tsApplyMemoryAllowed = TRANGE(tsApplyMemoryAllowed, TSDB_MAX_MSG_SIZE * (1 - QUEUE_MEMORY_USAGE_RATIO) * 10LL,
TSDB_MAX_MSG_SIZE * (1 - QUEUE_MEMORY_USAGE_RATIO) * 10000LL);
tsLogBufferMemoryAllowed = tsTotalMemoryKB * 1024 * 0.1;
tsLogBufferMemoryAllowed = TRANGE(tsLogBufferMemoryAllowed, TSDB_MAX_MSG_SIZE * 10LL, TSDB_MAX_MSG_SIZE * 10000LL);
@ -857,7 +864,7 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfSnodeSharedThreads", tsNumOfSnodeStreamThreads, 2, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfSnodeUniqueThreads", tsNumOfSnodeWriteThreads, 2, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * 10L, INT64_MAX, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt64(pCfg, "rpcQueueMemoryAllowed", tsQueueMemoryAllowed, TSDB_MAX_MSG_SIZE * RPC_MEMORY_USAGE_RATIO * 10L, INT64_MAX, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncElectInterval", tsElectInterval, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncHeartbeatInterval", tsHeartbeatInterval, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "syncHeartbeatTimeout", tsHeartbeatTimeout, 10, 1000 * 60 * 24 * 2, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL));
@ -1572,7 +1579,8 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
tsNumOfSnodeWriteThreads = pItem->i32;
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "rpcQueueMemoryAllowed");
tsQueueMemoryAllowed = pItem->i64;
tsQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * QUEUE_MEMORY_USAGE_RATIO;
tsApplyMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * (1 - QUEUE_MEMORY_USAGE_RATIO);
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "simdEnable");
tsSIMDEnable = (bool)pItem->bval;
@ -2395,6 +2403,12 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) {
code = TSDB_CODE_SUCCESS;
goto _exit;
}
if (strcasecmp("rpcQueueMemoryAllowed", name) == 0) {
tsQueueMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * QUEUE_MEMORY_USAGE_RATIO;
tsApplyMemoryAllowed = cfgGetItem(pCfg, "rpcQueueMemoryAllowed")->i64 * (1 - QUEUE_MEMORY_USAGE_RATIO);
code = TSDB_CODE_SUCCESS;
goto _exit;
}
if (strcasecmp(name, "numOfCompactThreads") == 0) {
#ifdef TD_ENTERPRISE
@ -2500,7 +2514,6 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) {
{"experimental", &tsExperimental},
{"numOfRpcSessions", &tsNumOfRpcSessions},
{"rpcQueueMemoryAllowed", &tsQueueMemoryAllowed},
{"shellActivityTimer", &tsShellActivityTimer},
{"readTimeout", &tsReadTimeout},
{"safetyCheckLevel", &tsSafetyCheckLevel},

View File

@ -181,7 +181,7 @@ void dmSendStatusReq(SDnodeMgmt *pMgmt) {
req.numOfSupportVnodes = tsNumOfSupportVnodes;
req.numOfDiskCfg = tsDiskCfgNum;
req.memTotal = tsTotalMemoryKB * 1024;
req.memAvail = req.memTotal - tsQueueMemoryAllowed - 16 * 1024 * 1024;
req.memAvail = req.memTotal - tsQueueMemoryAllowed - tsApplyMemoryAllowed - 16 * 1024 * 1024;
tstrncpy(req.dnodeEp, tsLocalEp, TSDB_EP_LEN);
tstrncpy(req.machineId, pMgmt->pData->machineId, TSDB_MACHINE_ID_LEN + 1);

View File

@ -323,7 +323,7 @@ int32_t vmPutRpcMsgToQueue(SVnodeMgmt *pMgmt, EQueueType qtype, SRpcMsg *pRpc) {
return TSDB_CODE_INVALID_MSG;
}
EQItype itype = APPLY_QUEUE == qtype ? DEF_QITEM : RPC_QITEM;
EQItype itype = APPLY_QUEUE == qtype ? APPLY_QITEM : RPC_QITEM;
SRpcMsg *pMsg;
code = taosAllocateQitem(sizeof(SRpcMsg), itype, pRpc->contLen, (void **)&pMsg);
if (code) {

View File

@ -36,7 +36,8 @@ void Testbase::InitLog(const char* path) {
tstrncpy(tsLogDir, path, PATH_MAX);
taosGetSystemInfo();
tsQueueMemoryAllowed = tsTotalMemoryKB * 0.1;
tsQueueMemoryAllowed = tsTotalMemoryKB * 0.06;
tsApplyMemoryAllowed = tsTotalMemoryKB * 0.04;
if (taosInitLog("taosdlog", 1, false) != 0) {
printf("failed to init log file\n");
}

View File

@ -159,6 +159,7 @@ void removeTasksInBuf(SArray *pTaskIds, SStreamExecInfo *pExecInfo);
int32_t mndFindChangedNodeInfo(SMnode *pMnode, const SArray *pPrevNodeList, const SArray *pNodeList,
SVgroupChangeInfo *pInfo);
void killAllCheckpointTrans(SMnode *pMnode, SVgroupChangeInfo *pChangeInfo);
bool isNodeUpdateTransActive();
int32_t createStreamTaskIter(SStreamObj *pStream, SStreamTaskIter **pIter);
void destroyStreamTaskIter(SStreamTaskIter *pIter);
@ -175,8 +176,8 @@ void removeStreamTasksInBuf(SStreamObj *pStream, SStreamExecInfo *pExecNode);
int32_t mndGetConsensusInfo(SHashObj *pHash, int64_t streamId, int32_t numOfTasks, SCheckpointConsensusInfo **pInfo);
void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpointInfo *pRestoreInfo);
void mndClearConsensusRspEntry(SCheckpointConsensusInfo *pInfo);
int64_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId);
int64_t mndClearChkptReportInfo(SHashObj *pHash, int64_t streamId);
int32_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId);
int32_t mndClearChkptReportInfo(SHashObj *pHash, int64_t streamId);
int32_t mndResetChkptReportInfo(SHashObj *pHash, int64_t streamId);
int32_t setStreamAttrInResBlock(SStreamObj *pStream, SSDataBlock *pBlock, int32_t numOfRows);

View File

@ -65,8 +65,6 @@ static int32_t mndProcessDropOrphanTaskReq(SRpcMsg *pReq);
static void saveTaskAndNodeInfoIntoBuf(SStreamObj *pStream, SStreamExecInfo *pExecNode);
static void addAllStreamTasksIntoBuf(SMnode *pMnode, SStreamExecInfo *pExecInfo);
static void removeExpiredNodeInfo(const SArray *pNodeSnapshot);
static int32_t doKillCheckpointTrans(SMnode *pMnode, const char *pDbName, size_t len);
static SSdbRow *mndStreamActionDecode(SSdbRaw *pRaw);
SSdbRaw *mndStreamSeqActionEncode(SStreamObj *pStream);
@ -801,6 +799,13 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
TSDB_CHECK_NULL(sql, code, lino, _OVER, terrno);
}
// check for the taskEp update trans
if (isNodeUpdateTransActive()) {
mError("stream:%s failed to create stream, node update trans is active", createReq.name);
code = TSDB_CODE_STREAM_TASK_IVLD_STATUS;
goto _OVER;
}
SDbObj *pSourceDb = mndAcquireDb(pMnode, createReq.sourceDB);
if (pSourceDb == NULL) {
code = terrno;
@ -2416,8 +2421,8 @@ static bool validateChkptReport(const SCheckpointReport *pReport, int64_t report
return true;
}
static void doAddReportStreamTask(SArray *pList, int64_t reportChkptId, const SCheckpointReport *pReport) {
bool valid = validateChkptReport(pReport, reportChkptId);
static void doAddReportStreamTask(SArray *pList, int64_t reportedChkptId, const SCheckpointReport *pReport) {
bool valid = validateChkptReport(pReport, reportedChkptId);
if (!valid) {
return;
}
@ -2433,7 +2438,7 @@ static void doAddReportStreamTask(SArray *pList, int64_t reportChkptId, const SC
mError("s-task:0x%x invalid checkpoint-report msg, existed:%" PRId64 " req checkpointId:%" PRId64 ", discard",
pReport->taskId, p->checkpointId, pReport->checkpointId);
} else if (p->checkpointId < pReport->checkpointId) { // expired checkpoint-report msg, update it
mDebug("s-task:0x%x expired checkpoint-report msg in checkpoint-report list update from %" PRId64 "->%" PRId64,
mInfo("s-task:0x%x expired checkpoint-report info in checkpoint-report list update from %" PRId64 "->%" PRId64,
pReport->taskId, p->checkpointId, pReport->checkpointId);
// update the checkpoint report info
@ -2465,7 +2470,8 @@ static void doAddReportStreamTask(SArray *pList, int64_t reportChkptId, const SC
mError("failed to put into task list, taskId:0x%x", pReport->taskId);
} else {
int32_t size = taosArrayGetSize(pList);
mDebug("stream:0x%" PRIx64 " %d tasks has send checkpoint-report", pReport->streamId, size);
mDebug("stream:0x%" PRIx64 " taskId:0x%x checkpoint-report recv, %d tasks has send checkpoint-report",
pReport->streamId, pReport->taskId, size);
}
}
@ -2491,7 +2497,7 @@ int32_t mndProcessCheckpointReport(SRpcMsg *pReq) {
" checkpointVer:%" PRId64 " transId:%d",
req.nodeId, req.taskId, req.checkpointId, req.checkpointVer, req.transId);
// register to the stream task done map, if all tasks has sent this kinds of message, start the checkpoint trans.
// register in the stream task done map; once all tasks have sent this kind of message, start the checkpoint trans.
streamMutexLock(&execInfo.lock);
SStreamObj *pStream = NULL;
@ -2500,7 +2506,7 @@ int32_t mndProcessCheckpointReport(SRpcMsg *pReq) {
mWarn("failed to find the stream:0x%" PRIx64 ", not handle checkpoint-report, try to acquire in buf", req.streamId);
// not in meta-store yet, try to acquire the task in exec buffer
// the checkpoint req arrives too soon before the completion of the create stream trans.
// the checkpoint req arrives too soon before the completion of the creation of stream trans.
STaskId id = {.streamId = req.streamId, .taskId = req.taskId};
void *p = taosHashGet(execInfo.pTaskMap, &id, sizeof(id));
if (p == NULL) {
@ -2533,7 +2539,7 @@ int32_t mndProcessCheckpointReport(SRpcMsg *pReq) {
}
int32_t total = taosArrayGetSize(pInfo->pTaskList);
if (total == numOfTasks) { // all tasks has send the reqs
if (total == numOfTasks) { // all tasks have sent the reqs
mInfo("stream:0x%" PRIx64 " %s all %d tasks send checkpoint-report, checkpoint meta-info for checkpointId:%" PRId64
" will be issued soon",
req.streamId, pStream->name, total, req.checkpointId);

View File

@ -292,6 +292,25 @@ int32_t setTransAction(STrans *pTrans, void *pCont, int32_t contLen, int32_t msg
return mndTransAppendRedoAction(pTrans, &action);
}
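// returns true while a task nodeEp-update transaction is still active for any stream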
bool isNodeUpdateTransActive() {
bool exist = false;
void *pIter = NULL;
streamMutexLock(&execInfo.lock);
while ((pIter = taosHashIterate(execInfo.transMgmt.pDBTrans, pIter)) != NULL) {
SStreamTransInfo *pTransInfo = (SStreamTransInfo *)pIter;
if (strcmp(pTransInfo->name, MND_STREAM_TASK_UPDATE_NAME) == 0) {
mDebug("stream:0x%" PRIx64 " %s st:%" PRId64 " is in task nodeEp update, create new stream not allowed",
pTransInfo->streamId, pTransInfo->name, pTransInfo->startTime);
exist = true;
}
}
streamMutexUnlock(&execInfo.lock);
return exist;
}
int32_t doKillCheckpointTrans(SMnode *pMnode, const char *pDBName, size_t len) {
void *pIter = NULL;

View File

@ -658,6 +658,72 @@ int32_t removeExpiredNodeEntryAndTaskInBuf(SArray *pNodeSnapshot) {
return 0;
}
static int32_t allTasksSendChkptReport(SChkptReportInfo* pReportInfo, int32_t numOfTasks, const char* pName) {
int64_t checkpointId = -1;
int32_t transId = -1;
int32_t taskId = -1;
int32_t existed = (int32_t)taosArrayGetSize(pReportInfo->pTaskList);
if (existed != numOfTasks) {
mDebug("stream:0x%" PRIx64 " %s %d/%d tasks send checkpoint-report, %d not send", pReportInfo->streamId, pName,
existed, numOfTasks, numOfTasks - existed);
return -1;
}
// acquire the current active checkpointId and cross-check the checkpointId info in exec.pTaskList
for(int32_t i = 0; i < numOfTasks; ++i) {
STaskChkptInfo *pInfo = taosArrayGet(pReportInfo->pTaskList, i);
if (pInfo == NULL) {
continue;
}
if (checkpointId == -1) {
checkpointId = pInfo->checkpointId;
transId = pInfo->transId;
taskId = pInfo->taskId;
} else if (checkpointId != pInfo->checkpointId) {
mError("stream:0x%" PRIx64
" checkpointId in checkpoint-report list are not identical, type 1 taskId:0x%x checkpointId:%" PRId64
", type 2 taskId:0x%x checkpointId:%" PRId64,
pReportInfo->streamId, taskId, checkpointId, pInfo->taskId, pInfo->checkpointId);
return -1;
}
}
// check for the correct checkpointId for current task info in STaskChkptInfo
STaskChkptInfo *p = taosArrayGet(pReportInfo->pTaskList, 0);
STaskId id = {.streamId = p->streamId, .taskId = p->taskId};
STaskStatusEntry *pe = taosHashGet(execInfo.pTaskMap, &id, sizeof(id));
if (pe == NULL) {  // the task entry may already be removed; recheck next time
return -1;
}
// cross-check against the active trans; a mismatch means something unexpected is wrong
SStreamTransInfo *pTransInfo = taosHashGet(execInfo.transMgmt.pDBTrans, &id.streamId, sizeof(id.streamId));
if (pTransInfo == NULL) {
mWarn("stream:0x%" PRIx64 " no active trans exists for checkpoint transId:%d, it may have been cleared already",
id.streamId, transId);
if (pe->checkpointInfo.activeId != 0 && pe->checkpointInfo.activeId != checkpointId) {
mWarn("stream:0x%" PRIx64 " active checkpointId is not equalled to the required, current:%" PRId64
", req:%" PRId64 " recheck next time",
id.streamId, pe->checkpointInfo.activeId, checkpointId);
return -1;
} else {
// do nothing
}
} else {
if (pTransInfo->transId != transId) {
mError("stream:0x%" PRIx64
" checkpoint-report list info are expired, active transId:%d trans in list:%d, recheck next time",
id.streamId, pTransInfo->transId, transId);
return -1;
}
}
mDebug("stream:0x%" PRIx64 " %s all %d tasks send checkpoint-report, start to update checkpoint-info", id.streamId,
pName, numOfTasks);
return TSDB_CODE_SUCCESS;
}
int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
SMnode *pMnode = pReq->info.node;
void *pIter = NULL;
@ -668,6 +734,7 @@ int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
}
mDebug("start to scan checkpoint report info");
streamMutexLock(&execInfo.lock);
while ((pIter = taosHashIterate(execInfo.pChkptStreams, pIter)) != NULL) {
@ -693,30 +760,27 @@ int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
}
int32_t total = mndGetNumOfStreamTasks(pStream);
int32_t existed = (int32_t)taosArrayGetSize(px->pTaskList);
if (total == existed) {
mDebug("stream:0x%" PRIx64 " %s all %d tasks send checkpoint-report, start to update checkpoint-info",
pStream->uid, pStream->name, total);
int32_t ret = allTasksSendChkptReport(px, total, pStream->name);
if (ret == 0) {
code = mndStreamTransConflictCheck(pMnode, pStream->uid, MND_STREAM_CHKPT_UPDATE_NAME, false);
if (code == 0) {
code = mndCreateStreamChkptInfoUpdateTrans(pMnode, pStream, px->pTaskList);
if (code == TSDB_CODE_SUCCESS || code == TSDB_CODE_ACTION_IN_PROGRESS) { // remove this entry
taosArrayClear(px->pTaskList);
mInfo("stream:0x%" PRIx64 " clear checkpoint-report list and update the report checkpointId from:%" PRId64
" to %" PRId64,
pInfo->streamId, px->reportChkpt, pInfo->checkpointId);
px->reportChkpt = pInfo->checkpointId;
mDebug("stream:0x%" PRIx64 " clear checkpoint-report list", pInfo->streamId);
} else {
mDebug("stream:0x%" PRIx64 " not launch chkpt-meta update trans, due to checkpoint not finished yet",
mDebug("stream:0x%" PRIx64 " not launch chkpt-info update trans, due to checkpoint not finished yet",
pInfo->streamId);
}
sdbRelease(pMnode->pSdb, pStream);
break;
} else {
mDebug("stream:0x%" PRIx64 " active checkpoint trans not finished yet, wait", pInfo->streamId);
}
} else {
mDebug("stream:0x%" PRIx64 " %s %d/%d tasks send checkpoint-report, %d not send", pInfo->streamId, pStream->name,
existed, total, total - existed);
}
sdbRelease(pMnode->pSdb, pStream);
@ -743,6 +807,8 @@ int32_t mndScanCheckpointReportInfo(SRpcMsg *pReq) {
streamMutexUnlock(&execInfo.lock);
taosArrayDestroy(pDropped);
mDebug("end to scan checkpoint report info")
return TSDB_CODE_SUCCESS;
}
@ -836,7 +902,8 @@ void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpo
SCheckpointConsensusEntry info = {.ts = taosGetTimestampMs()};
memcpy(&info.req, pRestoreInfo, sizeof(info.req));
for (int32_t i = 0; i < taosArrayGetSize(pInfo->pTaskList); ++i) {
int32_t num = (int32_t) taosArrayGetSize(pInfo->pTaskList);
for (int32_t i = 0; i < num; ++i) {
SCheckpointConsensusEntry *p = taosArrayGet(pInfo->pTaskList, i);
if (p == NULL) {
continue;
@ -844,10 +911,12 @@ void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpo
if (p->req.taskId == info.req.taskId) {
mDebug("s-task:0x%x already in consensus-checkpointId list for stream:0x%" PRIx64 ", update ts %" PRId64
"->%" PRId64 " total existed:%d",
pRestoreInfo->taskId, pRestoreInfo->streamId, p->req.startTs, info.req.startTs,
(int32_t)taosArrayGetSize(pInfo->pTaskList));
"->%" PRId64 " checkpointId:%" PRId64 " -> %" PRId64 " total existed:%d",
pRestoreInfo->taskId, pRestoreInfo->streamId, p->req.startTs, info.req.startTs, p->req.checkpointId,
info.req.checkpointId, num);
p->req.startTs = info.req.startTs;
p->req.checkpointId = info.req.checkpointId;
p->req.transId = info.req.transId;
return;
}
}
@ -856,7 +925,7 @@ void mndAddConsensusTasks(SCheckpointConsensusInfo *pInfo, const SRestoreCheckpo
if (p == NULL) {
mError("s-task:0x%x failed to put task into consensus-checkpointId list, code: out of memory", info.req.taskId);
} else {
int32_t num = taosArrayGetSize(pInfo->pTaskList);
num = taosArrayGetSize(pInfo->pTaskList);
mDebug("s-task:0x%x checkpointId:%" PRId64 " added into consensus-checkpointId list, stream:0x%" PRIx64
" waiting tasks:%d",
pRestoreInfo->taskId, pRestoreInfo->checkpointId, pRestoreInfo->streamId, num);
@ -868,7 +937,7 @@ void mndClearConsensusRspEntry(SCheckpointConsensusInfo *pInfo) {
pInfo->pTaskList = NULL;
}
int64_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId) {
int32_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId) {
int32_t code = 0;
int32_t numOfStreams = taosHashGetSize(pHash);
if (numOfStreams == 0) {
@ -885,7 +954,7 @@ int64_t mndClearConsensusCheckpointId(SHashObj *pHash, int64_t streamId) {
return code;
}
int64_t mndClearChkptReportInfo(SHashObj *pHash, int64_t streamId) {
int32_t mndClearChkptReportInfo(SHashObj *pHash, int64_t streamId) {
int32_t code = 0;
int32_t numOfStreams = taosHashGetSize(pHash);
if (numOfStreams == 0) {

View File

@ -130,7 +130,7 @@ int32_t metaGetTableNameByUid(void *pVnode, uint64_t uid, char *tbName);
int metaGetTableSzNameByUid(void *meta, uint64_t uid, char *tbName);
int metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid);
int metaGetTableTypeByName(void *meta, char *tbName, ETableType *tbType);
int metaGetTableTypeSuidByName(void *meta, char *tbName, ETableType *tbType, uint64_t* suid);
int metaGetTableTtlByUid(void *meta, uint64_t uid, int64_t *ttlDays);
bool metaIsTableExist(void *pVnode, tb_uid_t uid);
int32_t metaGetCachedTableUidList(void *pVnode, tb_uid_t suid, const uint8_t *key, int32_t keyLen, SArray *pList,

View File

@ -190,13 +190,20 @@ int metaGetTableUidByName(void *pVnode, char *tbName, uint64_t *uid) {
return 0;
}
int metaGetTableTypeByName(void *pVnode, char *tbName, ETableType *tbType) {
int metaGetTableTypeSuidByName(void *pVnode, char *tbName, ETableType *tbType, uint64_t* suid) {
int code = 0;
SMetaReader mr = {0};
metaReaderDoInit(&mr, ((SVnode *)pVnode)->pMeta, META_READER_LOCK);
code = metaGetTableEntryByName(&mr, tbName);
if (code == 0) *tbType = mr.me.type;
if (TSDB_CHILD_TABLE == mr.me.type) {
*suid = mr.me.ctbEntry.suid;
} else if (TSDB_SUPER_TABLE == mr.me.type) {
*suid = mr.me.uid;
} else {
*suid = 0;
}
metaReaderClear(&mr);
return code;
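The renamed reader now surfaces the super-table uid alongside the table type: a child table reports its parent's suid, a super table reports its own uid, and any other table type reports 0, so callers can verify super-table membership without a second meta lookup.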

View File

@ -94,7 +94,7 @@ void initMetadataAPI(SStoreMeta* pMeta) {
pMeta->getTableTagsByUid = metaGetTableTagsByUids;
pMeta->getTableUidByName = metaGetTableUidByName;
pMeta->getTableTypeByName = metaGetTableTypeByName;
pMeta->getTableTypeSuidByName = metaGetTableTypeSuidByName;
pMeta->getTableNameByUid = metaGetTableNameByUid;
pMeta->getTableSchema = vnodeGetTableSchema;

View File

@ -202,13 +202,13 @@ do { \
#define EXPLAIN_SUM_ROW_END() do { varDataSetLen(tbuf, tlen); tlen += VARSTR_HEADER_SIZE; } while (0)
#define EXPLAIN_ROW_APPEND_LIMIT_IMPL(_pLimit, sl) do { \
if (_pLimit) { \
if (_pLimit && ((SLimitNode*)_pLimit)->limit) { \
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \
SLimitNode* pLimit = (SLimitNode*)_pLimit; \
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit); \
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SLIMIT_FORMAT : EXPLAIN_LIMIT_FORMAT), pLimit->limit->datum.i); \
if (pLimit->offset) { \
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT); \
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset);\
EXPLAIN_ROW_APPEND(((sl) ? EXPLAIN_SOFFSET_FORMAT : EXPLAIN_OFFSET_FORMAT), pLimit->offset->datum.i);\
} \
} \
} while (0)
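This is the first of many hunks with the same shape across the explain, executor, and planner code: SLimitNode's limit and offset fields change from plain int64_t scalars into SValueNode pointers, so every consumer must NULL-check the pointer before reading the value from datum.i. A minimal sketch of the access pattern this diff applies throughout (the helper name is hypothetical, not part of the change):

static int64_t limitValueOrDefault(const SNode* pNode, int64_t defaultVal) {
  const SLimitNode* pLimit = (const SLimitNode*)pNode;
  // after the refactor pLimit->limit is an SValueNode* and may be NULL
  if (NULL == pLimit || NULL == pLimit->limit) return defaultVal;
  return pLimit->limit->datum.i;  // BIGINT payload of the value node
}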

View File

@ -676,9 +676,9 @@ static int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx
EXPLAIN_ROW_APPEND(EXPLAIN_WIN_OFFSET_FORMAT, pStart->literal, pEnd->literal);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
if (NULL != pJoinNode->pJLimit) {
if (NULL != pJoinNode->pJLimit && NULL != ((SLimitNode*)pJoinNode->pJLimit)->limit) {
SLimitNode* pJLimit = (SLimitNode*)pJoinNode->pJLimit;
EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit);
EXPLAIN_ROW_APPEND(EXPLAIN_JLIMIT_FORMAT, pJLimit->limit->datum.i);
EXPLAIN_ROW_APPEND(EXPLAIN_BLANK_FORMAT);
}
if (IS_WINDOW_JOIN(pJoinNode->subType)) {

View File

@ -49,13 +49,13 @@ typedef enum {
static FilterCondType checkTagCond(SNode* cond);
static int32_t optimizeTbnameInCond(void* metaHandle, int64_t suid, SArray* list, SNode* pTagCond, SStorageAPI* pAPI);
static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI);
static int32_t optimizeTbnameInCondImpl(void* metaHandle, SArray* list, SNode* pTagCond, SStorageAPI* pStoreAPI, uint64_t suid);
static int32_t getTableList(void* pVnode, SScanPhysiNode* pScanNode, SNode* pTagCond, SNode* pTagIndexCond,
STableListInfo* pListInfo, uint8_t* digest, const char* idstr, SStorageAPI* pStorageAPI);
static int64_t getLimit(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->limit; }
static int64_t getOffset(const SNode* pLimit) { return NULL == pLimit ? -1 : ((SLimitNode*)pLimit)->offset; }
static int64_t getLimit(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->limit) ? -1 : ((SLimitNode*)pLimit)->limit->datum.i; }
static int64_t getOffset(const SNode* pLimit) { return (NULL == pLimit || NULL == ((SLimitNode*)pLimit)->offset) ? -1 : ((SLimitNode*)pLimit)->offset->datum.i; }
static void releaseColInfoData(void* pCol);
void initResultRowInfo(SResultRowInfo* pResultRowInfo) {
@ -1061,7 +1061,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
int32_t ntype = nodeType(cond);
if (ntype == QUERY_NODE_OPERATOR) {
ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI);
ret = optimizeTbnameInCondImpl(pVnode, list, cond, pAPI, suid);
}
if (ntype != QUERY_NODE_LOGIC_CONDITION || ((SLogicConditionNode*)cond)->condType != LOGIC_COND_TYPE_AND) {
@ -1080,7 +1080,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
SListCell* cell = pList->pHead;
for (int i = 0; i < len; i++) {
if (cell == NULL) break;
if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI) == 0) {
if (optimizeTbnameInCondImpl(pVnode, list, cell->pNode, pAPI, suid) == 0) {
hasTbnameCond = true;
break;
}
@ -1099,7 +1099,7 @@ static int32_t optimizeTbnameInCond(void* pVnode, int64_t suid, SArray* list, SN
// only return uid that does not contained in pExistedUidList
static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, SNode* pTagCond,
SStorageAPI* pStoreAPI) {
SStorageAPI* pStoreAPI, uint64_t suid) {
if (nodeType(pTagCond) != QUERY_NODE_OPERATOR) {
return -1;
}
@ -1148,10 +1148,13 @@ static int32_t optimizeTbnameInCondImpl(void* pVnode, SArray* pExistedUidList, S
for (int i = 0; i < numOfTables; i++) {
char* name = taosArrayGetP(pTbList, i);
uint64_t uid = 0;
uint64_t uid = 0, csuid = 0;
if (pStoreAPI->metaFn.getTableUidByName(pVnode, name, &uid) == 0) {
ETableType tbType = TSDB_TABLE_MAX;
if (pStoreAPI->metaFn.getTableTypeByName(pVnode, name, &tbType) == 0 && tbType == TSDB_CHILD_TABLE) {
if (pStoreAPI->metaFn.getTableTypeSuidByName(pVnode, name, &tbType, &csuid) == 0 && tbType == TSDB_CHILD_TABLE) {
if (suid != csuid) {
continue;
}
if (NULL == uHash || taosHashGet(uHash, &uid, sizeof(uid)) == NULL) {
STUidTagInfo s = {.uid = uid, .name = name, .pTagVal = NULL};
void* tmp = taosArrayPush(pExistedUidList, &s);
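The added suid comparison closes a gap in the tbname-IN fast path: a name listed in tbname IN (...) can resolve to a child table that belongs to a different super table than the one being scanned, and such a uid would previously have been pushed into the candidate list. Fetching the type and suid in a single metaGetTableTypeSuidByName call also avoids a second meta lookup per name.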

View File

@ -1185,7 +1185,7 @@ int32_t createHashJoinOperatorInfo(SOperatorInfo** pDownstream, int32_t numOfDow
pInfo->tblTimeRange.skey = pJoinNode->timeRange.skey;
pInfo->tblTimeRange.ekey = pJoinNode->timeRange.ekey;
pInfo->ctx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX;
pInfo->ctx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX;
setOperatorInfo(pOperator, "HashJoinOperator", QUERY_NODE_PHYSICAL_PLAN_HASH_JOIN, false, OP_NOT_OPENED, pInfo, pTaskInfo);

View File

@ -3592,7 +3592,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p
switch (pJoinNode->subType) {
case JOIN_STYPE_ASOF:
pCtx->asofOpType = pJoinNode->asofOpType;
pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
pCtx->eqRowsAcq = ASOF_EQ_ROW_INCLUDED(pCtx->asofOpType);
pCtx->lowerRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType);
pCtx->greaterRowsAcq = (JOIN_TYPE_RIGHT != pJoin->joinType) ? ASOF_GREATER_ROW_INCLUDED(pCtx->asofOpType) : ASOF_LOWER_ROW_INCLUDED(pCtx->asofOpType);
@ -3609,7 +3609,7 @@ int32_t mJoinInitWindowCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* p
SWindowOffsetNode* pOffsetNode = (SWindowOffsetNode*)pJoinNode->pWindowOffset;
SValueNode* pWinBegin = (SValueNode*)pOffsetNode->pStartOffset;
SValueNode* pWinEnd = (SValueNode*)pOffsetNode->pEndOffset;
pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : INT64_MAX;
pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : INT64_MAX;
pCtx->winBeginOffset = pWinBegin->datum.i;
pCtx->winEndOffset = pWinEnd->datum.i;
pCtx->eqRowsAcq = (pCtx->winBeginOffset <= 0 && pCtx->winEndOffset >= 0);
@ -3662,7 +3662,7 @@ int32_t mJoinInitMergeCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJ
pCtx->hashCan = pJoin->probe->keyNum > 0;
if (JOIN_STYPE_ASOF == pJoinNode->subType || JOIN_STYPE_WIN == pJoinNode->subType) {
pCtx->jLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
pCtx->jLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
pJoin->subType = JOIN_STYPE_OUTER;
pJoin->build->eqRowLimit = pCtx->jLimit;
pJoin->grpResetFp = mLeftJoinGroupReset;

View File

@ -986,7 +986,7 @@ static int32_t mJoinInitTableInfo(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysi
pTable->multiEqGrpRows = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pFPreFilter);
pTable->multiRowsGrp = !((JOIN_STYPE_SEMI == pJoin->subType || JOIN_STYPE_ANTI == pJoin->subType) && NULL == pJoin->pPreFilter);
if (JOIN_STYPE_ASOF == pJoinNode->subType) {
pTable->eqRowLimit = pJoinNode->pJLimit ? ((SLimitNode*)pJoinNode->pJLimit)->limit : 1;
pTable->eqRowLimit = (pJoinNode->pJLimit && ((SLimitNode*)pJoinNode->pJLimit)->limit) ? ((SLimitNode*)pJoinNode->pJLimit)->limit->datum.i : 1;
}
} else {
pTable->multiEqGrpRows = true;
@ -1169,7 +1169,7 @@ static FORCE_INLINE SSDataBlock* mJoinRetrieveImpl(SMJoinOperatorInfo* pJoin, SM
static int32_t mJoinInitCtx(SMJoinOperatorInfo* pJoin, SSortMergeJoinPhysiNode* pJoinNode) {
pJoin->ctx.mergeCtx.groupJoin = pJoinNode->grpJoin;
pJoin->ctx.mergeCtx.limit = pJoinNode->node.pLimit ? ((SLimitNode*)pJoinNode->node.pLimit)->limit : INT64_MAX;
pJoin->ctx.mergeCtx.limit = (pJoinNode->node.pLimit && ((SLimitNode*)pJoinNode->node.pLimit)->limit) ? ((SLimitNode*)pJoinNode->node.pLimit)->limit->datum.i : INT64_MAX;
pJoin->retrieveFp = pJoinNode->grpJoin ? mJoinGrpRetrieveImpl : mJoinRetrieveImpl;
pJoin->outBlkId = pJoinNode->node.pOutputDataBlockDesc->dataBlockId;

View File

@ -84,9 +84,11 @@ int32_t createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortN
calcSortOperMaxTupleLength(pInfo, pSortNode->pSortKeys);
pInfo->maxRows = -1;
if (pSortNode->node.pLimit) {
if (pSortNode->node.pLimit && ((SLimitNode*)pSortNode->node.pLimit)->limit) {
SLimitNode* pLimit = (SLimitNode*)pSortNode->node.pLimit;
if (pLimit->limit > 0) pInfo->maxRows = pLimit->limit + pLimit->offset;
if (pLimit->limit->datum.i > 0) {
pInfo->maxRows = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
}
}
pOperator->exprSupp.pCtx =

View File

@ -63,7 +63,7 @@ static void doKeepPrevRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock
pkey->isNull = false;
char* val = colDataGetData(pColInfoData, rowIndex);
if (IS_VAR_DATA_TYPE(pkey->type)) {
memcpy(pkey->pData, val, varDataLen(val));
memcpy(pkey->pData, val, varDataTLen(val));
} else {
memcpy(pkey->pData, val, pkey->bytes);
}
@ -87,7 +87,7 @@ static void doKeepNextRows(STimeSliceOperatorInfo* pSliceInfo, const SSDataBlock
if (!IS_VAR_DATA_TYPE(pkey->type)) {
memcpy(pkey->pData, val, pkey->bytes);
} else {
memcpy(pkey->pData, val, varDataLen(val));
memcpy(pkey->pData, val, varDataTLen(val));
}
} else {
pkey->isNull = true;

View File

@ -1417,15 +1417,15 @@ int32_t createIntervalOperatorInfo(SOperatorInfo* downstream, SIntervalPhysiNode
pInfo->interval = interval;
pInfo->twAggSup = as;
pInfo->binfo.mergeResultBlock = pPhyNode->window.mergeDataBlock;
if (pPhyNode->window.node.pLimit) {
if (pPhyNode->window.node.pLimit && ((SLimitNode*)pPhyNode->window.node.pLimit)->limit) {
SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pLimit;
pInfo->limited = true;
pInfo->limit = pLimit->limit + pLimit->offset;
pInfo->limit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
}
if (pPhyNode->window.node.pSlimit) {
if (pPhyNode->window.node.pSlimit && ((SLimitNode*)pPhyNode->window.node.pSlimit)->limit) {
SLimitNode* pLimit = (SLimitNode*)pPhyNode->window.node.pSlimit;
pInfo->slimited = true;
pInfo->slimit = pLimit->limit + pLimit->offset;
pInfo->slimit = pLimit->limit->datum.i + (pLimit->offset ? pLimit->offset->datum.i : 0);
pInfo->curGroupId = UINT64_MAX;
}

View File

@ -864,7 +864,11 @@ SSortMergeJoinPhysiNode* createDummySortMergeJoinPhysiNode(SJoinTestParam* param
SLimitNode* limitNode = NULL;
code = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode);
assert(limitNode);
limitNode->limit = param->jLimit;
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&limitNode->limit);
assert(limitNode->limit);
limitNode->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT;
limitNode->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
limitNode->limit->datum.i = param->jLimit;
p->pJLimit = (SNode*)limitNode;
}

View File

@ -1418,6 +1418,7 @@ SNode* qptMakeExprNode(SNode** ppNode) {
SNode* qptMakeLimitNode(SNode** ppNode) {
SNode* pNode = NULL;
int32_t code = 0;
if (QPT_NCORRECT_LOW_PROB()) {
return qptMakeRandNode(&pNode);
}
@ -1429,15 +1430,27 @@ SNode* qptMakeLimitNode(SNode** ppNode) {
if (!qptCtx.param.correctExpected) {
if (taosRand() % 2) {
pLimit->limit = taosRand() * ((taosRand() % 2) ? 1 : -1);
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->limit);
assert(pLimit->limit);
pLimit->limit->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pLimit->limit->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
pLimit->limit->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1);
}
if (taosRand() % 2) {
pLimit->offset = taosRand() * ((taosRand() % 2) ? 1 : -1);
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset);
assert(pLimit->offset);
pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
pLimit->offset->datum.i = taosRand() * ((taosRand() % 2) ? 1 : -1);
}
} else {
pLimit->limit = taosRand();
pLimit->limit->datum.i = taosRand();
if (taosRand() % 2) {
pLimit->offset = taosRand();
code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pLimit->offset);
assert(pLimit->offset);
pLimit->offset->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pLimit->offset->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
pLimit->offset->datum.i = taosRand();
}
}

View File

@ -52,7 +52,7 @@
if (NULL == (pSrc)->fldname) { \
break; \
} \
int32_t code = nodesCloneNode((pSrc)->fldname, &((pDst)->fldname)); \
int32_t code = nodesCloneNode((SNode*)(pSrc)->fldname, (SNode**)&((pDst)->fldname)); \
if (NULL == (pDst)->fldname) { \
return code; \
} \
@ -346,8 +346,8 @@ static int32_t orderByExprNodeCopy(const SOrderByExprNode* pSrc, SOrderByExprNod
}
static int32_t limitNodeCopy(const SLimitNode* pSrc, SLimitNode* pDst) {
COPY_SCALAR_FIELD(limit);
COPY_SCALAR_FIELD(offset);
CLONE_NODE_FIELD(limit);
CLONE_NODE_FIELD(offset);
return TSDB_CODE_SUCCESS;
}

View File

@ -4933,9 +4933,9 @@ static const char* jkLimitOffset = "Offset";
static int32_t limitNodeToJson(const void* pObj, SJson* pJson) {
const SLimitNode* pNode = (const SLimitNode*)pObj;
int32_t code = tjsonAddIntegerToObject(pJson, jkLimitLimit, pNode->limit);
if (TSDB_CODE_SUCCESS == code) {
code = tjsonAddIntegerToObject(pJson, jkLimitOffset, pNode->offset);
int32_t code = tjsonAddObject(pJson, jkLimitLimit, nodeToJson, pNode->limit);
if (TSDB_CODE_SUCCESS == code && pNode->offset) {
code = tjsonAddObject(pJson, jkLimitOffset, nodeToJson, pNode->offset);
}
return code;
@ -4944,9 +4944,9 @@ static int32_t limitNodeToJson(const void* pObj, SJson* pJson) {
static int32_t jsonToLimitNode(const SJson* pJson, void* pObj) {
SLimitNode* pNode = (SLimitNode*)pObj;
int32_t code = tjsonGetBigIntValue(pJson, jkLimitLimit, &pNode->limit);
int32_t code = jsonToNodeObject(pJson, jkLimitLimit, (SNode**)&pNode->limit);
if (TSDB_CODE_SUCCESS == code) {
code = tjsonGetBigIntValue(pJson, jkLimitOffset, &pNode->offset);
code = jsonToNodeObject(pJson, jkLimitOffset, (SNode**)&pNode->offset);
}
return code;

View File

@ -1246,9 +1246,9 @@ enum { LIMIT_CODE_LIMIT = 1, LIMIT_CODE_OFFSET };
static int32_t limitNodeToMsg(const void* pObj, STlvEncoder* pEncoder) {
const SLimitNode* pNode = (const SLimitNode*)pObj;
int32_t code = tlvEncodeI64(pEncoder, LIMIT_CODE_LIMIT, pNode->limit);
if (TSDB_CODE_SUCCESS == code) {
code = tlvEncodeI64(pEncoder, LIMIT_CODE_OFFSET, pNode->offset);
int32_t code = tlvEncodeObj(pEncoder, LIMIT_CODE_LIMIT, nodeToMsg, pNode->limit);
if (TSDB_CODE_SUCCESS == code && pNode->offset) {
code = tlvEncodeObj(pEncoder, LIMIT_CODE_OFFSET, nodeToMsg, pNode->offset);
}
return code;
@ -1262,10 +1262,10 @@ static int32_t msgToLimitNode(STlvDecoder* pDecoder, void* pObj) {
tlvForEach(pDecoder, pTlv, code) {
switch (pTlv->type) {
case LIMIT_CODE_LIMIT:
code = tlvDecodeI64(pTlv, &pNode->limit);
code = msgToNodeFromTlv(pTlv, (void**)&pNode->limit);
break;
case LIMIT_CODE_OFFSET:
code = tlvDecodeI64(pTlv, &pNode->offset);
code = msgToNodeFromTlv(pTlv, (void**)&pNode->offset);
break;
default:
break;
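With LIMIT_CODE_LIMIT and LIMIT_CODE_OFFSET now carrying whole serialized nodes instead of raw i64 payloads, the physical-plan wire format for SLimitNode changes in lockstep with the JSON form above, and the offset is encoded only when present.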

View File

@ -1106,8 +1106,12 @@ void nodesDestroyNode(SNode* pNode) {
case QUERY_NODE_ORDER_BY_EXPR:
nodesDestroyNode(((SOrderByExprNode*)pNode)->pExpr);
break;
case QUERY_NODE_LIMIT: // no pointer field
case QUERY_NODE_LIMIT: {
SLimitNode* pLimit = (SLimitNode*)pNode;
nodesDestroyNode((SNode*)pLimit->limit);
nodesDestroyNode((SNode*)pLimit->offset);
break;
}
case QUERY_NODE_STATE_WINDOW: {
SStateWindowNode* pState = (SStateWindowNode*)pNode;
nodesDestroyNode(pState->pCol);
@ -3097,6 +3101,25 @@ int32_t nodesMakeValueNodeFromInt32(int32_t value, SNode** ppNode) {
return code;
}
int32_t nodesMakeValueNodeFromInt64(int64_t value, SNode** ppNode) {
SValueNode* pValNode = NULL;
int32_t code = nodesMakeNode(QUERY_NODE_VALUE, (SNode**)&pValNode);
if (TSDB_CODE_SUCCESS == code) {
pValNode->node.resType.type = TSDB_DATA_TYPE_BIGINT;
pValNode->node.resType.bytes = tDataTypes[TSDB_DATA_TYPE_BIGINT].bytes;
code = nodesSetValueNodeValue(pValNode, &value);
if (TSDB_CODE_SUCCESS == code) {
pValNode->translate = true;
pValNode->isNull = false;
*ppNode = (SNode*)pValNode;
} else {
nodesDestroyNode((SNode*)pValNode);
}
}
return code;
}
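The new helper gives rewrite rules a one-call way to materialize a translated BIGINT value node; the reworked rewriteTailOptCreateLimit further below relies on it. A hedged usage sketch built only from APIs visible in this diff (error handling abbreviated):

SLimitNode* pLimitNode = NULL;
int32_t     code = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&pLimitNode);
if (TSDB_CODE_SUCCESS == code) {
  code = nodesMakeValueNodeFromInt64(10, (SNode**)&pLimitNode->limit);   // LIMIT 10
}
if (TSDB_CODE_SUCCESS == code) {
  code = nodesMakeValueNodeFromInt64(0, (SNode**)&pLimitNode->offset);   // OFFSET 0
}
if (TSDB_CODE_SUCCESS != code) {
  nodesDestroyNode((SNode*)pLimitNode);  // QUERY_NODE_LIMIT destructor now frees both value nodes
}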
bool nodesIsStar(SNode* pNode) {
return (QUERY_NODE_COLUMN == nodeType(pNode)) && ('\0' == ((SColumnNode*)pNode)->tableAlias[0]) &&
(0 == strcmp(((SColumnNode*)pNode)->colName, "*"));

View File

@ -152,7 +152,7 @@ SNode* createTempTableNode(SAstCreateContext* pCxt, SNode* pSubquery, SToken
SNode* createJoinTableNode(SAstCreateContext* pCxt, EJoinType type, EJoinSubType stype, SNode* pLeft, SNode* pRight,
SNode* pJoinCond);
SNode* createViewNode(SAstCreateContext* pCxt, SToken* pDbName, SToken* pViewName);
SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset);
SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset);
SNode* createOrderByExprNode(SAstCreateContext* pCxt, SNode* pExpr, EOrder order, ENullOrder nullOrder);
SNode* createSessionWindowNode(SAstCreateContext* pCxt, SNode* pCol, SNode* pGap);
SNode* createStateWindowNode(SAstCreateContext* pCxt, SNode* pExpr);

19 source/libs/parser/inc/sql.y  Normal file → Executable file
View File

@ -1078,6 +1078,10 @@ signed_integer(A) ::= NK_MINUS(B) NK_INTEGER(C).
A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &t);
}
unsigned_integer(A) ::= NK_INTEGER(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &B); }
unsigned_integer(A) ::= NK_QUESTION(B). { A = releaseRawExprNode(pCxt, createRawExprNode(pCxt, &B, createPlaceholderValueNode(pCxt, &B))); }
signed_float(A) ::= NK_FLOAT(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); }
signed_float(A) ::= NK_PLUS NK_FLOAT(B). { A = createValueNode(pCxt, TSDB_DATA_TYPE_DOUBLE, &B); }
signed_float(A) ::= NK_MINUS(B) NK_FLOAT(C). {
@ -1098,6 +1102,7 @@ signed_literal(A) ::= NULL(B).
signed_literal(A) ::= literal_func(B). { A = releaseRawExprNode(pCxt, B); }
signed_literal(A) ::= NK_QUESTION(B). { A = createPlaceholderValueNode(pCxt, &B); }
%type literal_list { SNodeList* }
%destructor literal_list { nodesDestroyList($$); }
literal_list(A) ::= signed_literal(B). { A = createNodeList(pCxt, B); }
@ -1480,7 +1485,7 @@ window_offset_literal(A) ::= NK_MINUS(B) NK_VARIABLE(C).
}
jlimit_clause_opt(A) ::= . { A = NULL; }
jlimit_clause_opt(A) ::= JLIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); }
jlimit_clause_opt(A) ::= JLIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); }
/************************************************ query_specification *************************************************/
query_specification(A) ::=
@ -1660,14 +1665,14 @@ order_by_clause_opt(A) ::= .
order_by_clause_opt(A) ::= ORDER BY sort_specification_list(B). { A = B; }
slimit_clause_opt(A) ::= . { A = NULL; }
slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); }
slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(B) SOFFSET NK_INTEGER(C). { A = createLimitNode(pCxt, &B, &C); }
slimit_clause_opt(A) ::= SLIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B). { A = createLimitNode(pCxt, &B, &C); }
slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); }
slimit_clause_opt(A) ::= SLIMIT unsigned_integer(B) SOFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); }
slimit_clause_opt(A) ::= SLIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); }
limit_clause_opt(A) ::= . { A = NULL; }
limit_clause_opt(A) ::= LIMIT NK_INTEGER(B). { A = createLimitNode(pCxt, &B, NULL); }
limit_clause_opt(A) ::= LIMIT NK_INTEGER(B) OFFSET NK_INTEGER(C). { A = createLimitNode(pCxt, &B, &C); }
limit_clause_opt(A) ::= LIMIT NK_INTEGER(C) NK_COMMA NK_INTEGER(B). { A = createLimitNode(pCxt, &B, &C); }
limit_clause_opt(A) ::= LIMIT unsigned_integer(B). { A = createLimitNode(pCxt, B, NULL); }
limit_clause_opt(A) ::= LIMIT unsigned_integer(B) OFFSET unsigned_integer(C). { A = createLimitNode(pCxt, B, C); }
limit_clause_opt(A) ::= LIMIT unsigned_integer(C) NK_COMMA unsigned_integer(B). { A = createLimitNode(pCxt, B, C); }
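Together with the new unsigned_integer(A) ::= NK_QUESTION(B) production above, these rules let LIMIT, SLIMIT, and JLIMIT take a bound parameter instead of an integer literal, so a statement such as SELECT * FROM t1 LIMIT ? OFFSET ? should now parse, with each ? wrapped by createPlaceholderValueNode and resolved at bind time.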
/************************************************ subquery ************************************************************/
subquery(A) ::= NK_LP(B) query_expression(C) NK_RP(D). { A = createRawExprNodeExt(pCxt, &B, &D, C); }

View File

@ -1287,14 +1287,14 @@ _err:
return NULL;
}
SNode* createLimitNode(SAstCreateContext* pCxt, const SToken* pLimit, const SToken* pOffset) {
SNode* createLimitNode(SAstCreateContext* pCxt, SNode* pLimit, SNode* pOffset) {
CHECK_PARSER_STATUS(pCxt);
SLimitNode* limitNode = NULL;
pCxt->errCode = nodesMakeNode(QUERY_NODE_LIMIT, (SNode**)&limitNode);
CHECK_MAKE_NODE(limitNode);
limitNode->limit = taosStr2Int64(pLimit->z, NULL, 10);
limitNode->limit = (SValueNode*)pLimit;
if (NULL != pOffset) {
limitNode->offset = taosStr2Int64(pOffset->z, NULL, 10);
limitNode->offset = (SValueNode*)pOffset;
}
return (SNode*)limitNode;
_err:
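createLimitNode now receives fully built value nodes (literals or placeholders) rather than raw tokens, so the taosStr2Int64 parsing formerly done here moves into the createValueNode and createPlaceholderValueNode productions shown in sql.y above.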

View File

@ -2751,6 +2751,9 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt, SVnodeModifyOpStmt* pS
if (TSDB_CODE_SUCCESS == code && hasData) {
code = parseInsertTableClause(pCxt, pStmt, &token);
}
if (TSDB_CODE_PAR_TABLE_NOT_EXIST == code && pCxt->preCtbname) {
code = TSDB_CODE_TSC_STMT_TBNAME_ERROR;
}
}
if (TSDB_CODE_SUCCESS == code && !pCxt->missCache) {

View File

@ -4729,16 +4729,20 @@ static int32_t translateJoinTable(STranslateContext* pCxt, SJoinTableNode* pJoin
return buildInvalidOperationMsg(&pCxt->msgBuf, "WINDOW_OFFSET required for WINDOW join");
}
if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit) {
if (TSDB_CODE_SUCCESS == code && NULL != pJoinTable->pJLimit && NULL != ((SLimitNode*)pJoinTable->pJLimit)->limit) {
if (*pSType != JOIN_STYPE_ASOF && *pSType != JOIN_STYPE_WIN) {
return buildInvalidOperationMsgExt(&pCxt->msgBuf, "JLIMIT not supported for %s join",
getFullJoinTypeString(type, *pSType));
}
SLimitNode* pJLimit = (SLimitNode*)pJoinTable->pJLimit;
if (pJLimit->limit > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit < 0) {
code = translateExpr(pCxt, (SNode**)&pJLimit->limit);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
if (pJLimit->limit->datum.i > JOIN_JLIMIT_MAX_VALUE || pJLimit->limit->datum.i < 0) {
return buildInvalidOperationMsg(&pCxt->msgBuf, "JLIMIT value is out of valid range [0, 1024]");
}
if (0 == pJLimit->limit) {
if (0 == pJLimit->limit->datum.i) {
pCurrSmt->isEmptyResult = true;
}
}
@ -6994,16 +6998,32 @@ static int32_t translateFrom(STranslateContext* pCxt, SNode** pTable) {
}
static int32_t checkLimit(STranslateContext* pCxt, SSelectStmt* pSelect) {
if ((NULL != pSelect->pLimit && pSelect->pLimit->offset < 0) ||
(NULL != pSelect->pSlimit && pSelect->pSlimit->offset < 0)) {
int32_t code = 0;
if (pSelect->pLimit && pSelect->pLimit->limit) {
code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->limit);
}
if (TSDB_CODE_SUCCESS == code && pSelect->pLimit && pSelect->pLimit->offset) {
code = translateExpr(pCxt, (SNode**)&pSelect->pLimit->offset);
}
if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->limit) {
code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->limit);
}
if (TSDB_CODE_SUCCESS == code && pSelect->pSlimit && pSelect->pSlimit->offset) {
code = translateExpr(pCxt, (SNode**)&pSelect->pSlimit->offset);
}
if ((TSDB_CODE_SUCCESS == code) &&
((NULL != pSelect->pLimit && pSelect->pLimit->offset && pSelect->pLimit->offset->datum.i < 0) ||
(NULL != pSelect->pSlimit && pSelect->pSlimit->offset && pSelect->pSlimit->offset->datum.i < 0))) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO);
}
if (NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) {
if ((TSDB_CODE_SUCCESS == code) && NULL != pSelect->pSlimit && (NULL == pSelect->pPartitionByList && NULL == pSelect->pGroupByList)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_SLIMIT_LEAK_PARTITION_GROUP_BY);
}
return TSDB_CODE_SUCCESS;
return code;
}
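checkLimit now runs translateExpr over each limit/offset value node that is present before validating it; this is the step that resolves a bound ? placeholder (or folds a constant expression) into the concrete datum.i the negative-offset check inspects. checkSetOperLimit below gets the same treatment.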
static int32_t createPrimaryKeyColByTable(STranslateContext* pCxt, STableNode* pTable, SNode** pPrimaryKey) {
@ -7482,7 +7502,14 @@ static int32_t translateSetOperOrderBy(STranslateContext* pCxt, SSetOperator* pS
}
static int32_t checkSetOperLimit(STranslateContext* pCxt, SLimitNode* pLimit) {
if ((NULL != pLimit && pLimit->offset < 0)) {
int32_t code = 0;
if (pLimit && pLimit->limit) {
code = translateExpr(pCxt, (SNode**)&pLimit->limit);
}
if (TSDB_CODE_SUCCESS == code && pLimit && pLimit->offset) {
code = translateExpr(pCxt, (SNode**)&pLimit->offset);
}
if (TSDB_CODE_SUCCESS == code && (NULL != pLimit && NULL != pLimit->offset && pLimit->offset->datum.i < 0)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_OFFSET_LESS_ZERO);
}
return TSDB_CODE_SUCCESS;

View File

@ -3705,8 +3705,14 @@ static int32_t rewriteTailOptCreateLimit(SNode* pLimit, SNode* pOffset, SNode**
if (NULL == pLimitNode) {
return code;
}
pLimitNode->limit = NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i;
pLimitNode->offset = NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i;
code = nodesMakeValueNodeFromInt64(NULL == pLimit ? -1 : ((SValueNode*)pLimit)->datum.i, (SNode**)&pLimitNode->limit);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
code = nodesMakeValueNodeFromInt64(NULL == pOffset ? 0 : ((SValueNode*)pOffset)->datum.i, (SNode**)&pLimitNode->offset);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
*pOutput = (SNode*)pLimitNode;
return TSDB_CODE_SUCCESS;
}

View File

@ -1823,9 +1823,9 @@ static int32_t createAggPhysiNode(SPhysiPlanContext* pCxt, SNodeList* pChildren,
if (NULL == pAgg) {
return terrno;
}
if (pAgg->node.pSlimit) {
if (pAgg->node.pSlimit && ((SLimitNode*)pAgg->node.pSlimit)->limit) {
pSubPlan->dynamicRowThreshold = true;
pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit;
pSubPlan->rowsThreshold = ((SLimitNode*)pAgg->node.pSlimit)->limit->datum.i;
}
pAgg->mergeDataBlock = (GROUP_ACTION_KEEP == pAggLogicNode->node.groupAction ? false : true);

View File

@ -133,8 +133,12 @@ static int32_t splCreateExchangeNode(SSplitContext* pCxt, SLogicNode* pChild, SE
nodesDestroyNode((SNode*)pExchange);
return code;
}
((SLimitNode*)pChild->pLimit)->limit += ((SLimitNode*)pChild->pLimit)->offset;
((SLimitNode*)pChild->pLimit)->offset = 0;
if (((SLimitNode*)pChild->pLimit)->limit && ((SLimitNode*)pChild->pLimit)->offset) {
((SLimitNode*)pChild->pLimit)->limit->datum.i += ((SLimitNode*)pChild->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pChild->pLimit)->offset) {
((SLimitNode*)pChild->pLimit)->offset->datum.i = 0;
}
}
*pOutput = pExchange;
@ -679,8 +683,12 @@ static int32_t stbSplCreateMergeNode(SSplitContext* pCxt, SLogicSubplan* pSubpla
if (TSDB_CODE_SUCCESS == code && NULL != pSplitNode->pLimit) {
pMerge->node.pLimit = NULL;
code = nodesCloneNode(pSplitNode->pLimit, &pMerge->node.pLimit);
((SLimitNode*)pSplitNode->pLimit)->limit += ((SLimitNode*)pSplitNode->pLimit)->offset;
((SLimitNode*)pSplitNode->pLimit)->offset = 0;
if (((SLimitNode*)pSplitNode->pLimit)->limit && ((SLimitNode*)pSplitNode->pLimit)->offset) {
((SLimitNode*)pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pSplitNode->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pSplitNode->pLimit)->offset) {
((SLimitNode*)pSplitNode->pLimit)->offset->datum.i = 0;
}
}
if (TSDB_CODE_SUCCESS == code) {
code = stbSplRewriteFromMergeNode(pMerge, pSplitNode);
@ -1427,8 +1435,12 @@ static int32_t stbSplGetSplitNodeForScan(SStableSplitInfo* pInfo, SLogicNode** p
if (NULL == (*pSplitNode)->pLimit) {
return code;
}
((SLimitNode*)pInfo->pSplitNode->pLimit)->limit += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset;
((SLimitNode*)pInfo->pSplitNode->pLimit)->offset = 0;
if (((SLimitNode*)pInfo->pSplitNode->pLimit)->limit && ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) {
((SLimitNode*)pInfo->pSplitNode->pLimit)->limit->datum.i += ((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pInfo->pSplitNode->pLimit)->offset) {
((SLimitNode*)pInfo->pSplitNode->pLimit)->offset->datum.i = 0;
}
}
}
return TSDB_CODE_SUCCESS;
@ -1579,8 +1591,12 @@ static int32_t stbSplSplitMergeScanNode(SSplitContext* pCxt, SLogicSubplan* pSub
int32_t code = stbSplCreateMergeScanNode(pScan, &pMergeScan, &pMergeKeys);
if (TSDB_CODE_SUCCESS == code) {
if (NULL != pMergeScan->pLimit) {
((SLimitNode*)pMergeScan->pLimit)->limit += ((SLimitNode*)pMergeScan->pLimit)->offset;
((SLimitNode*)pMergeScan->pLimit)->offset = 0;
if (((SLimitNode*)pMergeScan->pLimit)->limit && ((SLimitNode*)pMergeScan->pLimit)->offset) {
((SLimitNode*)pMergeScan->pLimit)->limit->datum.i += ((SLimitNode*)pMergeScan->pLimit)->offset->datum.i;
}
if (((SLimitNode*)pMergeScan->pLimit)->offset) {
((SLimitNode*)pMergeScan->pLimit)->offset->datum.i = 0;
}
}
code = stbSplCreateMergeNode(pCxt, pSubplan, (SLogicNode*)pScan, pMergeKeys, pMergeScan, groupSort, true);
}

View File

@ -592,8 +592,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b
if (pParent->pLimit && (cloneWhat & CLONE_LIMIT)) {
code = nodesCloneNode(pParent->pLimit, (SNode**)&pLimit);
if (TSDB_CODE_SUCCESS == code) {
pLimit->limit += pLimit->offset;
pLimit->offset = 0;
if (pLimit->limit && pLimit->offset) {
pLimit->limit->datum.i += pLimit->offset->datum.i;
}
if (pLimit->offset) {
pLimit->offset->datum.i = 0;
}
cloned = true;
}
}
@ -601,8 +605,12 @@ int32_t cloneLimit(SLogicNode* pParent, SLogicNode* pChild, uint8_t cloneWhat, b
if (pParent->pSlimit && (cloneWhat & CLONE_SLIMIT)) {
code = nodesCloneNode(pParent->pSlimit, (SNode**)&pSlimit);
if (TSDB_CODE_SUCCESS == code) {
pSlimit->limit += pSlimit->offset;
pSlimit->offset = 0;
if (pSlimit->limit && pSlimit->offset) {
pSlimit->limit->datum.i += pSlimit->offset->datum.i;
}
if (pSlimit->offset) {
pSlimit->offset->datum.i = 0;
}
cloned = true;
}
}
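The same fold-offset-into-limit idiom now appears at every split and clone site: when a limit is pushed down past an exchange, merge, or merge-scan, the child must produce limit+offset rows while the offset itself is applied once at the parent, and both steps are now guarded by NULL checks on the value nodes. A hedged sketch of a helper capturing the repeated idiom (hypothetical name, not part of the diff):

static void foldOffsetIntoLimit(SLimitNode* pLimit) {
  if (NULL == pLimit) return;
  if (pLimit->limit && pLimit->offset) {
    pLimit->limit->datum.i += pLimit->offset->datum.i;  // child emits limit+offset rows
  }
  if (pLimit->offset) {
    pLimit->offset->datum.i = 0;  // offset is applied only once, at the parent
  }
}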

View File

@ -604,6 +604,17 @@ int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SV
streamMutexLock(&pTask->lock);
// not update the checkpoint info if the checkpointId is less than the failed checkpointId
if (pReq->checkpointId < pInfo->pActiveInfo->failedId) {
stWarn("s-task:%s vgId:%d not update the checkpoint-info, since update checkpointId:%" PRId64
" is less than the failed checkpointId:%" PRId64 ", discard the update info",
id, vgId, pReq->checkpointId, pInfo->pActiveInfo->failedId);
streamMutexUnlock(&pTask->lock);
// always return success so the caller treats the stale update as handled
return TSDB_CODE_SUCCESS;
}
if (pReq->checkpointId <= pInfo->checkpointId) {
stDebug("s-task:%s vgId:%d latest checkpointId:%" PRId64 " Ver:%" PRId64
" no need to update checkpoint info, updated checkpointId:%" PRId64 " Ver:%" PRId64 " transId:%d ignored",
@ -638,9 +649,9 @@ int32_t streamTaskUpdateTaskCheckpointInfo(SStreamTask* pTask, bool restored, SV
pInfo->checkpointTime, pReq->checkpointTs);
} else { // not in restore status, must be in checkpoint status
if ((pStatus.state == TASK_STATUS__CK) || (pMeta->role == NODE_ROLE_FOLLOWER)) {
stDebug("s-task:%s vgId:%d status:%s start to update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64
stDebug("s-task:%s vgId:%d status:%s role:%d start to update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64
" checkpointVer:%" PRId64 "->%" PRId64 " checkpointTs:%" PRId64 "->%" PRId64,
id, vgId, pStatus.name, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
id, vgId, pStatus.name, pMeta->role, pInfo->checkpointId, pReq->checkpointId, pInfo->checkpointVer,
pReq->checkpointVer, pInfo->checkpointTime, pReq->checkpointTs);
} else {
stDebug("s-task:%s vgId:%d status:%s NOT update the checkpoint-info, checkpointId:%" PRId64 "->%" PRId64

View File

@ -732,7 +732,11 @@ int32_t syncFsmExecute(SSyncNode* pNode, SSyncFSM* pFsm, ESyncState role, SyncTe
pEntry->index, pEntry->term, TMSG_INFO(pEntry->originalRpcType), code, retry);
if (retry) {
taosMsleep(10);
sError("vgId:%d, retry on fsm commit since %s. index:%" PRId64, pNode->vgId, tstrerror(code), pEntry->index);
if (code == TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE) {
sError("vgId:%d, failed to execute fsm since %s. index:%" PRId64, pNode->vgId, terrstr(), pEntry->index);
} else {
sDebug("vgId:%d, retry on fsm commit since %s. index:%" PRId64, pNode->vgId, terrstr(), pEntry->index);
}
}
} while (retry);
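The retry log is now split by cause: exhaustion of the RPC memory queue keeps error level, while ordinary fsm-commit retries drop to debug so that a transient backlog no longer floods the error log; the duplicate unconditional sError line is removed.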

View File

@ -14,14 +14,16 @@
*/
#define _DEFAULT_SOURCE
#include "tqueue.h"
#include "taoserror.h"
#include "tlog.h"
#include "tqueue.h"
#include "tutil.h"
int64_t tsQueueMemoryAllowed = 0;
int64_t tsQueueMemoryUsed = 0;
int64_t tsApplyMemoryAllowed = 0;
int64_t tsApplyMemoryUsed = 0;
struct STaosQueue {
STaosQnode *head;
STaosQnode *tail;
@ -148,20 +150,34 @@ int64_t taosQueueMemorySize(STaosQueue *queue) {
}
int32_t taosAllocateQitem(int32_t size, EQItype itype, int64_t dataSize, void **item) {
int64_t alloced = atomic_add_fetch_64(&tsQueueMemoryUsed, size + dataSize);
if (alloced > tsQueueMemoryAllowed) {
if (itype == RPC_QITEM) {
int64_t alloced = -1;
if (itype == RPC_QITEM) {
alloced = atomic_add_fetch_64(&tsQueueMemoryUsed, size + dataSize);
if (alloced > tsQueueMemoryAllowed) {
uError("failed to alloc qitem, size:%" PRId64 " alloc:%" PRId64 " allowed:%" PRId64, size + dataSize, alloced,
tsQueueMemoryAllowed);
(void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize);
return (terrno = TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE);
}
} else if (itype == APPLY_QITEM) {
alloced = atomic_add_fetch_64(&tsApplyMemoryUsed, size + dataSize);
if (alloced > tsApplyMemoryAllowed) {
uDebug("failed to alloc qitem, size:%" PRId64 " alloc:%" PRId64 " allowed:%" PRId64, size + dataSize, alloced,
tsApplyMemoryAllowed);
(void)atomic_sub_fetch_64(&tsApplyMemoryUsed, size + dataSize);
return (terrno = TSDB_CODE_OUT_OF_RPC_MEMORY_QUEUE);
}
}
*item = NULL;
STaosQnode *pNode = taosMemoryCalloc(1, sizeof(STaosQnode) + size);
if (pNode == NULL) {
(void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize);
if (itype == RPC_QITEM) {
(void)atomic_sub_fetch_64(&tsQueueMemoryUsed, size + dataSize);
} else if (itype == APPLY_QITEM) {
(void)atomic_sub_fetch_64(&tsApplyMemoryUsed, size + dataSize);
}
return terrno;
}
@ -178,7 +194,12 @@ void taosFreeQitem(void *pItem) {
if (pItem == NULL) return;
STaosQnode *pNode = (STaosQnode *)((char *)pItem - sizeof(STaosQnode));
int64_t alloced = atomic_sub_fetch_64(&tsQueueMemoryUsed, pNode->size + pNode->dataSize);
int64_t alloced = -1;
if (pNode->itype == RPC_QITEM) {
alloced = atomic_sub_fetch_64(&tsQueueMemoryUsed, pNode->size + pNode->dataSize);
} else if (pNode->itype == APPLY_QITEM) {
alloced = atomic_sub_fetch_64(&tsApplyMemoryUsed, pNode->size + pNode->dataSize);
}
uTrace("item:%p, node:%p is freed, alloc:%" PRId64, pItem, pNode, alloced);
taosMemoryFree(pNode);
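Net effect of the tqueue changes: RPC items and apply items are now charged against separate budgets (tsQueueMemoryUsed/tsQueueMemoryAllowed versus the new tsApplyMemoryUsed/tsApplyMemoryAllowed), and the refund in taosFreeQitem is keyed off the stored item type so the two counters stay balanced; a burst on one queue can no longer starve allocations on the other.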

View File

@ -1015,3 +1015,108 @@ taos> select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '
taos> select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(linear);
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(prev);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 5 | testts5941 |
2020-02-01 00:00:07.000 | 5 | testts5941 |
2020-02-01 00:00:08.000 | 5 | testts5941 |
2020-02-01 00:00:09.000 | 5 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 10 | testts5941 |
2020-02-01 00:00:12.000 | 10 | testts5941 |
2020-02-01 00:00:13.000 | 10 | testts5941 |
2020-02-01 00:00:14.000 | 10 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
2020-02-01 00:00:16.000 | 15 | testts5941 |
2020-02-01 00:00:17.000 | 15 | testts5941 |
2020-02-01 00:00:18.000 | 15 | testts5941 |
2020-02-01 00:00:19.000 | 15 | testts5941 |
2020-02-01 00:00:20.000 | 15 | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(next);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:00.000 | 5 | testts5941 |
2020-02-01 00:00:01.000 | 5 | testts5941 |
2020-02-01 00:00:02.000 | 5 | testts5941 |
2020-02-01 00:00:03.000 | 5 | testts5941 |
2020-02-01 00:00:04.000 | 5 | testts5941 |
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 10 | testts5941 |
2020-02-01 00:00:07.000 | 10 | testts5941 |
2020-02-01 00:00:08.000 | 10 | testts5941 |
2020-02-01 00:00:09.000 | 10 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 15 | testts5941 |
2020-02-01 00:00:12.000 | 15 | testts5941 |
2020-02-01 00:00:13.000 | 15 | testts5941 |
2020-02-01 00:00:14.000 | 15 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(linear);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 6 | testts5941 |
2020-02-01 00:00:07.000 | 7 | testts5941 |
2020-02-01 00:00:08.000 | 8 | testts5941 |
2020-02-01 00:00:09.000 | 9 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 11 | testts5941 |
2020-02-01 00:00:12.000 | 12 | testts5941 |
2020-02-01 00:00:13.000 | 13 | testts5941 |
2020-02-01 00:00:14.000 | 14 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(null);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:00.000 | NULL | testts5941 |
2020-02-01 00:00:01.000 | NULL | testts5941 |
2020-02-01 00:00:02.000 | NULL | testts5941 |
2020-02-01 00:00:03.000 | NULL | testts5941 |
2020-02-01 00:00:04.000 | NULL | testts5941 |
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | NULL | testts5941 |
2020-02-01 00:00:07.000 | NULL | testts5941 |
2020-02-01 00:00:08.000 | NULL | testts5941 |
2020-02-01 00:00:09.000 | NULL | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | NULL | testts5941 |
2020-02-01 00:00:12.000 | NULL | testts5941 |
2020-02-01 00:00:13.000 | NULL | testts5941 |
2020-02-01 00:00:14.000 | NULL | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
2020-02-01 00:00:16.000 | NULL | testts5941 |
2020-02-01 00:00:17.000 | NULL | testts5941 |
2020-02-01 00:00:18.000 | NULL | testts5941 |
2020-02-01 00:00:19.000 | NULL | testts5941 |
2020-02-01 00:00:20.000 | NULL | testts5941 |
taos> select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(value, 1);
_irowts | interp(c1) | t1 |
===========================================================================
2020-02-01 00:00:00.000 | 1 | testts5941 |
2020-02-01 00:00:01.000 | 1 | testts5941 |
2020-02-01 00:00:02.000 | 1 | testts5941 |
2020-02-01 00:00:03.000 | 1 | testts5941 |
2020-02-01 00:00:04.000 | 1 | testts5941 |
2020-02-01 00:00:05.000 | 5 | testts5941 |
2020-02-01 00:00:06.000 | 1 | testts5941 |
2020-02-01 00:00:07.000 | 1 | testts5941 |
2020-02-01 00:00:08.000 | 1 | testts5941 |
2020-02-01 00:00:09.000 | 1 | testts5941 |
2020-02-01 00:00:10.000 | 10 | testts5941 |
2020-02-01 00:00:11.000 | 1 | testts5941 |
2020-02-01 00:00:12.000 | 1 | testts5941 |
2020-02-01 00:00:13.000 | 1 | testts5941 |
2020-02-01 00:00:14.000 | 1 | testts5941 |
2020-02-01 00:00:15.000 | 15 | testts5941 |
2020-02-01 00:00:16.000 | 1 | testts5941 |
2020-02-01 00:00:17.000 | 1 | testts5941 |
2020-02-01 00:00:18.000 | 1 | testts5941 |
2020-02-01 00:00:19.000 | 1 | testts5941 |
2020-02-01 00:00:20.000 | 1 | testts5941 |


View File

@ -63,3 +63,8 @@ select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-0
select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(prev);
select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(next);
select _irowts, _isfilled, interp(c1) from test.td32861 where ts between '2020-01-01 00:00:22' and '2020-01-01 00:00:30' range('2020-01-01 00:00:00', '2020-01-01 00:00:21') every(1s) fill(linear);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(prev);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(next);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(linear);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(null);
select _irowts, interp(c1), t1 from test.ts5941_child range('2020-02-01 00:00:00', '2020-02-01 00:00:20') every(1s) fill(value, 1);

View File

@ -40,6 +40,9 @@ class TDTestCase(TBase):
)
tdSql.execute("create table if not exists test.td32861(ts timestamp, c1 int);")
tdSql.execute("create stable if not exists test.ts5941(ts timestamp, c1 int, c2 int) tags (t1 varchar(30));")
tdSql.execute("create table if not exists test.ts5941_child using test.ts5941 tags ('testts5941');")
tdLog.printNoPrefix("==========step2:insert data")
tdSql.execute(f"insert into test.td32727 values ('2020-02-01 00:00:05', 5, 5, 5, 5, 5.0, 5.0, true, 'varchar', 'nchar', 5, 5, 5, 5)")
@ -56,6 +59,9 @@ class TDTestCase(TBase):
('2020-01-01 00:00:15', 15),
('2020-01-01 00:00:21', 21);"""
)
tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:05', 5, 5)")
tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:10', 10, 10)")
tdSql.execute(f"insert into test.ts5941_child values ('2020-02-01 00:00:15', 15, 15)")
def test_normal_query_new(self, testCase):
# read sql from .sql file and execute

View File

@ -1,4 +1,5 @@
#!/bin/bash
set -e
pgrep taosd || taosd >> /dev/null 2>&1 &
pgrep taosadapter || taosadapter >> /dev/null 2>&1 &
@ -6,11 +7,12 @@ cd ../../docs/examples/java
mvn clean test > jdbc-out.log 2>&1
tail -n 20 jdbc-out.log
totalJDBCCases=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $2 }'`
failed=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $4 }'`
error=`grep 'Tests run' jdbc-out.log | awk -F"[:,]" 'END{ print $6 }'`
totalJDBCFailed=`expr $failed + $error`
totalJDBCSuccess=`expr $totalJDBCCases - $totalJDBCFailed`
totalJDBCFailed=$((failed + error))
totalJDBCSuccess=$((totalJDBCCases - totalJDBCFailed))
if [ "$totalJDBCSuccess" -gt "0" ]; then
echo -e "\n${GREEN} ### Total $totalJDBCSuccess JDBC case(s) succeed! ### ${NC}"
@ -19,4 +21,4 @@ fi
if [ "$totalJDBCFailed" -ne "0" ]; then
echo -e "\n${RED} ### Total $totalJDBCFailed JDBC case(s) failed! ### ${NC}"
exit 8
fi
fi

View File

@ -176,6 +176,11 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tsma2.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -R
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 2
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 3
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/tbnameIn.py -Q 4
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/nestedQuery2.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/interp_extension.py
,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/interp_extension.py -R

View File

@ -33,37 +33,6 @@ function printHelp() {
exit 0
}
# Initialization parameter
PROJECT_DIR=""
BRANCH=""
TEST_TYPE=""
SAVE_LOG="notsave"
# Parse command line parameters
while getopts "hb:d:t:s:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
t)
TEST_TYPE=$OPTARG
;;
s)
SAVE_LOG=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
function get_DIR() {
today=`date +"%Y%m%d"`
if [ -z "$PROJECT_DIR" ]; then
@ -102,13 +71,6 @@ function get_DIR() {
fi
}
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "BACKUP_DIR = $BACKUP_DIR"
function buildTDengine() {
print_color "$GREEN" "TDengine build start"
@ -118,14 +80,14 @@ function buildTDengine() {
# pull tdinternal code
cd "$TDENGINE_DIR/../"
print_color "$GREEN" "Git pull TDinternal code..."
git remote prune origin > /dev/null
git remote update > /dev/null
# git remote prune origin > /dev/null
# git remote update > /dev/null
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
git remote update > /dev/null
# git remote prune origin > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -137,12 +99,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
git checkout -- .
# git reset --hard
# git checkout -- .
git checkout $branch
git checkout -- .
git clean -f
git pull
# git checkout -- .
# git clean -f
# git pull
[ -d $TDENGINE_DIR/../debug ] || mkdir $TDENGINE_DIR/../debug
cd $TDENGINE_DIR/../debug
@ -155,15 +117,15 @@ function buildTDengine() {
print_color "$GREEN" "$makecmd"
$makecmd
make -j 8 install
make -j $(nproc) install
else
TDENGINE_DIR="$PROJECT_DIR"
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
git remote update > /dev/null
# git remote prune origin > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -175,12 +137,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
git checkout -- .
# git reset --hard
# git checkout -- .
git checkout $branch
git checkout -- .
git clean -f
git pull
# git checkout -- .
# git clean -f
# git pull
[ -d $TDENGINE_DIR/debug ] || mkdir $TDENGINE_DIR/debug
cd $TDENGINE_DIR/debug
@ -193,24 +155,12 @@ function buildTDengine() {
print_color "$GREEN" "$makecmd"
$makecmd
make -j 8 install
make -j $(nproc) install
fi
print_color "$GREEN" "TDengine build end"
}
# Check and get the branch name
if [ -n "$BRANCH" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test"
buildTDengine
else
print_color "$GREEN" "Build is not required for this test"
fi
function runCasesOneByOne () {
while read -r line; do
if [[ "$line" != "#"* ]]; then
@ -264,7 +214,7 @@ function runUnitTest() {
cd $BUILD_DIR
pgrep taosd || taosd >> /dev/null 2>&1 &
sleep 10
ctest -E "cunit_test" -j8
ctest -E "cunit_test" -j4
print_color "$GREEN" "3.0 unit test done"
}
@ -314,7 +264,6 @@ function runPythonCases() {
fi
}
function runTest() {
print_color "$GREEN" "run Test"
@ -344,7 +293,7 @@ function stopTaosd {
sleep 1
PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
done
print_color "$GREEN" "Stop tasod end"
print_color "$GREEN" "Stop taosd end"
}
function stopTaosadapter {
@ -357,10 +306,52 @@ function stopTaosadapter {
sleep 1
PID=`ps -ef|grep -w taosd | grep -v grep | awk '{print $2}'`
done
print_color "$GREEN" "Stop tasoadapter end"
print_color "$GREEN" "Stop taosadapter end"
}
######################
# main entry
######################
# Initialization parameter
PROJECT_DIR=""
BRANCH=""
TEST_TYPE=""
SAVE_LOG="notsave"
# Parse command line parameters
while getopts "hb:d:t:s:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
t)
TEST_TYPE=$OPTARG
;;
s)
SAVE_LOG=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "BACKUP_DIR = $BACKUP_DIR"
# Run all ci case
WORK_DIR=$TDENGINE_DIR
date >> $WORK_DIR/date.log
@ -368,6 +359,17 @@ print_color "$GREEN" "Run all ci test cases" | tee -a $WORK_DIR/date.log
stopTaosd
# Check and get the branch name
if [ -n "$BRANCH" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
else
print_color "$GREEN" "Build is not required for this test!"
fi
# Run different types of case
if [ -z "$TEST_TYPE" -o "$TEST_TYPE" = "all" -o "$TEST_TYPE" = "ALL" ]; then
runTest
elif [ "$TEST_TYPE" = "python" -o "$TEST_TYPE" = "PYTHON" ]; then

View File

@ -41,49 +41,6 @@ function printHelp() {
}
PROJECT_DIR=""
CAPTURE_GCDA_DIR=""
TEST_CASE="task"
UNIT_TEST_CASE=""
BRANCH=""
BRANCH_BUILD=""
LCOV_DIR="/usr/local/bin"
# Parse command line parameters
while getopts "hd:b:f:c:u:i:l:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
f)
CAPTURE_GCDA_DIR=$OPTARG
;;
c)
TEST_CASE=$OPTARG
;;
u)
UNIT_TEST_CASE=$OPTARG
;;
i)
BRANCH_BUILD=$OPTARG
;;
l)
LCOV_DIR=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
# Find the project/tdengine/build/capture directory
function get_DIR() {
today=`date +"%Y%m%d"`
@ -118,18 +75,6 @@ function get_DIR() {
}
# Show all parameters
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "CAPTURE_GCDA_DIR = $CAPTURE_GCDA_DIR"
echo "TEST_CASE = $TEST_CASE"
echo "UNIT_TEST_CASE = $UNIT_TEST_CASE"
echo "BRANCH_BUILD = $BRANCH_BUILD"
echo "LCOV_DIR = $LCOV_DIR"
function buildTDengine() {
print_color "$GREEN" "TDengine build start"
@ -139,14 +84,14 @@ function buildTDengine() {
# pull tdinternal code
cd "$TDENGINE_DIR/../"
print_color "$GREEN" "Git pull TDinternal code..."
git remote prune origin > /dev/null
git remote update > /dev/null
# git remote prune origin > /dev/null
# git remote update > /dev/null
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
git remote update > /dev/null
# git remote prune origin > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -158,12 +103,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
git checkout -- .
# git reset --hard
# git checkout -- .
git checkout $branch
git checkout -- .
git clean -f
git pull
# git checkout -- .
# git clean -f
# git pull
[ -d $TDENGINE_DIR/../debug ] || mkdir $TDENGINE_DIR/../debug
cd $TDENGINE_DIR/../debug
@ -183,8 +128,8 @@ function buildTDengine() {
# pull tdengine code
cd $TDENGINE_DIR
print_color "$GREEN" "Git pull TDengine code..."
git remote prune origin > /dev/null
git remote update > /dev/null
# git remote prune origin > /dev/null
# git remote update > /dev/null
REMOTE_COMMIT=`git rev-parse --short remotes/origin/$branch`
LOCAL_COMMIT=`git rev-parse --short @`
print_color "$GREEN" " LOCAL: $LOCAL_COMMIT"
@ -196,12 +141,12 @@ function buildTDengine() {
print_color "$GREEN" "Repo need to pull"
fi
git reset --hard
git checkout -- .
# git reset --hard
# git checkout -- .
git checkout $branch
git checkout -- .
git clean -f
git pull
# git checkout -- .
# git clean -f
# git pull
[ -d $TDENGINE_DIR/debug ] || mkdir $TDENGINE_DIR/debug
cd $TDENGINE_DIR/debug
@ -220,44 +165,6 @@ function buildTDengine() {
print_color "$GREEN" "TDengine build end"
}
# Check and get the branch name and build branch
if [ -n "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "YES" -o "$BRANCH_BUILD" = "yes" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "ONLY_INSTALL" -o "$BRANCH_BUILD" = "only_install" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "not build,only install!"
cd $TDENGINE_DIR/debug
make -j $(nproc) install
elif [ -z "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
print_color "$GREEN" "Build is not required for this test!"
fi
function runCasesOneByOne () {
while read -r line; do
if [[ "$line" != "#"* ]]; then
@ -481,10 +388,108 @@ function stopTaosadapter {
}
######################
# main entry
######################
# Initialization parameters
PROJECT_DIR=""
CAPTURE_GCDA_DIR=""
TEST_CASE="task"
UNIT_TEST_CASE=""
BRANCH=""
BRANCH_BUILD=""
LCOV_DIR="/usr/local/bin"
# Parse command line parameters
while getopts "hd:b:f:c:u:i:l:" arg; do
case $arg in
d)
PROJECT_DIR=$OPTARG
;;
b)
BRANCH=$OPTARG
;;
f)
CAPTURE_GCDA_DIR=$OPTARG
;;
c)
TEST_CASE=$OPTARG
;;
u)
UNIT_TEST_CASE=$OPTARG
;;
i)
BRANCH_BUILD=$OPTARG
;;
l)
LCOV_DIR=$OPTARG
;;
h)
printHelp
;;
?)
echo "Usage: ./$(basename $0) -h"
exit 1
;;
esac
done
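The coverage runner takes a wider option set than the CI script (-f capture directory, -c case file, -u unit-test case, -i build mode, -l lcov location). Illustrative calls, with the hypothetical script name run_local_coverage.sh and placeholder paths:

./run_local_coverage.sh -b main -f /path/to/gcda -c task   # check out and build main, capture from the given gcda dir
./run_local_coverage.sh -i only_install -l /usr/local/bin  # skip the build, just make install from the existing debug tree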
# Show all parameters
get_DIR
echo "PROJECT_DIR = $PROJECT_DIR"
echo "TDENGINE_DIR = $TDENGINE_DIR"
echo "BUILD_DIR = $BUILD_DIR"
echo "CAPTURE_GCDA_DIR = $CAPTURE_GCDA_DIR"
echo "TEST_CASE = $TEST_CASE"
echo "UNIT_TEST_CASE = $UNIT_TEST_CASE"
echo "BRANCH_BUILD = $BRANCH_BUILD"
echo "LCOV_DIR = $LCOV_DIR"
date >> $TDENGINE_DIR/date.log
print_color "$GREEN" "Run local coverage test cases" | tee -a $TDENGINE_DIR/date.log
# Check and get the branch name and build branch
if [ -n "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
branch="$BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "YES" -o "$BRANCH_BUILD" = "yes" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "Build is required for this test!"
buildTDengine
elif [ -n "$BRANCH_BUILD" ] && [ "$BRANCH_BUILD" = "ONLY_INSTALL" -o "$BRANCH_BUILD" = "only_install" ] ; then
CURRENT_DIR=$(pwd)
echo "CURRENT_DIR: $CURRENT_DIR"
if [ -d .git ]; then
CURRENT_BRANCH=$(git rev-parse --abbrev-ref HEAD)
echo "CURRENT_BRANCH: $CURRENT_BRANCH"
else
echo "The current directory is not a Git repository"
fi
branch="$CURRENT_BRANCH"
print_color "$GREEN" "Testing branch: $branch "
print_color "$GREEN" "not build,only install!"
cd $TDENGINE_DIR/debug
make -j $(nproc) install
elif [ -z "$BRANCH" ] && [ -z "$BRANCH_BUILD" ] ; then
print_color "$GREEN" "Build is not required for this test!"
fi
stopTaosd
runTest


@ -158,9 +158,21 @@ class TDTestCase:
tdSql.query(f'show grants;')
tdSql.checkEqual(len(tdSql.queryResult), 1)
infoFile.write(";".join(map(str,tdSql.queryResult[0])) + "\n")
tdLog.info(f"show grants: {tdSql.queryResult[0]}")
expireTimeStr=tdSql.queryResult[0][1]
serviceTimeStr=tdSql.queryResult[0][2]
tdLog.info(f"expireTimeStr: {expireTimeStr}, serviceTimeStr: {serviceTimeStr}")
expireTime = time.mktime(time.strptime(expireTimeStr, "%Y-%m-%d %H:%M:%S"))
serviceTime = time.mktime(time.strptime(serviceTimeStr, "%Y-%m-%d %H:%M:%S"))
tdLog.info(f"expireTime: {expireTime}, serviceTime: {serviceTime}")
tdSql.checkEqual(True, abs(expireTime - serviceTime - 864000) < 15)
tdSql.query(f'show grants full;')
tdSql.checkEqual(len(tdSql.queryResult), 31)
nGrantItems = 31
tdSql.checkEqual(len(tdSql.queryResult), nGrantItems)
tdSql.checkEqual(tdSql.queryResult[0][2], serviceTimeStr)
for i in range(1, nGrantItems):
tdSql.checkEqual(tdSql.queryResult[i][2], expireTimeStr)
if infoFile:
infoFile.flush()
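The 864000 in the assertion above is ten days in seconds, so the check asserts that the trial license's expire time sits ten days (within 15 s) after its service time. The same arithmetic in shell form, assuming GNU date (the timestamps are made-up examples):

expire_ts=$(date -d "2025-02-02 12:00:00" +%s)
service_ts=$(date -d "2025-01-23 12:00:00" +%s)
delta=$(( expire_ts - service_ts - 864000 ))   # 864000 s = 10 days
[ "${delta#-}" -lt 15 ] && echo "grant window is 10 days, as expected"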


@ -153,7 +153,7 @@ class TDTestCase:
tdSql.checkData(9, 1, '8')
tdSql.checkData(9, 2, 8)
tdSql.query('select * from d1.st order by ts limit 2;')
tdSql.query('select * from d1.st order by ts,pk limit 2;')
tdSql.checkRows(2)
tdSql.checkData(0, 0, datetime.datetime(2021, 4, 19, 0, 0))
tdSql.checkData(0, 1, '1')
@ -286,7 +286,7 @@ class TDTestCase:
tdSql.checkData(9, 1, '8')
tdSql.checkData(9, 2, 8)
tdSql.query('select * from d2.st order by ts limit 2;')
tdSql.query('select * from d2.st order by ts,pk limit 2;')
tdSql.checkRows(2)
tdSql.checkData(0, 0, datetime.datetime(2021, 4, 19, 0, 0))
tdSql.checkData(0, 1, '1')
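Both hunks in this file replace order by ts with order by ts,pk: when several rows share a timestamp, ORDER BY ts alone leaves the tie order unspecified, so LIMIT 2 could return either row and the fixed checkData assertions would flake. Adding the primary-key column makes the returned prefix deterministic; from the taos shell the stabilized query is simply:

taos -s "select * from d1.st order by ts, pk limit 2;"   # ties on ts now break on pk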


@ -0,0 +1,145 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################
# -*- coding: utf-8 -*-
import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *
class TDTestCase:
def init(self, conn, logSql, replicaVar=1):
self.replicaVar = int(replicaVar)
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor())
def inTest(self, dbname="db"):
tdSql.execute(f'drop database if exists {dbname}')
tdSql.execute(f'create database {dbname}')
tdSql.execute(f'use {dbname}')
tdSql.execute(f'CREATE STABLE {dbname}.`st1` (`ts` TIMESTAMP, `v1` INT) TAGS (`t1` INT);')
tdSql.execute(f'CREATE STABLE {dbname}.`st2` (`ts` TIMESTAMP, `v1` INT) TAGS (`t1` INT);')
tdSql.execute(f'CREATE TABLE {dbname}.`t11` USING {dbname}.`st1` (`t1`) TAGS (11);')
tdSql.execute(f'CREATE TABLE {dbname}.`t12` USING {dbname}.`st1` (`t1`) TAGS (12);')
tdSql.execute(f'CREATE TABLE {dbname}.`t21` USING {dbname}.`st2` (`t1`) TAGS (21);')
tdSql.execute(f'CREATE TABLE {dbname}.`t22` USING {dbname}.`st2` (`t1`) TAGS (22);')
tdSql.execute(f'CREATE TABLE {dbname}.`ta` (`ts` TIMESTAMP, `v1` INT);')
tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:01', 111 )")
tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:02', 112 )")
tdSql.execute(f"insert into {dbname}.t11 values ( '2025-01-21 00:11:03', 113 )")
tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:01', 121 )")
tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:02', 122 )")
tdSql.execute(f"insert into {dbname}.t12 values ( '2025-01-21 00:12:03', 123 )")
tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:01', 211 )")
tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:02', 212 )")
tdSql.execute(f"insert into {dbname}.t21 values ( '2025-01-21 00:21:03', 213 )")
tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:01', 221 )")
tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:02', 222 )")
tdSql.execute(f"insert into {dbname}.t22 values ( '2025-01-21 00:22:03', 223 )")
tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:01', 1 )")
tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:02', 2 )")
tdSql.execute(f"insert into {dbname}.ta values ( '2025-01-21 00:00:03', 3 )")
tdLog.debug(f"-------------- step1: normal table test ------------------")
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') and tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') or tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta') or tbname in ('tb');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21') and tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('ta', 't21') and tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.ta where tbname in ('t21') or tbname in ('ta');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:00:03')
tdSql.checkData(0, 1, 3)
tdLog.debug(f"-------------- step2: super table test ------------------")
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t11');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:11:03')
tdSql.checkData(0, 1, 113)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta', 't21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t21', 't12');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:12:03')
tdSql.checkData(0, 1, 123)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta') and tbname in ('t12');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12') or tbname in ('t11');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:12:03')
tdSql.checkData(0, 1, 123)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('ta') or tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12', 't21') and tbname in ('t21');")
tdSql.checkRows(0)
tdSql.query(f"select last(*) from {dbname}.st1 where tbname in ('t12', 't11') and tbname in ('t11');")
tdSql.checkRows(1)
tdSql.checkData(0, 0, '2025-01-21 00:11:03')
tdSql.checkData(0, 1, 113)
def run(self):
self.inTest()
def stop(self):
tdSql.close()
tdLog.success("%s successfully executed" % __file__)
tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
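The new case pins down tbname IN pruning: for a normal table the predicate matches only the table's own name, and for a super table it selects child tables, while names from other super tables simply match nothing. The two pivotal behaviors, runnable from the taos shell (database name db as in the test):

taos -s "select last(*) from db.ta  where tbname in ('t21');"         # ta is not named t21 -> 0 rows
taos -s "select last(*) from db.st1 where tbname in ('t21', 't12');"  # only t12 is a child of st1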