Merge branch '3.0' into enh/dnodelist

This commit is contained in:
Shengliang Guan 2024-10-31 17:40:01 +08:00
commit eedc020425
56 changed files with 2358 additions and 2269 deletions


@ -46,7 +46,7 @@ For more details on features, please read through the entire documentation.
By making full use of [characteristics of time series data](https://tdengine.com/characteristics-of-time-series-data/), TDengine differentiates itself from other time series databases with the following advantages.
- **[High-Performance](https://tdengine.com/high-performance/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while outperforming other time-series databases for data ingestion, querying and data compression.
- **[Simplified Solution](https://tdengine.com/comprehensive-industrial-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.


@ -1187,7 +1187,7 @@ CSUM(expr)
### DERIVATIVE
```sql
DERIVATIVE(expr, time_interval, ignore_negative)
ignore_negative: {
0


@ -41,7 +41,7 @@ In this article, it specifically refers to the level within the secondary compre
### Create Table with Compression
```sql
CREATE [dbname.]tabname (colName colType [ENCODE 'encode_type'] [COMPRESS 'compress_type' [LEVEL 'level'], [, other create_definition]...])
```
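A concrete sketch of this syntax follows; the table, column, encode, and compress names below are illustrative assumptions, not values defined in this section:

```sql
-- Hypothetical table with per-column encoding and compression settings
CREATE TABLE power.meters0 (
    ts      TIMESTAMP ENCODE 'delta-i'  COMPRESS 'lz4'  LEVEL 'medium',
    current FLOAT     ENCODE 'delta-d'  COMPRESS 'lz4'  LEVEL 'medium',
    voltage INT       ENCODE 'simple8b' COMPRESS 'zstd' LEVEL 'high'
);
```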
**Parameter Description**
@ -58,7 +58,7 @@ CREATE [dbname.]tabname (colName colType [ENCODE 'encode_type'] [COMPRESS 'compr
### Change Compression Method
```sql
ALTER TABLE [db_name.]tabName MODIFY COLUMN colName [ENCODE 'encode_type'] [COMPRESS 'compress_type'] [LEVEL "high"]
```
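A sketch of changing the compression of an existing column (table and column names are illustrative assumptions):

```sql
-- Hypothetically switch the voltage column to zstd compression at high level
ALTER TABLE power.meters0 MODIFY COLUMN voltage ENCODE 'simple8b' COMPRESS 'zstd' LEVEL 'high';
```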
**Parameter Description**


@ -125,7 +125,7 @@ where `TOKEN` is the string after Base64 encoding of `{username}:{password}`, e.
Starting from `TDengine 3.0.3.0`, `taosAdapter` provides a configuration parameter `httpCodeServerError` to set whether to return a non-200 HTTP status code when the C interface returns an error.
| **Description** | **httpCodeServerError false** | **httpCodeServerError true** |
|--------------------|----------------------------------|---------------------------------------|
| taos_errno() returns 0 | 200 | 200 |
| taos_errno() returns non-0 | 200 (except authentication error) | 500 (except authentication error and 400/502 error) |
| Parameter error | 400 (only handle HTTP request URL parameter error) | 400 (handle HTTP request URL parameter error and taosd return error) |
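As a sketch of building the `TOKEN`, assuming the default `root:taosdata` credentials (substitute your own account):

```shell
# Base64-encode "{username}:{password}" to get the TOKEN used in the
# Authorization header of taosAdapter REST requests.
TOKEN=$(printf '%s' 'root:taosdata' | base64)
echo "$TOKEN"   # cm9vdDp0YW9zZGF0YQ==
```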


@ -701,15 +701,6 @@ The charset that takes effect is UTF-8.
| Type | String |
| Default Value | _tag_null |
### smlTsDefaultName
| Attribute | Description |
@ -719,6 +710,16 @@ The charset that takes effect is UTF-8.
| Type | String |
| Default Value | _ts |
### smlDot2Underline
| Attribute | Description |
| -------- | -------------------------------------------------------- |
| Applicable | Client only |
| Meaning | Convert the dot in the supertable name to an underscore |
| Type | Bool |
| Default Value | true |
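A minimal sketch of what this option does to a name (plain string handling, not the actual client implementation):

```python
def sml_dot2underline(stable_name: str, enabled: bool = True) -> str:
    """Mimic smlDot2Underline: replace dots in a schemaless
    supertable name with underscores when the option is enabled."""
    return stable_name.replace(".", "_") if enabled else stable_name

print(sml_dot2underline("factory.meters"))                 # factory_meters
print(sml_dot2underline("factory.meters", enabled=False))  # factory.meters
```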
## Compress Parameters
### compressMsgSize


@ -4,7 +4,7 @@ sidebar_label: Load Balance
description: This document describes how TDengine implements load balancing.
---
Load balancing in TDengine is mainly about distributing time series data. TDengine employs a built-in hash algorithm to distribute all the tables, subtables, and their data of a database across all the vgroups that belong to the database. Each table or subtable can only be handled by a single vgroup, while each vgroup can process multiple tables or subtables.
The number of vgroups can be specified when creating a database, using the parameter `vgroups`.
@ -12,10 +12,10 @@ The number of vgroup can be specified when creating a database, using the parame
create database db0 vgroups 100;
```
The proper value of `vgroups` depends on available system resources. Assuming there is only one database to be created in the system, the number of `vgroups` is determined by the available resources from all dnodes. In principle, more vgroups can be created if you have more CPU and memory. Disk I/O is another important factor to consider: once disk I/O becomes the bottleneck, more vgroups may degrade system performance significantly. If multiple databases are to be created in the system, the total number of `vgroups` across all the databases depends on the available system resources. Be careful when distributing vgroups among these databases: consider the number of tables, data writing frequency, and size of each data row for all these databases. A recommended practice is to first choose a starting number for `vgroups`, for example double the number of CPU cores, then adjust and optimize system configurations to find the best setting for `vgroups`, and finally distribute these vgroups among the databases.
Furthermore, TDengine distributes the vgroups of each database equally among all dnodes. With replica 3 the distribution is even more complex, but TDengine tries its best to prevent any dnode from becoming a bottleneck.
TDengine utilizes the above methods to achieve load balance in a cluster and, ultimately, higher throughput.
Once load balance is achieved, operations like deleting tables or dropping databases may make the load across dnodes imbalanced again; a rebalancing method will be provided in later versions. However, even without explicit rebalancing, TDengine will try its best to reach a new balance without manual interference when a new database is created.


@ -20,21 +20,22 @@ taosBenchmark --start-timestamp=1600000000000 --tables=100 --records=10000000 --
```sql
SELECT * FROM meters
WHERE voltage > 230
ORDER BY ts DESC
LIMIT 5;
```
上面的 SQL从超级表 `meters` 中查询出电压 `voltage` 大于 10 的记录,按时间降序排列,且仅输出前 5 行。查询结果如下:
上面的 SQL从超级表 `meters` 中查询出电压 `voltage` 大于 230 的记录,按时间降序排列,且仅输出前 5 行。查询结果如下:
```text
ts | current | voltage | phase | groupid | location |
===================================================================================================
2023-11-15 06:13:10.000 | 14.0601978 | 232 | 146.5000000 | 10 | California.Sunnyvale |
2023-11-15 06:13:10.000 | 14.0601978 | 232 | 146.5000000 | 1 | California.LosAngles |
2023-11-15 06:13:10.000 | 14.0601978 | 232 | 146.5000000 | 10 | California.Sunnyvale |
2023-11-15 06:13:10.000 | 14.0601978 | 232 | 146.5000000 | 5 | California.Cupertino |
2023-11-15 06:13:10.000 | 14.0601978 | 232 | 146.5000000 | 4 | California.SanFrancisco |
Query OK, 5 row(s) in set (0.145403s)
```
## Aggregate Queries
@ -48,28 +49,28 @@ TDengine supports aggregate queries on data through the GROUP BY clause. The SQL statement
The GROUP BY clause groups data and returns one summary row per group. Any column of a table or view can be used as the grouping key in the GROUP BY clause, and these columns do not need to appear in the SELECT list. In addition, users can run aggregate queries directly on a supertable without creating subtables in advance. Taking the smart-meter data model as an example, a SQL statement using the GROUP BY clause looks like this:
```sql
SELECT groupid, avg(voltage)
FROM meters
WHERE ts >= "2022-01-01T00:00:00+08:00"
AND ts < "2023-01-01T00:00:00+08:00"
GROUP BY groupid;
```
上面的 SQL查询超级表 `meters` 中,时间戳大于等于 `2022-01-01T00:00:00+08:00`,且时间戳小于 `2023-01-01T00:00:00+08:00` 的数据,按照 `groupid` 进行分组,求每组的平均电压。查询结果如下:
```text
groupid | avg(voltage) |
======================================
8 | 243.961981544901079 |
5 | 243.961981544901079 |
1 | 243.961981544901079 |
7 | 243.961981544901079 |
9 | 243.961981544901079 |
6 | 243.961981544901079 |
4 | 243.961981544901079 |
10 | 243.961981544901079 |
2 | 243.961981544901079 |
3 | 243.961981544901079 |
Query OK, 10 row(s) in set (0.042446s)
```
@ -110,24 +111,24 @@ TDengine processes the data partitioning clause as follows.
```sql
SELECT location, avg(voltage)
FROM meters
PARTITION BY location;
```
The example SQL above queries the supertable `meters`, partitions the data by the tag `location`, and computes the average voltage of each partition. The query results are as follows:
```text
location | avg(voltage) |
======================================================
California.SantaClara | 243.962050000000005 |
California.SanFrancisco | 243.962050000000005 |
California.SanJose | 243.962050000000005 |
California.LosAngles | 243.962050000000005 |
California.SanDiego | 243.962050000000005 |
California.Sunnyvale | 243.962050000000005 |
California.PaloAlto | 243.962050000000005 |
California.Cupertino | 243.962050000000005 |
California.MountainView | 243.962050000000005 |
California.Campbell | 243.962050000000005 |
Query OK, 10 row(s) in set (2.415961s)
```
@ -200,20 +201,20 @@ SLIMIT 2;
上面的 SQL查询超级表 `meters` 中,时间戳大于等于 `2022-01-01T00:00:00+08:00`,且时间戳小于 `2022-01-01T00:05:00+08:00` 的数据;数据首先按照子表名 `tbname` 进行数据切分,再按照每 1 分钟的时间窗口进行切分,且每个时间窗口向后偏移 5 秒;最后,仅取前 2 个分片的数据作为结果。查询结果如下:
```text
tbname | _wstart | _wend | avg(voltage) |
======================================================================================
d2 | 2021-12-31 23:59:05.000 | 2022-01-01 00:00:05.000 | 253.000000000000000 |
d2 | 2022-01-01 00:00:05.000 | 2022-01-01 00:01:05.000 | 244.166666666666657 |
d2 | 2022-01-01 00:01:05.000 | 2022-01-01 00:02:05.000 | 241.833333333333343 |
d2 | 2022-01-01 00:02:05.000 | 2022-01-01 00:03:05.000 | 243.166666666666657 |
d2 | 2022-01-01 00:03:05.000 | 2022-01-01 00:04:05.000 | 240.833333333333343 |
d2 | 2022-01-01 00:04:05.000 | 2022-01-01 00:05:05.000 | 244.800000000000011 |
d26 | 2021-12-31 23:59:05.000 | 2022-01-01 00:00:05.000 | 253.000000000000000 |
d26 | 2022-01-01 00:00:05.000 | 2022-01-01 00:01:05.000 | 244.166666666666657 |
d26 | 2022-01-01 00:01:05.000 | 2022-01-01 00:02:05.000 | 241.833333333333343 |
d26 | 2022-01-01 00:02:05.000 | 2022-01-01 00:03:05.000 | 243.166666666666657 |
d26 | 2022-01-01 00:03:05.000 | 2022-01-01 00:04:05.000 | 240.833333333333343 |
d26 | 2022-01-01 00:04:05.000 | 2022-01-01 00:05:05.000 | 244.800000000000011 |
Query OK, 12 row(s) in set (0.021265s)
```
@ -255,19 +256,19 @@ SLIMIT 1;
上面的 SQL查询超级表 `meters` 中,时间戳大于等于 `2022-01-01T00:00:00+08:00`,且时间戳小于 `2022-01-01T00:05:00+08:00` 的数据,数据首先按照子表名 `tbname` 进行数据切分,再按照每 1 分钟的时间窗口进行切分,且时间窗口按照 30 秒进行滑动;最后,仅取前 1 个分片的数据作为结果。查询结果如下:
```text
tbname | _wstart | avg(voltage) |
=============================================================
d2 | 2021-12-31 23:59:30.000 | 248.333333333333343 |
d2 | 2022-01-01 00:00:00.000 | 246.000000000000000 |
d2 | 2022-01-01 00:00:30.000 | 244.666666666666657 |
d2 | 2022-01-01 00:01:00.000 | 240.833333333333343 |
d2 | 2022-01-01 00:01:30.000 | 239.500000000000000 |
d2 | 2022-01-01 00:02:00.000 | 243.833333333333343 |
d2 | 2022-01-01 00:02:30.000 | 243.833333333333343 |
d2 | 2022-01-01 00:03:00.000 | 241.333333333333343 |
d2 | 2022-01-01 00:03:30.000 | 241.666666666666657 |
d2 | 2022-01-01 00:04:00.000 | 244.166666666666657 |
d2 | 2022-01-01 00:04:30.000 | 244.666666666666657 |
Query OK, 11 row(s) in set (0.013153s)
```
@ -290,13 +291,13 @@ SLIMIT 1;
上面的 SQL查询超级表 `meters` 中,时间戳大于等于 `2022-01-01T00:00:00+08:00`,且时间戳小于 `2022-01-01T00:05:00+08:00` 的数据,数据首先按照子表名 `tbname` 进行数据切分,再按照每 1 分钟的时间窗口进行切分,且时间窗口按照 1 分钟进行切分;最后,仅取前 1 个分片的数据作为结果。查询结果如下:
```text
tbname | _wstart | _wend | avg(voltage) |
======================================================================================
d2 | 2022-01-01 00:00:00.000 | 2022-01-01 00:01:00.000 | 246.000000000000000 |
d2 | 2022-01-01 00:01:00.000 | 2022-01-01 00:02:00.000 | 240.833333333333343 |
d2 | 2022-01-01 00:02:00.000 | 2022-01-01 00:03:00.000 | 243.833333333333343 |
d2 | 2022-01-01 00:03:00.000 | 2022-01-01 00:04:00.000 | 241.333333333333343 |
d2 | 2022-01-01 00:04:00.000 | 2022-01-01 00:05:00.000 | 244.166666666666657 |
Query OK, 5 row(s) in set (0.016812s)
```
@ -342,53 +343,65 @@ SLIMIT 2;
上面的 SQL查询超级表 `meters` 中,时间戳大于等于 `2022-01-01T00:00:00+08:00`,且时间戳小于 `2022-01-01T00:05:00+08:00` 的数据;数据首先按照子表名 `tbname` 进行数据切分,再按照每 1 分钟的时间窗口进行切分,如果窗口内的数据出现缺失,则使用使用前一个非 NULL 值填充数据;最后,仅取前 2 个分片的数据作为结果。查询结果如下:
```text
tbname | _wstart | _wend | avg(voltage) |
=======================================================================================
d2 | 2022-01-01 00:00:00.000 | 2022-01-01 00:01:00.000 | 246.000000000000000 |
d2 | 2022-01-01 00:01:00.000 | 2022-01-01 00:02:00.000 | 240.833333333333343 |
d2 | 2022-01-01 00:02:00.000 | 2022-01-01 00:03:00.000 | 243.833333333333343 |
d2 | 2022-01-01 00:03:00.000 | 2022-01-01 00:04:00.000 | 241.333333333333343 |
d2 | 2022-01-01 00:04:00.000 | 2022-01-01 00:05:00.000 | 244.166666666666657 |
d26 | 2022-01-01 00:00:00.000 | 2022-01-01 00:01:00.000 | 246.000000000000000 |
d26 | 2022-01-01 00:01:00.000 | 2022-01-01 00:02:00.000 | 240.833333333333343 |
d26 | 2022-01-01 00:02:00.000 | 2022-01-01 00:03:00.000 | 243.833333333333343 |
d26 | 2022-01-01 00:03:00.000 | 2022-01-01 00:04:00.000 | 241.333333333333343 |
d26 | 2022-01-01 00:04:00.000 | 2022-01-01 00:05:00.000 | 244.166666666666657 |
Query OK, 10 row(s) in set (0.022866s)
```
### State Window
使用整数布尔值或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口数值改变后该窗口关闭。TDengine 还支持将 CASE 表达式用在状态量,可以表达某个状态的开始是由满足某个条件而触发,这个状态的结束是由另外一个条件满足而触发的语义。以智能电表为例,电压正常范围是 205V 到 235V那么可以通过监控电压来判断电路是否正常。
使用整数布尔值或字符串来标识产生记录时候设备的状态量。产生的记录如果具有相同的状态量数值则归属于同一个状态窗口数值改变后该窗口关闭。TDengine 还支持将 CASE 表达式用在状态量,可以表达某个状态的开始是由满足某个条件而触发,这个状态的结束是由另外一个条件满足而触发的语义。以智能电表为例,电压正常范围是 225V 到 235V那么可以通过监控电压来判断电路是否正常。
```sql
SELECT tbname, _wstart, _wend, _wduration, CASE WHEN voltage >= 225 and voltage <= 235 THEN 1 ELSE 0 END status
FROM meters
WHERE ts >= "2022-01-01T00:00:00+08:00"
AND ts < "2022-01-01T00:05:00+08:00"
PARTITION BY tbname
STATE_WINDOW(
CASE WHEN voltage >= 225 and voltage <= 235 THEN 1 ELSE 0 END
)
SLIMIT 2;
```
以上 SQL查询超级表 meters 中,时间戳大于等于 2022-01-01T00:00:00+08:00且时间戳小于 2022-01-01T00:05:00+08:00的数据数据首先按照子表名 tbname 进行数据切分;根据电压是否在正常范围内进行状态窗口的划分;最后,取前 10 个分片的数据作为结果。查询结果如下:
以上 SQL查询超级表 meters 中,时间戳大于等于 2022-01-01T00:00:00+08:00且时间戳小于 2022-01-01T00:05:00+08:00的数据数据首先按照子表名 tbname 进行数据切分;根据电压是否在正常范围内进行状态窗口的划分;最后,取前 2 个分片的数据作为结果。查询结果如下:(由于数据是随机生成,结果集包含的数据条数会有不同)
```text
tbname | _wstart | _wend | _wduration | status |
===============================================================================================
d2 | 2022-01-01 00:00:00.000 | 2022-01-01 00:01:20.000 | 80000 | 0 |
d2 | 2022-01-01 00:01:30.000 | 2022-01-01 00:01:30.000 | 0 | 1 |
d2 | 2022-01-01 00:01:40.000 | 2022-01-01 00:01:40.000 | 0 | 0 |
d2 | 2022-01-01 00:01:50.000 | 2022-01-01 00:01:50.000 | 0 | 1 |
d2 | 2022-01-01 00:02:00.000 | 2022-01-01 00:02:20.000 | 20000 | 0 |
d2 | 2022-01-01 00:02:30.000 | 2022-01-01 00:02:30.000 | 0 | 1 |
d2 | 2022-01-01 00:02:40.000 | 2022-01-01 00:03:00.000 | 20000 | 0 |
d2 | 2022-01-01 00:03:10.000 | 2022-01-01 00:03:10.000 | 0 | 1 |
d2 | 2022-01-01 00:03:20.000 | 2022-01-01 00:03:40.000 | 20000 | 0 |
d2 | 2022-01-01 00:03:50.000 | 2022-01-01 00:03:50.000 | 0 | 1 |
d2 | 2022-01-01 00:04:00.000 | 2022-01-01 00:04:50.000 | 50000 | 0 |
d26 | 2022-01-01 00:00:00.000 | 2022-01-01 00:01:20.000 | 80000 | 0 |
d26 | 2022-01-01 00:01:30.000 | 2022-01-01 00:01:30.000 | 0 | 1 |
d26 | 2022-01-01 00:01:40.000 | 2022-01-01 00:01:40.000 | 0 | 0 |
d26 | 2022-01-01 00:01:50.000 | 2022-01-01 00:01:50.000 | 0 | 1 |
d26 | 2022-01-01 00:02:00.000 | 2022-01-01 00:02:20.000 | 20000 | 0 |
d26 | 2022-01-01 00:02:30.000 | 2022-01-01 00:02:30.000 | 0 | 1 |
d26 | 2022-01-01 00:02:40.000 | 2022-01-01 00:03:00.000 | 20000 | 0 |
d26 | 2022-01-01 00:03:10.000 | 2022-01-01 00:03:10.000 | 0 | 1 |
d26 | 2022-01-01 00:03:20.000 | 2022-01-01 00:03:40.000 | 20000 | 0 |
d26 | 2022-01-01 00:03:50.000 | 2022-01-01 00:03:50.000 | 0 | 1 |
d26 | 2022-01-01 00:04:00.000 | 2022-01-01 00:04:50.000 | 50000 | 0 |
Query OK, 22 row(s) in set (0.153403s)
```
### Session Window
@ -417,18 +430,18 @@ SLIMIT 10;
上面的 SQL查询超级表 meters 中,时间戳大于等于 2022-01-01T00:00:00+08:00且时间戳小于 2022-01-01T00:10:00+08:00的数据数据先按照子表名 tbname 进行数据切分,再根据 10 分钟的会话窗口进行切分;最后,取前 10 个分片的数据作为结果,返回子表名、窗口开始时间、窗口结束时间、窗口宽度、窗口内数据条数。查询结果如下:
```text
tbname | _wstart | _wend | _wduration | count(*) |
===============================================================================================
d2 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d26 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d52 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d64 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d76 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d28 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d4 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d88 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d77 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
d54 | 2022-01-01 00:00:00.000 | 2022-01-01 00:09:50.000 | 590000 | 60 |
Query OK, 10 row(s) in set (0.043489s)
```
@ -458,26 +471,26 @@ FROM meters
WHERE ts >= "2022-01-01T00:00:00+08:00"
AND ts < "2022-01-01T00:10:00+08:00"
PARTITION BY tbname
EVENT_WINDOW START WITH voltage >= 225 END WITH voltage < 235
LIMIT 5;
```
上面的 SQL查询超级表meters中时间戳大于等于2022-01-01T00:00:00+08:00且时间戳小于2022-01-01T00:10:00+08:00的数据数据先按照子表名tbname进行数据切分再根据事件窗口条件电压大于等于 10V且小于 20V 进行切分;最后,取前 10 行的数据作为结果,返回子表名、窗口开始时间、窗口结束时间、窗口宽度、窗口内数据条数。查询结果如下:
上面的 SQL查询超级表meters中时间戳大于等于2022-01-01T00:00:00+08:00且时间戳小于2022-01-01T00:10:00+08:00的数据数据先按照子表名tbname进行数据切分再根据事件窗口条件电压大于等于 225V且小于 235V 进行切分;最后,取每个分片的前 5 行的数据作为结果,返回子表名、窗口开始时间、窗口结束时间、窗口宽度、窗口内数据条数。查询结果如下:
```text
tbname | _wstart | _wend | _wduration | count(*) |
==============================================================================================
d0 | 2022-01-01 00:00:00.000 | 2022-01-01 00:01:30.000 | 90000 | 10 |
d0 | 2022-01-01 00:01:40.000 | 2022-01-01 00:02:30.000 | 50000 | 6 |
d0 | 2022-01-01 00:02:40.000 | 2022-01-01 00:03:10.000 | 30000 | 4 |
d0 | 2022-01-01 00:03:20.000 | 2022-01-01 00:07:10.000 | 230000 | 24 |
d0 | 2022-01-01 00:07:20.000 | 2022-01-01 00:07:50.000 | 30000 | 4 |
d1 | 2022-01-01 00:00:00.000 | 2022-01-01 00:01:30.000 | 90000 | 10 |
d1 | 2022-01-01 00:01:40.000 | 2022-01-01 00:02:30.000 | 50000 | 6 |
d1 | 2022-01-01 00:02:40.000 | 2022-01-01 00:03:10.000 | 30000 | 4 |
d1 | 2022-01-01 00:03:20.000 | 2022-01-01 00:07:10.000 | 230000 | 24 |
……
Query OK, 500 row(s) in set (0.328557s)
```
### Count Window
@ -492,17 +505,25 @@ sliding_val is a constant indicating how many rows the window slides by, similar to SLIDING of interval
select _wstart, _wend, count(*)
from meters
where ts >= "2022-01-01T00:00:00+08:00" and ts < "2022-01-01T00:30:00+08:00"
count_window(1000);
```
The SQL above queries the supertable meters for data with timestamps greater than or equal to 2022-01-01T00:00:00+08:00 and less than 2022-01-01T00:30:00+08:00. Every 1000 rows of data form a group, and the start time, end time, and row count of each group are returned. The query results are as follows:
```text
_wstart | _wend | count(*) |
=====================================================================
2022-01-01 00:00:00.000 | 2022-01-01 00:01:30.000 | 1000 |
2022-01-01 00:01:40.000 | 2022-01-01 00:03:10.000 | 1000 |
2022-01-01 00:03:20.000 | 2022-01-01 00:04:50.000 | 1000 |
2022-01-01 00:05:00.000 | 2022-01-01 00:06:30.000 | 1000 |
2022-01-01 00:06:40.000 | 2022-01-01 00:08:10.000 | 1000 |
2022-01-01 00:08:20.000 | 2022-01-01 00:09:50.000 | 1000 |
2022-01-01 00:10:00.000 | 2022-01-01 00:11:30.000 | 1000 |
2022-01-01 00:11:40.000 | 2022-01-01 00:13:10.000 | 1000 |
2022-01-01 00:13:20.000 | 2022-01-01 00:14:50.000 | 1000 |
2022-01-01 00:15:00.000 | 2022-01-01 00:16:30.000 | 1000 |
Query OK, 10 row(s) in set (0.062794s)
```
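A sliding count window can be sketched as follows; this is a hypothetical query assuming the `count_window(count_val, sliding_val)` form described above, with windows of 1000 rows advancing 500 rows at a time:

```sql
SELECT _wstart, _wend, COUNT(*)
FROM meters
WHERE ts >= "2022-01-01T00:00:00+08:00" AND ts < "2022-01-01T00:30:00+08:00"
COUNT_WINDOW(1000, 500);
```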
## Functions Specific to Time-Series Data
@ -563,14 +584,14 @@ UNION ALL
上面的 SQL分别查询子表 d1 的 1 条数据,子表 d11 的 2 条数据,子表 d21 的 3 条数据,并将结果合并。返回的结果如下:
```text
tbname | ts | current | voltage | phase |
====================================================================================
d11 | 2020-09-13 20:26:40.000 | 11.5680809 | 247 | 146.5000000 |
d11 | 2020-09-13 20:26:50.000 | 14.2392311 | 234 | 148.0000000 |
d1 | 2020-09-13 20:26:40.000 | 11.5680809 | 247 | 146.5000000 |
d21 | 2020-09-13 20:26:40.000 | 11.5680809 | 247 | 146.5000000 |
d21 | 2020-09-13 20:26:50.000 | 14.2392311 | 234 | 148.0000000 |
d21 | 2020-09-13 20:27:00.000 | 10.0999422 | 251 | 146.0000000 |
Query OK, 6 row(s) in set (0.006438s)
```


@ -54,10 +54,10 @@ TDengine uses these log files to restore the state prior to a failure. When writing to the WAL
The database parameters wal_level and wal_fsync_period jointly determine how the WAL is persisted.
- wal_level此参数控制 WAL 的保存级别。级别 1 表示仅将数据写入 WAL但不立即执行 fsync 函数;级别 2 则表示在写入 WAL 的同时执行 fsync 函数。默认情况下wal_level 设为 1。虽然执行 fsync 函数可以提高数据的持久性,但相应地也会降低写入性能。
- wal_fsync_period当 wal_level 设置为 1 时,这个参数控制执行 fsync 的频率。设置为 0 表示每次写入后立即执行 fsync这可以确保数据的安全性但可能会牺牲一些性能。当设置为大于 0 的数值时,表示 fsync 周期,默认为 3000范围是[1 180000],单位毫秒。
- wal_fsync_period当 wal_level 设置为 2 时,这个参数控制执行 fsync 的频率。设置为 0 表示每次写入后立即执行 fsync这可以确保数据的安全性但可能会牺牲一些性能。当设置为大于 0 的数值时,表示 fsync 周期,默认为 3000范围是[1 180000],单位毫秒。
```sql
CREATE DATABASE POWER WAL_LEVEL 2 WAL_FSYNC_PERIOD 3000;
```
When creating a database, different parameter values can be chosen to prioritize either performance or reliability.


@ -180,6 +180,7 @@ The valid value of charset is UTF-8.
| tmrDebugFlag | Log switch of the timer module; same value range as above |
| uDebugFlag | Log switch of the common utility module; same value range as above |
| rpcDebugFlag | Log switch of the rpc module; same value range as above |
| cDebugFlag | Log switch of the client module; same value range as above |
| jniDebugFlag | Log switch of the jni module; same value range as above |
| qDebugFlag | Log switch of the query module; same value range as above |
| dDebugFlag | Log switch of the dnode module; same value range as above, default value 135 |


@ -35,6 +35,7 @@ The TDengine client driver provides all the APIs needed for application programming, and in
|smlAutoChildTableNameDelimiter | Delimiter joining schemaless tags; the joined result is used as the child table name; no default value |
|smlTagName | Default tag name when the schemaless tag is empty; default value "_tag_null" |
|smlTsDefaultName | Name of the timestamp column in schemaless auto-created tables, set via this option; default value "_ts" |
|smlDot2Underline | Whether schemaless converts dots in the supertable name to underscores |
|enableCoreFile | Whether to generate a core file on crash; 0: do not generate, 1: generate; default value: 1 |
|enableScience | Whether to display floating-point numbers in scientific notation; 0: disabled, 1: enabled; default value: 1 |
|compressMsgSize | Whether to compress RPC messages; -1: no messages are compressed; 0: all messages are compressed; N (N>0): only messages larger than N bytes are compressed; default value: -1 |


@ -364,7 +364,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **max**: maximum value for the column/tag of this data type. Generated values will be less than the max.
- **scalingFactor**: floating-point precision scaling factor, effective only when the data type is float/double; the valid range is positive integers from 1 to 1000000. It increases the precision of generated floating-point numbers, which is especially useful when the min or max values are small. The property adds decimal precision in powers of 10: a scalingFactor of 10 adds 1 decimal digit of precision, 100 adds 2, and so on.
- **fun**: fill this column using a function; currently only the sin and cos functions are supported. The input parameter is the timestamp converted to an angle: angle x = input timestamp column (ts) value % 360. Coefficient scaling and random fluctuation factors are also supported, expressed in a fixed format such as fun="10\*sin(x)+100\*random(5)", where x is the angle, ranging from 0 to 360 degrees, increasing with the same step as the timestamp column. 10 is the multiplication coefficient, 100 is the addition/subtraction coefficient, and 5 means the fluctuation stays within a random range of 5%. Supported data types are int, bigint, float, and double. Note: the expression follows a fixed pattern and cannot be reordered.


@ -291,3 +291,4 @@ RESUME STREAM [IF EXISTS] [IGNORE UNTREATED] stream_name;
CREATE SNODE ON DNODE [id]
```
Here, id is the sequence number of a dnode in the cluster. Note that on the chosen dnode, the intermediate state of stream processing will automatically be backed up.
Starting from version 3.3.4.0, creating a stream in a multi-replica environment performs an **existence check** for the snode, which requires the snode to be created first. If no snode exists, the stream cannot be created.


@ -34,8 +34,8 @@ TDengine version updates often add new features; the connectors in the list
| **3.0.0.0 and above** | 3.0.2 and above | current version | 3.0 branch | 3.0.0 | 3.1.0 | current version | same version as TDengine |
| **2.4.0.14 and above** | 2.0.38 | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version | same version as TDengine |
| **2.4.0.4 - 2.4.0.13** | 2.0.37 | current version | develop branch | 1.0.2 - 1.0.6 | 2.0.10 - 2.0.12 | current version | same version as TDengine |
| **2.2.x.x ** | 2.0.36 | current version | master branch | n/a | 2.0.7 - 2.0.9 | current version | same version as TDengine |
| **2.0.x.x ** | 2.0.34 | current version | master branch | n/a | 2.0.1 - 2.0.6 | current version | same version as TDengine |
| **2.2.x.x** | 2.0.36 | current version | master branch | n/a | 2.0.7 - 2.0.9 | current version | same version as TDengine |
| **2.0.x.x** | 2.0.34 | current version | master branch | n/a | 2.0.1 - 2.0.6 | current version | same version as TDengine |
## Features


@ -92,6 +92,26 @@ extern "C" {
} \
}
#define SML_CHECK_CODE(CMD) \
code = (CMD); \
if (TSDB_CODE_SUCCESS != code) { \
lino = __LINE__; \
goto END; \
}
#define SML_CHECK_NULL(CMD) \
if (NULL == (CMD)) { \
code = terrno; \
lino = __LINE__; \
goto END; \
}
#define RETURN \
if (code != 0){ \
uError("%s failed code:%d line:%d", __FUNCTION__ , code, lino); \
} \
return code;
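The macros above implement a single-exit error-handling pattern: check every call, record the failing line, and do all cleanup at one END label. The following self-contained sketch shows the same idea with simplified stand-in macros (the real ones use terrno and uError):

```c
#include <stdio.h>

/* Simplified stand-in for SML_CHECK_CODE: on failure, record the error
 * code and line, then jump to the single cleanup label. */
#define CHECK_CODE(CMD)  \
  do {                   \
    code = (CMD);        \
    if (code != 0) {     \
      lino = __LINE__;   \
      goto END;          \
    }                    \
  } while (0)

static int stepOk(void)   { return 0; }
static int stepFail(void) { return -1; }

/* Runs two steps; any failure short-circuits to END, where the error
 * is logged once (the pattern behind the RETURN macro). */
static int demoParse(int failSecondStep) {
  int code = 0, lino = 0;
  CHECK_CODE(stepOk());
  CHECK_CODE(failSecondStep ? stepFail() : stepOk());
END:
  if (code != 0) {
    fprintf(stderr, "demoParse failed code:%d line:%d\n", code, lino);
  }
  return code;
}
```

This keeps per-call error handling to one line while guaranteeing that cleanup and logging happen exactly once per function.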
typedef enum {
SCHEMA_ACTION_NULL,
SCHEMA_ACTION_CREATE_STABLE,
@ -191,7 +211,6 @@ typedef struct {
cJSON *root; // for parse json
int8_t offset[OTD_JSON_FIELDS_NUM];
SSmlLineInfo *lines; // element is SSmlLineInfo
bool parseJsonByLib;
SArray *tagJsonArray;
SArray *valueJsonArray;
@ -211,13 +230,8 @@ typedef struct {
extern int64_t smlFactorNS[];
extern int64_t smlFactorS[];
typedef int32_t (*_equal_fn_sml)(const void *, const void *);
int32_t smlBuildSmlInfo(TAOS *taos, SSmlHandle **handle);
void smlDestroyInfo(SSmlHandle *info);
int smlJsonParseObjFirst(char **start, SSmlLineInfo *element, int8_t *offset);
int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset);
bool smlParseNumberOld(SSmlKv *kvVal, SSmlMsgBuf *msg);
void smlBuildInvalidDataMsg(SSmlMsgBuf *pBuf, const char *msg1, const char *msg2);
int32_t smlParseNumber(SSmlKv *kvVal, SSmlMsgBuf *msg);
int64_t smlGetTimeValue(const char *value, int32_t len, uint8_t fromPrecision, uint8_t toPrecision);
@ -237,7 +251,7 @@ void smlDestroyTableInfo(void *para);
void freeSSmlKv(void* data);
int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLineInfo *elements);
int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLineInfo *elements);
int32_t smlParseJSON(SSmlHandle *info, char *payload);
int32_t smlParseJSONExt(SSmlHandle *info, char *payload);
int32_t smlBuildSuperTableInfo(SSmlHandle *info, SSmlLineInfo *currElement, SSmlSTableMeta** sMeta);
bool isSmlTagAligned(SSmlHandle *info, int cnt, SSmlKv *kv);
@ -246,7 +260,8 @@ int32_t smlProcessChildTable(SSmlHandle *info, SSmlLineInfo *elements);
int32_t smlProcessSuperTable(SSmlHandle *info, SSmlLineInfo *elements);
int32_t smlJoinMeasureTag(SSmlLineInfo *elements);
void smlBuildTsKv(SSmlKv *kv, int64_t ts);
int32_t smlParseEndTelnetJson(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs, SSmlKv *kv);
int32_t smlParseEndTelnetJsonFormat(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs, SSmlKv *kv);
int32_t smlParseEndTelnetJsonUnFormat(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs, SSmlKv *kv);
int32_t smlParseEndLine(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs);
static inline bool smlDoubleToInt64OverFlow(double num) {

File diff suppressed because it is too large


@ -21,259 +21,10 @@
#define OTD_JSON_SUB_FIELDS_NUM 2
#define JUMP_JSON_SPACE(start) \
while (*(start)) { \
if (unlikely(*(start) > 32)) \
break; \
else \
(start)++; \
}
static int32_t smlJsonGetObj(char **payload) {
int leftBracketCnt = 0;
bool isInQuote = false;
while (**payload) {
if (**payload == '"' && *((*payload) - 1) != '\\') {
isInQuote = !isInQuote;
} else if (!isInQuote && unlikely(**payload == '{')) {
leftBracketCnt++;
(*payload)++;
continue;
} else if (!isInQuote && unlikely(**payload == '}')) {
leftBracketCnt--;
(*payload)++;
if (leftBracketCnt == 0) {
return 0;
} else if (leftBracketCnt < 0) {
return -1;
}
continue;
}
(*payload)++;
}
return -1;
}
int smlJsonParseObjFirst(char **start, SSmlLineInfo *element, int8_t *offset) {
int index = 0;
while (*(*start)) {
if ((*start)[0] != '"') {
(*start)++;
continue;
}
if (unlikely(index >= OTD_JSON_FIELDS_NUM)) {
uError("index >= %d, %s", OTD_JSON_FIELDS_NUM, *start);
return TSDB_CODE_TSC_INVALID_JSON;
}
char *sTmp = *start;
if ((*start)[1] == 'm' && (*start)[2] == 'e' && (*start)[3] == 't' && (*start)[4] == 'r' && (*start)[5] == 'i' &&
(*start)[6] == 'c' && (*start)[7] == '"') {
(*start) += 8;
bool isInQuote = false;
while (*(*start)) {
if (unlikely(!isInQuote && *(*start) == '"')) {
(*start)++;
offset[index++] = *start - sTmp;
element->measure = (*start);
isInQuote = true;
continue;
}
if (unlikely(isInQuote && *(*start) == '"')) {
element->measureLen = (*start) - element->measure;
(*start)++;
break;
}
(*start)++;
}
} else if ((*start)[1] == 't' && (*start)[2] == 'i' && (*start)[3] == 'm' && (*start)[4] == 'e' &&
(*start)[5] == 's' && (*start)[6] == 't' && (*start)[7] == 'a' && (*start)[8] == 'm' &&
(*start)[9] == 'p' && (*start)[10] == '"') {
(*start) += 11;
bool hasColon = false;
while (*(*start)) {
if (unlikely(!hasColon && *(*start) == ':')) {
(*start)++;
JUMP_JSON_SPACE((*start))
offset[index++] = *start - sTmp;
element->timestamp = (*start);
if (*(*start) == '{') {
char *tmp = *start;
int32_t code = smlJsonGetObj(&tmp);
if (code == 0) {
element->timestampLen = tmp - (*start);
*start = tmp;
}
break;
}
hasColon = true;
continue;
}
if (unlikely(hasColon && (*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32))) {
element->timestampLen = (*start) - element->timestamp;
break;
}
(*start)++;
}
} else if ((*start)[1] == 'v' && (*start)[2] == 'a' && (*start)[3] == 'l' && (*start)[4] == 'u' &&
(*start)[5] == 'e' && (*start)[6] == '"') {
(*start) += 7;
bool hasColon = false;
while (*(*start)) {
if (unlikely(!hasColon && *(*start) == ':')) {
(*start)++;
JUMP_JSON_SPACE((*start))
offset[index++] = *start - sTmp;
element->cols = (*start);
if (*(*start) == '{') {
char *tmp = *start;
int32_t code = smlJsonGetObj(&tmp);
if (code == 0) {
element->colsLen = tmp - (*start);
*start = tmp;
}
break;
}
hasColon = true;
continue;
}
if (unlikely(hasColon && (*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32))) {
element->colsLen = (*start) - element->cols;
break;
}
(*start)++;
}
} else if ((*start)[1] == 't' && (*start)[2] == 'a' && (*start)[3] == 'g' && (*start)[4] == 's' &&
(*start)[5] == '"') {
(*start) += 6;
while (*(*start)) {
if (unlikely(*(*start) == ':')) {
(*start)++;
JUMP_JSON_SPACE((*start))
offset[index++] = *start - sTmp;
element->tags = (*start);
char *tmp = *start;
int32_t code = smlJsonGetObj(&tmp);
if (code == 0) {
element->tagsLen = tmp - (*start);
*start = tmp;
}
break;
}
(*start)++;
}
}
if (*(*start) == '\0') {
break;
}
if (*(*start) == '}') {
(*start)++;
break;
}
(*start)++;
}
if (unlikely(index != OTD_JSON_FIELDS_NUM) || element->tags == NULL || element->cols == NULL ||
element->measure == NULL || element->timestamp == NULL) {
uError("elements != %d or element parse null", OTD_JSON_FIELDS_NUM);
return TSDB_CODE_TSC_INVALID_JSON;
}
return 0;
}
int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset) {
int index = 0;
while (*(*start)) {
if ((*start)[0] != '"') {
(*start)++;
continue;
}
if (unlikely(index >= OTD_JSON_FIELDS_NUM)) {
uError("index >= %d, %s", OTD_JSON_FIELDS_NUM, *start);
return TSDB_CODE_TSC_INVALID_JSON;
}
if ((*start)[1] == 'm') {
(*start) += offset[index++];
element->measure = *start;
while (*(*start)) {
if (unlikely(*(*start) == '"')) {
element->measureLen = (*start) - element->measure;
(*start)++;
break;
}
(*start)++;
}
} else if ((*start)[1] == 't' && (*start)[2] == 'i') {
(*start) += offset[index++];
element->timestamp = *start;
if (*(*start) == '{') {
char *tmp = *start;
int32_t code = smlJsonGetObj(&tmp);
if (code == 0) {
element->timestampLen = tmp - (*start);
*start = tmp;
}
} else {
while (*(*start)) {
if (unlikely(*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32)) {
element->timestampLen = (*start) - element->timestamp;
break;
}
(*start)++;
}
}
} else if ((*start)[1] == 'v') {
(*start) += offset[index++];
element->cols = *start;
if (*(*start) == '{') {
char *tmp = *start;
int32_t code = smlJsonGetObj(&tmp);
if (code == 0) {
element->colsLen = tmp - (*start);
*start = tmp;
}
} else {
while (*(*start)) {
if (unlikely(*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32)) {
element->colsLen = (*start) - element->cols;
break;
}
(*start)++;
}
}
} else if ((*start)[1] == 't' && (*start)[2] == 'a') {
(*start) += offset[index++];
element->tags = (*start);
char *tmp = *start;
int32_t code = smlJsonGetObj(&tmp);
if (code == 0) {
element->tagsLen = tmp - (*start);
*start = tmp;
}
}
if (*(*start) == '}') {
(*start)++;
break;
}
(*start)++;
}
if (unlikely(index != 0 && index != OTD_JSON_FIELDS_NUM)) {
uError("elements != %d", OTD_JSON_FIELDS_NUM);
return TSDB_CODE_TSC_INVALID_JSON;
}
return TSDB_CODE_SUCCESS;
}
static inline int32_t smlParseMetricFromJSON(SSmlHandle *info, cJSON *metric, SSmlLineInfo *elements) {
elements->measureLen = strlen(metric->valuestring);
if (IS_INVALID_TABLE_LEN(elements->measureLen)) {
uError("OTD:0x%" PRIx64 " Metric length is 0 or large than 192", info->id);
uError("SML:0x%" PRIx64 " Metric length is 0 or large than 192", info->id);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
}
@ -293,7 +44,7 @@ static int32_t smlGetJsonElements(cJSON *root, cJSON ***marks) {
child = child->next;
}
if (*marks[i] == NULL) {
uError("smlGetJsonElements error, not find mark:%d:%s", i, jsonName[i]);
uError("SML %s error, not find mark:%d:%s", __FUNCTION__, i, jsonName[i]);
return TSDB_CODE_TSC_INVALID_JSON;
}
}
@ -302,7 +53,7 @@ static int32_t smlGetJsonElements(cJSON *root, cJSON ***marks) {
static int32_t smlConvertJSONBool(SSmlKv *pVal, char *typeStr, cJSON *value) {
if (strcasecmp(typeStr, "bool") != 0) {
uError("OTD:invalid type(%s) for JSON Bool", typeStr);
uError("SML:invalid type(%s) for JSON Bool", typeStr);
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}
pVal->type = TSDB_DATA_TYPE_BOOL;
@ -316,7 +67,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
// tinyint
if (strcasecmp(typeStr, "i8") == 0 || strcasecmp(typeStr, "tinyint") == 0) {
if (!IS_VALID_TINYINT(value->valuedouble)) {
uError("OTD:JSON value(%f) cannot fit in type(tinyint)", value->valuedouble);
uError("SML:JSON value(%f) cannot fit in type(tinyint)", value->valuedouble);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_TINYINT;
@ -327,7 +78,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
// smallint
if (strcasecmp(typeStr, "i16") == 0 || strcasecmp(typeStr, "smallint") == 0) {
if (!IS_VALID_SMALLINT(value->valuedouble)) {
uError("OTD:JSON value(%f) cannot fit in type(smallint)", value->valuedouble);
uError("SML:JSON value(%f) cannot fit in type(smallint)", value->valuedouble);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_SMALLINT;
@ -338,7 +89,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
// int
if (strcasecmp(typeStr, "i32") == 0 || strcasecmp(typeStr, "int") == 0) {
if (!IS_VALID_INT(value->valuedouble)) {
uError("OTD:JSON value(%f) cannot fit in type(int)", value->valuedouble);
uError("SML:JSON value(%f) cannot fit in type(int)", value->valuedouble);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_INT;
@ -362,7 +113,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
// float
if (strcasecmp(typeStr, "f32") == 0 || strcasecmp(typeStr, "float") == 0) {
if (!IS_VALID_FLOAT(value->valuedouble)) {
uError("OTD:JSON value(%f) cannot fit in type(float)", value->valuedouble);
uError("SML:JSON value(%f) cannot fit in type(float)", value->valuedouble);
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
}
pVal->type = TSDB_DATA_TYPE_FLOAT;
@ -379,7 +130,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
}
// if reach here means type is unsupported
uError("OTD:invalid type(%s) for JSON Number", typeStr);
uError("SML:invalid type(%s) for JSON Number", typeStr);
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}
@ -391,7 +142,7 @@ static int32_t smlConvertJSONString(SSmlKv *pVal, char *typeStr, cJSON *value) {
} else if (strcasecmp(typeStr, "nchar") == 0) {
pVal->type = TSDB_DATA_TYPE_NCHAR;
} else {
uError("OTD:invalid type(%s) for JSON String", typeStr);
uError("SML:invalid type(%s) for JSON String", typeStr);
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
}
pVal->length = strlen(value->valuestring);
@ -474,7 +225,7 @@ static int32_t smlParseValueFromJSON(cJSON *root, SSmlKv *kv) {
case cJSON_String: {
int32_t ret = smlConvertJSONString(kv, "binary", root);
if (ret != TSDB_CODE_SUCCESS) {
uError("OTD:Failed to parse binary value from JSON Obj");
uError("SML:Failed to parse binary value from JSON Obj");
return ret;
}
break;
@ -482,7 +233,7 @@ static int32_t smlParseValueFromJSON(cJSON *root, SSmlKv *kv) {
case cJSON_Object: {
int32_t ret = smlParseValueFromJSONObj(root, kv);
if (ret != TSDB_CODE_SUCCESS) {
uError("OTD:Failed to parse value from JSON Obj");
uError("SML:Failed to parse value from JSON Obj");
return ret;
}
break;
@ -511,7 +262,7 @@ static int32_t smlProcessTagJson(SSmlHandle *info, cJSON *tags){
}
size_t keyLen = strlen(tag->string);
if (unlikely(IS_INVALID_COL_LEN(keyLen))) {
uError("OTD:Tag key length is 0 or too large than 64");
uError("SML:Tag key length is 0 or too large than 64");
return TSDB_CODE_TSC_INVALID_COLUMN_LENGTH;
}
@ -539,28 +290,24 @@ static int32_t smlProcessTagJson(SSmlHandle *info, cJSON *tags){
}
static int32_t smlParseTagsFromJSON(SSmlHandle *info, cJSON *tags, SSmlLineInfo *elements) {
int32_t ret = 0;
if (is_same_child_table_telnet(elements, &info->preLine) == 0) {
elements->measureTag = info->preLine.measureTag;
return TSDB_CODE_SUCCESS;
}
int32_t code = 0;
int32_t lino = 0;
if(info->dataFormat){
ret = smlProcessSuperTable(info, elements);
if(ret != 0){
if(info->reRun){
return TSDB_CODE_SUCCESS;
}
return ret;
}
}
ret = smlProcessTagJson(info, tags);
if(ret != 0){
if(info->reRun){
return TSDB_CODE_SUCCESS;
}
return ret;
}
ret = smlJoinMeasureTag(elements);
if(ret != 0){
return ret;
SML_CHECK_CODE(smlProcessSuperTable(info, elements));
}
SML_CHECK_CODE(smlProcessTagJson(info, tags));
SML_CHECK_CODE(smlJoinMeasureTag(elements));
return smlProcessChildTable(info, elements);
END:
if(info->reRun){
return TSDB_CODE_SUCCESS;
}
RETURN
}
static int64_t smlParseTSFromJSONObj(SSmlHandle *info, cJSON *root, int32_t toPrecision) {
@ -678,7 +425,8 @@ static int64_t smlParseTSFromJSON(SSmlHandle *info, cJSON *timestamp) {
}
static int32_t smlParseJSONStringExt(SSmlHandle *info, cJSON *root, SSmlLineInfo *elements) {
int32_t ret = TSDB_CODE_SUCCESS;
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
cJSON *metricJson = NULL;
cJSON *tsJson = NULL;
@ -688,57 +436,27 @@ static int32_t smlParseJSONStringExt(SSmlHandle *info, cJSON *root, SSmlLineInfo
int32_t size = cJSON_GetArraySize(root);
// outmost json fields has to be exactly 4
if (size != OTD_JSON_FIELDS_NUM) {
uError("OTD:0x%" PRIx64 " Invalid number of JSON fields in data point %d", info->id, size);
uError("SML:0x%" PRIx64 " Invalid number of JSON fields in data point %d", info->id, size);
return TSDB_CODE_TSC_INVALID_JSON;
}
cJSON **marks[OTD_JSON_FIELDS_NUM] = {&metricJson, &tsJson, &valueJson, &tagsJson};
ret = smlGetJsonElements(root, marks);
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
return ret;
}
SML_CHECK_CODE(smlGetJsonElements(root, marks));
// Parse metric
ret = smlParseMetricFromJSON(info, metricJson, elements);
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
uError("OTD:0x%" PRIx64 " Unable to parse metric from JSON payload", info->id);
return ret;
}
SML_CHECK_CODE(smlParseMetricFromJSON(info, metricJson, elements));
// Parse metric value
SSmlKv kv = {.key = VALUE, .keyLen = VALUE_LEN};
ret = smlParseValueFromJSON(valueJson, &kv);
if (unlikely(ret)) {
uError("OTD:0x%" PRIx64 " Unable to parse metric value from JSON payload", info->id);
return ret;
}
SML_CHECK_CODE(smlParseValueFromJSON(valueJson, &kv));
// Parse tags
bool needFree = info->dataFormat;
elements->tags = cJSON_PrintUnformatted(tagsJson);
if (elements->tags == NULL){
return TSDB_CODE_OUT_OF_MEMORY;
}
elements->tagsLen = strlen(elements->tags);
if (is_same_child_table_telnet(elements, &info->preLine) != 0) {
ret = smlParseTagsFromJSON(info, tagsJson, elements);
if (unlikely(ret)) {
uError("OTD:0x%" PRIx64 " Unable to parse tags from JSON payload", info->id);
taosMemoryFree(elements->tags);
elements->tags = NULL;
return ret;
}
} else {
elements->measureTag = info->preLine.measureTag;
}
SML_CHECK_NULL(elements->tags);
if (needFree) {
taosMemoryFree(elements->tags);
elements->tags = NULL;
}
elements->tagsLen = strlen(elements->tags);
SML_CHECK_CODE(smlParseTagsFromJSON(info, tagsJson, elements));
if (unlikely(info->reRun)) {
return TSDB_CODE_SUCCESS;
goto END;
}
// Parse timestamp
@ -747,29 +465,34 @@ static int32_t smlParseJSONStringExt(SSmlHandle *info, cJSON *root, SSmlLineInfo
if (unlikely(ts < 0)) {
char* tmp = cJSON_PrintUnformatted(tsJson);
if (tmp == NULL) {
uError("cJSON_PrintUnformatted failed since %s", tstrerror(TSDB_CODE_OUT_OF_MEMORY));
uError("OTD:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %" PRId64, info->id, info->msgBuf.buf, ts);
uError("SML:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %" PRId64, info->id, info->msgBuf.buf, ts);
} else {
uError("OTD:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %s %" PRId64, info->id, info->msgBuf.buf,tmp, ts);
uError("SML:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %s %" PRId64, info->id, info->msgBuf.buf,tmp, ts);
taosMemoryFree(tmp);
}
return TSDB_CODE_INVALID_TIMESTAMP;
SML_CHECK_CODE(TSDB_CODE_INVALID_TIMESTAMP);
}
SSmlKv kvTs = {0};
smlBuildTsKv(&kvTs, ts);
if (info->dataFormat){
code = smlParseEndTelnetJsonFormat(info, elements, &kvTs, &kv);
} else {
code = smlParseEndTelnetJsonUnFormat(info, elements, &kvTs, &kv);
}
SML_CHECK_CODE(code);
taosMemoryFreeClear(info->preLine.tags);
info->preLine = *elements;
elements->tags = NULL;
return smlParseEndTelnetJson(info, elements, &kvTs, &kv);
END:
taosMemoryFree(elements->tags);
RETURN
}
static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
int32_t payloadNum = 0;
int32_t ret = TSDB_CODE_SUCCESS;
if (unlikely(payload == NULL)) {
uError("SML:0x%" PRIx64 " empty JSON Payload", info->id);
return TSDB_CODE_TSC_INVALID_JSON;
}
info->root = cJSON_Parse(payload);
if (unlikely(info->root == NULL)) {
uError("SML:0x%" PRIx64 " parse json failed:%s", info->id, payload);
@ -782,27 +505,11 @@ static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
} else if (cJSON_IsObject(info->root)) {
payloadNum = 1;
} else {
uError("SML:0x%" PRIx64 " Invalid JSON Payload 3:%s", info->id, payload);
uError("SML:0x%" PRIx64 " Invalid JSON type:%s", info->id, payload);
return TSDB_CODE_TSC_INVALID_JSON;
}
if (unlikely(info->lines != NULL)) {
for (int i = 0; i < info->lineNum; i++) {
taosArrayDestroyEx(info->lines[i].colArray, freeSSmlKv);
if (info->lines[i].measureTagsLen != 0) taosMemoryFree(info->lines[i].measureTag);
}
taosMemoryFree(info->lines);
info->lines = NULL;
}
info->lineNum = payloadNum;
info->dataFormat = true;
ret = smlClearForRerun(info);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
info->parseJsonByLib = true;
cJSON *head = (payloadNum == 1 && cJSON_IsObject(info->root)) ? info->root : info->root->child;
int cnt = 0;
@ -811,6 +518,7 @@ static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
if (info->dataFormat) {
SSmlLineInfo element = {0};
ret = smlParseJSONStringExt(info, dataPoint, &element);
if (element.measureTagsLen != 0) taosMemoryFree(element.measureTag);
} else {
ret = smlParseJSONStringExt(info, dataPoint, info->lines + cnt);
}
@ -836,164 +544,3 @@ static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
return TSDB_CODE_SUCCESS;
}
static int32_t smlParseJSONString(SSmlHandle *info, char **start, SSmlLineInfo *elements) {
int32_t ret = TSDB_CODE_SUCCESS;
if (info->offset[0] == 0) {
ret = smlJsonParseObjFirst(start, elements, info->offset);
} else {
ret = smlJsonParseObj(start, elements, info->offset);
}
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
if (unlikely(**start == '\0' && elements->measure == NULL)) return TSDB_CODE_SUCCESS;
if (unlikely(IS_INVALID_TABLE_LEN(elements->measureLen))) {
smlBuildInvalidDataMsg(&info->msgBuf, "measure is empty or too large than 192", NULL);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
}
SSmlKv kv = {.key = VALUE, .keyLen = VALUE_LEN, .value = elements->cols, .length = (size_t)elements->colsLen};
if (unlikely(elements->colsLen == 0)) {
uError("SML:colsLen == 0");
return TSDB_CODE_TSC_INVALID_VALUE;
} else if (unlikely(elements->cols[0] == '{')) {
char tmp = elements->cols[elements->colsLen];
elements->cols[elements->colsLen] = '\0';
cJSON *valueJson = cJSON_Parse(elements->cols);
if (unlikely(valueJson == NULL)) {
uError("SML:0x%" PRIx64 " parse json cols failed:%s", info->id, elements->cols);
elements->cols[elements->colsLen] = tmp;
return TSDB_CODE_TSC_INVALID_JSON;
}
if (taosArrayPush(info->tagJsonArray, &valueJson) == NULL){
cJSON_Delete(valueJson);
elements->cols[elements->colsLen] = tmp;
return terrno;
}
ret = smlParseValueFromJSONObj(valueJson, &kv);
if (ret != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " Failed to parse value from JSON Obj:%s", info->id, elements->cols);
elements->cols[elements->colsLen] = tmp;
return TSDB_CODE_TSC_INVALID_VALUE;
}
elements->cols[elements->colsLen] = tmp;
} else if (smlParseValue(&kv, &info->msgBuf) != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " cols invalidate:%s", info->id, elements->cols);
return TSDB_CODE_TSC_INVALID_VALUE;
}
// Parse tags
if (is_same_child_table_telnet(elements, &info->preLine) != 0) {
char tmp = *(elements->tags + elements->tagsLen);
*(elements->tags + elements->tagsLen) = 0;
cJSON *tagsJson = cJSON_Parse(elements->tags);
*(elements->tags + elements->tagsLen) = tmp;
if (unlikely(tagsJson == NULL)) {
uError("SML:0x%" PRIx64 " parse json tag failed:%s", info->id, elements->tags);
return TSDB_CODE_TSC_INVALID_JSON;
}
if (taosArrayPush(info->tagJsonArray, &tagsJson) == NULL){
cJSON_Delete(tagsJson);
uError("SML:0x%" PRIx64 " taosArrayPush failed", info->id);
return terrno;
}
ret = smlParseTagsFromJSON(info, tagsJson, elements);
if (unlikely(ret)) {
uError("OTD:0x%" PRIx64 " Unable to parse tags from JSON payload", info->id);
return ret;
}
} else {
elements->measureTag = info->preLine.measureTag;
}
if (unlikely(info->reRun)) {
return TSDB_CODE_SUCCESS;
}
// Parse timestamp
// notice!!! put ts back to tag to ensure get meta->precision
int64_t ts = 0;
if (unlikely(elements->timestampLen == 0)) {
uError("OTD:0x%" PRIx64 " elements->timestampLen == 0", info->id);
return TSDB_CODE_INVALID_TIMESTAMP;
} else if (elements->timestamp[0] == '{') {
char tmp = elements->timestamp[elements->timestampLen];
elements->timestamp[elements->timestampLen] = '\0';
cJSON *tsJson = cJSON_Parse(elements->timestamp);
ts = smlParseTSFromJSON(info, tsJson);
if (unlikely(ts < 0)) {
uError("SML:0x%" PRIx64 " Unable to parse timestamp from JSON payload:%s", info->id, elements->timestamp);
elements->timestamp[elements->timestampLen] = tmp;
cJSON_Delete(tsJson);
return TSDB_CODE_INVALID_TIMESTAMP;
}
elements->timestamp[elements->timestampLen] = tmp;
cJSON_Delete(tsJson);
} else {
ts = smlParseOpenTsdbTime(info, elements->timestamp, elements->timestampLen);
if (unlikely(ts < 0)) {
uError("OTD:0x%" PRIx64 " Unable to parse timestamp from JSON payload", info->id);
return TSDB_CODE_INVALID_TIMESTAMP;
}
}
SSmlKv kvTs = {0};
smlBuildTsKv(&kvTs, ts);
return smlParseEndTelnetJson(info, elements, &kvTs, &kv);
}
int32_t smlParseJSON(SSmlHandle *info, char *payload) {
int32_t payloadNum = 1 << 15;
int32_t ret = TSDB_CODE_SUCCESS;
uDebug("SML:0x%" PRIx64 "json:%s", info->id, payload);
int cnt = 0;
char *dataPointStart = payload;
while (1) {
if (info->dataFormat) {
SSmlLineInfo element = {0};
ret = smlParseJSONString(info, &dataPointStart, &element);
if (element.measureTagsLen != 0) taosMemoryFree(element.measureTag);
} else {
if (cnt >= payloadNum) {
payloadNum = payloadNum << 1;
void *tmp = taosMemoryRealloc(info->lines, payloadNum * sizeof(SSmlLineInfo));
if (tmp == NULL) {
ret = terrno;
return ret;
}
info->lines = (SSmlLineInfo *)tmp;
(void)memset(info->lines + cnt, 0, (payloadNum - cnt) * sizeof(SSmlLineInfo));
}
ret = smlParseJSONString(info, &dataPointStart, info->lines + cnt);
if ((info->lines + cnt)->measure == NULL) break;
}
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
uError("SML:0x%" PRIx64 " Invalid JSON Payload 1:%s", info->id, payload);
return smlParseJSONExt(info, payload);
}
if (unlikely(info->reRun)) {
cnt = 0;
dataPointStart = payload;
info->lineNum = payloadNum;
ret = smlClearForRerun(info);
if (ret != TSDB_CODE_SUCCESS) {
return ret;
}
continue;
}
cnt++;
if (*dataPointStart == '\0') break;
}
info->lineNum = cnt;
return TSDB_CODE_SUCCESS;
}


@ -63,7 +63,7 @@ static int64_t smlParseInfluxTime(SSmlHandle *info, const char *data, int32_t le
int64_t ts = smlGetTimeValue(data, len, fromPrecision, toPrecision);
if (unlikely(ts == -1)) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid timestamp", data);
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid timestamp", data);
return TSDB_CODE_SML_INVALID_DATA;
}
return ts;
@ -84,7 +84,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) {
}
if (pVal->value[0] == 'l' || pVal->value[0] == 'L') { // nchar
if (pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"' && pVal->length >= 3) {
if (pVal->length >= NCHAR_ADD_LEN && pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"') {
pVal->type = TSDB_DATA_TYPE_NCHAR;
pVal->length -= NCHAR_ADD_LEN;
if (pVal->length > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) {
@ -97,7 +97,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) {
}
if (pVal->value[0] == 'g' || pVal->value[0] == 'G') { // geometry
if (pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"' && pVal->length >= sizeof("POINT")+3) {
if (pVal->length >= NCHAR_ADD_LEN && pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"') {
int32_t code = initCtxGeomFromText();
if (code != TSDB_CODE_SUCCESS) {
return code;
@ -124,7 +124,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) {
}
if (pVal->value[0] == 'b' || pVal->value[0] == 'B') { // varbinary
if (pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"' && pVal->length >= 3) {
if (pVal->length >= NCHAR_ADD_LEN && pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"') {
pVal->type = TSDB_DATA_TYPE_VARBINARY;
if(isHex(pVal->value + NCHAR_ADD_LEN - 1, pVal->length - NCHAR_ADD_LEN)){
if(!isValidateHex(pVal->value + NCHAR_ADD_LEN - 1, pVal->length - NCHAR_ADD_LEN)){
@ -298,7 +298,7 @@ static int32_t smlProcessTagLine(SSmlHandle *info, char **sql, char *sqlEnd){
}
if (info->dataFormat && !isSmlTagAligned(info, cnt, &kv)) {
return TSDB_CODE_TSC_INVALID_JSON;
return TSDB_CODE_SML_INVALID_DATA;
}
cnt++;
@ -311,31 +311,24 @@ static int32_t smlProcessTagLine(SSmlHandle *info, char **sql, char *sqlEnd){
}
static int32_t smlParseTagLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlLineInfo *elements) {
int32_t code = 0;
int32_t lino = 0;
bool isSameCTable = IS_SAME_CHILD_TABLE;
if(isSameCTable){
return TSDB_CODE_SUCCESS;
}
int32_t ret = 0;
if(info->dataFormat){
ret = smlProcessSuperTable(info, elements);
if(ret != 0){
if(info->reRun){
return TSDB_CODE_SUCCESS;
}
return ret;
}
SML_CHECK_CODE(smlProcessSuperTable(info, elements));
}
ret = smlProcessTagLine(info, sql, sqlEnd);
if(ret != 0){
if (info->reRun){
return TSDB_CODE_SUCCESS;
}
return ret;
}
SML_CHECK_CODE(smlProcessTagLine(info, sql, sqlEnd));
return smlProcessChildTable(info, elements);
END:
if(info->reRun){
return TSDB_CODE_SUCCESS;
}
RETURN
}
static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlLineInfo *currElement) {
@ -353,7 +346,7 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
const char *escapeChar = NULL;
while (*sql < sqlEnd) {
if (unlikely(IS_SPACE(*sql,escapeChar) || IS_COMMA(*sql,escapeChar))) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid data", *sql);
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid data", *sql);
return TSDB_CODE_SML_INVALID_DATA;
}
if (unlikely(IS_EQUAL(*sql,escapeChar))) {
@ -370,7 +363,7 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
}
if (unlikely(IS_INVALID_COL_LEN(keyLen - keyLenEscaped))) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid key or key is too long than 64", key);
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid key or key is too long than 64", key);
return TSDB_CODE_TSC_INVALID_COLUMN_LENGTH;
}
@ -404,18 +397,18 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
valueLen = *sql - value;
if (unlikely(quoteNum != 0 && quoteNum != 2)) {
smlBuildInvalidDataMsg(&info->msgBuf, "unbalanced quotes", value);
smlBuildInvalidDataMsg(&info->msgBuf, "SML line unbalanced quotes", value);
return TSDB_CODE_SML_INVALID_DATA;
}
if (unlikely(valueLen == 0)) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid value", value);
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid value", value);
return TSDB_CODE_SML_INVALID_DATA;
}
SSmlKv kv = {.key = key, .keyLen = keyLen, .value = value, .length = valueLen};
int32_t ret = smlParseValue(&kv, &info->msgBuf);
if (ret != TSDB_CODE_SUCCESS) {
smlBuildInvalidDataMsg(&info->msgBuf, "smlParseValue error", value);
uError("SML:0x%" PRIx64 " %s parse value error:%d.", info->id, __FUNCTION__, ret);
return ret;
}
@ -437,11 +430,6 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
}
(void)memcpy(tmp, kv.value, kv.length);
PROCESS_SLASH_IN_FIELD_VALUE(tmp, kv.length);
if(kv.type == TSDB_DATA_TYPE_GEOMETRY) {
uError("SML:0x%" PRIx64 " smlParseColLine error, invalid GEOMETRY type.", info->id);
taosMemoryFree((void*)kv.value);
return TSDB_CODE_TSC_INVALID_VALUE;
}
if(kv.type == TSDB_DATA_TYPE_VARBINARY){
taosMemoryFree((void*)kv.value);
}
@ -510,7 +498,7 @@ int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
}
elements->measureLen = sql - elements->measure;
if (unlikely(IS_INVALID_TABLE_LEN(elements->measureLen - measureLenEscaped))) {
smlBuildInvalidDataMsg(&info->msgBuf, "measure is empty or too large than 192", NULL);
smlBuildInvalidDataMsg(&info->msgBuf, "SML line measure is empty or longer than 192", NULL);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
}
@ -557,7 +545,7 @@ int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
elements->colsLen = sql - elements->cols;
if (unlikely(elements->colsLen == 0)) {
smlBuildInvalidDataMsg(&info->msgBuf, "cols is empty", NULL);
smlBuildInvalidDataMsg(&info->msgBuf, "SML line cols is empty", NULL);
return TSDB_CODE_SML_INVALID_DATA;
}
@ -574,7 +562,7 @@ int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
int64_t ts = smlParseInfluxTime(info, elements->timestamp, elements->timestampLen);
if (unlikely(ts <= 0)) {
uError("SML:0x%" PRIx64 " smlParseTS error:%" PRId64, info->id, ts);
uError("SML:0x%" PRIx64 " %s error:%" PRId64, info->id, __FUNCTION__, ts);
return TSDB_CODE_INVALID_TIMESTAMP;
}

View File

@ -148,31 +148,21 @@ static int32_t smlParseTelnetTags(SSmlHandle *info, char *data, char *sqlEnd, SS
return TSDB_CODE_SUCCESS;
}
int32_t ret = 0;
int32_t code = 0;
int32_t lino = 0;
if(info->dataFormat){
ret = smlProcessSuperTable(info, elements);
if(ret != 0){
if(info->reRun){
return TSDB_CODE_SUCCESS;
}
return ret;
}
SML_CHECK_CODE(smlProcessSuperTable(info, elements));
}
SML_CHECK_CODE(smlProcessTagTelnet(info, data, sqlEnd));
SML_CHECK_CODE(smlJoinMeasureTag(elements));
ret = smlProcessTagTelnet(info, data, sqlEnd);
if(ret != 0){
if (info->reRun){
return TSDB_CODE_SUCCESS;
}
return ret;
code = smlProcessChildTable(info, elements);
END:
if(info->reRun){
return TSDB_CODE_SUCCESS;
}
ret = smlJoinMeasureTag(elements);
if(ret != 0){
return ret;
}
return smlProcessChildTable(info, elements);
RETURN
}
// format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
@ -182,14 +172,14 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
// parse metric
smlParseTelnetElement(&sql, sqlEnd, &elements->measure, &elements->measureLen);
if (unlikely((!(elements->measure) || IS_INVALID_TABLE_LEN(elements->measureLen)))) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid data", sql);
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid measure", sql);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
}
// parse timestamp
smlParseTelnetElement(&sql, sqlEnd, &elements->timestamp, &elements->timestampLen);
if (unlikely(!elements->timestamp || elements->timestampLen == 0)) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid timestamp", sql);
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid timestamp", sql);
return TSDB_CODE_SML_INVALID_DATA;
}
@ -199,19 +189,21 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
}
int64_t ts = smlParseOpenTsdbTime(info, elements->timestamp, elements->timestampLen);
if (unlikely(ts < 0)) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid timestamp", sql);
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet parse timestamp failed", sql);
return TSDB_CODE_INVALID_TIMESTAMP;
}
// parse value
smlParseTelnetElement(&sql, sqlEnd, &elements->cols, &elements->colsLen);
if (unlikely(!elements->cols || elements->colsLen == 0)) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid value", sql);
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid value", sql);
return TSDB_CODE_TSC_INVALID_VALUE;
}
SSmlKv kv = {.key = VALUE, .keyLen = VALUE_LEN, .value = elements->cols, .length = (size_t)elements->colsLen};
if (smlParseValue(&kv, &info->msgBuf) != TSDB_CODE_SUCCESS) {
int ret = smlParseValue(&kv, &info->msgBuf);
if (ret != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " %s parse value error:%d.", info->id, __FUNCTION__, ret);
return TSDB_CODE_TSC_INVALID_VALUE;
}
@ -220,11 +212,11 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
elements->tags = sql;
elements->tagsLen = sqlEnd - sql;
if (unlikely(!elements->tags || elements->tagsLen == 0)) {
smlBuildInvalidDataMsg(&info->msgBuf, "invalid value", sql);
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid tag value", sql);
return TSDB_CODE_TSC_INVALID_VALUE;
}
int ret = smlParseTelnetTags(info, sql, sqlEnd, elements);
ret = smlParseTelnetTags(info, sql, sqlEnd, elements);
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
return ret;
}
@ -239,5 +231,12 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
kvTs.i = convertTimePrecision(kvTs.i, TSDB_TIME_PRECISION_NANO, info->currSTableMeta->tableInfo.precision);
}
return smlParseEndTelnetJson(info, elements, &kvTs, &kv);
if (info->dataFormat){
ret = smlParseEndTelnetJsonFormat(info, elements, &kvTs, &kv);
} else {
ret = smlParseEndTelnetJsonUnFormat(info, elements, &kvTs, &kv);
}
info->preLine = *elements;
return ret;
}

View File

@ -68,6 +68,15 @@ TEST(testCase, smlParseInfluxString_Test) {
taosArrayDestroy(elements.colArray);
elements.colArray = nullptr;
// case 0 false
tmp = "st,t1=3 c3=\"";
(void)memcpy(sql, tmp, strlen(tmp) + 1);
(void)memset(&elements, 0, sizeof(SSmlLineInfo));
ret = smlParseInfluxString(info, sql, sql + strlen(sql), &elements);
ASSERT_NE(ret, 0);
taosArrayDestroy(elements.colArray);
elements.colArray = nullptr;
// case 2 false
tmp = "st,t1=3,t2=4,t3=t3 c1=3i64,c3=\"passit hello,c1=2,c2=false,c4=4f64 1626006833639000000";
(void)memcpy(sql, tmp, strlen(tmp) + 1);
@ -591,6 +600,104 @@ TEST(testCase, smlParseTelnetLine_Test) {
// smlDestroyInfo(info);
//}
bool smlParseNumberOld(SSmlKv *kvVal, SSmlMsgBuf *msg) {
const char *pVal = kvVal->value;
int32_t len = kvVal->length;
char *endptr = NULL;
double result = taosStr2Double(pVal, &endptr);
if (pVal == endptr) {
smlBuildInvalidDataMsg(msg, "invalid data", pVal);
return false;
}
int32_t left = len - (endptr - pVal);
if (left == 0 || (left == 3 && strncasecmp(endptr, "f64", left) == 0)) {
kvVal->type = TSDB_DATA_TYPE_DOUBLE;
kvVal->d = result;
} else if ((left == 3 && strncasecmp(endptr, "f32", left) == 0)) {
if (!IS_VALID_FLOAT(result)) {
smlBuildInvalidDataMsg(msg, "float out of range[-3.402823466e+38,3.402823466e+38]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_FLOAT;
kvVal->f = (float)result;
} else if ((left == 1 && *endptr == 'i') || (left == 3 && strncasecmp(endptr, "i64", left) == 0)) {
if (smlDoubleToInt64OverFlow(result)) {
errno = 0;
int64_t tmp = taosStr2Int64(pVal, &endptr, 10);
if (errno == ERANGE) {
smlBuildInvalidDataMsg(msg, "big int out of range[-9223372036854775808,9223372036854775807]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_BIGINT;
kvVal->i = tmp;
return true;
}
kvVal->type = TSDB_DATA_TYPE_BIGINT;
kvVal->i = (int64_t)result;
} else if ((left == 1 && *endptr == 'u') || (left == 3 && strncasecmp(endptr, "u64", left) == 0)) {
if (result >= (double)UINT64_MAX || result < 0) {
errno = 0;
uint64_t tmp = taosStr2UInt64(pVal, &endptr, 10);
if (errno == ERANGE || result < 0) {
smlBuildInvalidDataMsg(msg, "unsigned big int out of range[0,18446744073709551615]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_UBIGINT;
kvVal->u = tmp;
return true;
}
kvVal->type = TSDB_DATA_TYPE_UBIGINT;
kvVal->u = result;
} else if (left == 3 && strncasecmp(endptr, "i32", left) == 0) {
if (!IS_VALID_INT(result)) {
smlBuildInvalidDataMsg(msg, "int out of range[-2147483648,2147483647]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_INT;
kvVal->i = result;
} else if (left == 3 && strncasecmp(endptr, "u32", left) == 0) {
if (!IS_VALID_UINT(result)) {
smlBuildInvalidDataMsg(msg, "unsigned int out of range[0,4294967295]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_UINT;
kvVal->u = result;
} else if (left == 3 && strncasecmp(endptr, "i16", left) == 0) {
if (!IS_VALID_SMALLINT(result)) {
smlBuildInvalidDataMsg(msg, "small int out of range[-32768,32767]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_SMALLINT;
kvVal->i = result;
} else if (left == 3 && strncasecmp(endptr, "u16", left) == 0) {
if (!IS_VALID_USMALLINT(result)) {
smlBuildInvalidDataMsg(msg, "unsigned small int out of range[0,65535]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_USMALLINT;
kvVal->u = result;
} else if (left == 2 && strncasecmp(endptr, "i8", left) == 0) {
if (!IS_VALID_TINYINT(result)) {
smlBuildInvalidDataMsg(msg, "tiny int out of range[-128,127]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_TINYINT;
kvVal->i = result;
} else if (left == 2 && strncasecmp(endptr, "u8", left) == 0) {
if (!IS_VALID_UTINYINT(result)) {
smlBuildInvalidDataMsg(msg, "unsigned tiny int out of range[0,255]", pVal);
return false;
}
kvVal->type = TSDB_DATA_TYPE_UTINYINT;
kvVal->u = result;
} else {
smlBuildInvalidDataMsg(msg, "invalid data", pVal);
return false;
}
return true;
}
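The legacy parser above dispatches on whatever suffix remains after the numeric prefix is consumed (`f32`, `i64`, `u8`, ...). A minimal sketch of that suffix-classification idea, with hypothetical names and only a few suffixes shown:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>

typedef enum { T_INVALID, T_DOUBLE, T_FLOAT, T_BIGINT } NumType;

/* Parse the numeric prefix with strtod, then classify the leftover suffix,
 * mirroring the structure of smlParseNumberOld (names are illustrative). */
static NumType classify_sml_number(const char *s, size_t len, double *out) {
    char *endptr = NULL;
    *out = strtod(s, &endptr);
    if (s == endptr) return T_INVALID;              /* no digits consumed */
    size_t left = len - (size_t)(endptr - s);
    if (left == 0 || (left == 3 && strncasecmp(endptr, "f64", 3) == 0))
        return T_DOUBLE;                            /* bare number defaults to double */
    if (left == 3 && strncasecmp(endptr, "f32", 3) == 0)
        return T_FLOAT;
    if ((left == 1 && *endptr == 'i') ||
        (left == 3 && strncasecmp(endptr, "i64", 3) == 0))
        return T_BIGINT;
    return T_INVALID;                               /* unknown suffix */
}
```

The real function additionally range-checks each narrower type and re-parses integers that overflow the double mantissa, as shown above.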
TEST(testCase, smlParseNumber_performance_Test) {
char msg[256] = {0};
SSmlMsgBuf msgBuf;

View File

@ -133,6 +133,7 @@ int32_t mndStreamSetUpdateEpsetAction(SMnode *pMnode, SStreamObj *pStream, SVgr
int32_t mndGetStreamObj(SMnode *pMnode, int64_t streamId, SStreamObj** pStream);
bool mndStreamNodeIsUpdated(SMnode *pMnode);
int32_t mndCheckForSnode(SMnode *pMnode, SDbObj *pSrcDb);
int32_t extractNodeEpset(SMnode *pMnode, SEpSet *pEpSet, bool *hasEpset, int32_t taskId, int32_t nodeId);
int32_t mndProcessStreamHb(SRpcMsg *pReq);

View File

@ -795,12 +795,22 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
}
if (createReq.sql != NULL) {
sqlLen = strlen(createReq.sql);
sql = taosMemoryMalloc(sqlLen + 1);
sql = taosStrdup(createReq.sql);
TSDB_CHECK_NULL(sql, code, lino, _OVER, terrno);
}
memset(sql, 0, sqlLen + 1);
memcpy(sql, createReq.sql, sqlLen);
SDbObj *pSourceDb = mndAcquireDb(pMnode, createReq.sourceDB);
if (pSourceDb == NULL) {
code = terrno;
mInfo("stream:%s failed to create, acquire source db %s failed, code:%s", createReq.name, createReq.sourceDB,
tstrerror(code));
goto _OVER;
}
code = mndCheckForSnode(pMnode, pSourceDb);
mndReleaseDb(pMnode, pSourceDb);
if (code != 0) {
goto _OVER;
}
// build stream obj from request

View File

@ -1497,6 +1497,30 @@ bool mndStreamNodeIsUpdated(SMnode *pMnode) {
return updated;
}
int32_t mndCheckForSnode(SMnode *pMnode, SDbObj *pSrcDb) {
SSdb *pSdb = pMnode->pSdb;
void *pIter = NULL;
SSnodeObj *pObj = NULL;
if (pSrcDb->cfg.replications == 1) {
return TSDB_CODE_SUCCESS;
} else {
while (1) {
pIter = sdbFetch(pSdb, SDB_SNODE, pIter, (void **)&pObj);
if (pIter == NULL) {
break;
}
sdbRelease(pSdb, pObj);
sdbCancelFetch(pSdb, pIter);
return TSDB_CODE_SUCCESS;
}
mError("snode does not exist when trying to create stream in db with multiple replicas");
return TSDB_CODE_SNODE_NOT_DEPLOYED;
}
}
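The new check walks the snode table only far enough to learn whether at least one snode exists: the first fetched object releases the handle, cancels the iteration, and returns success, while an exhausted iterator is the error path. The same fetch-once shape in miniature, over a toy iterator:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct { const int *items; size_t n, pos; } Iter;

static const int *iter_fetch(Iter *it) {
    return (it->pos < it->n) ? &it->items[it->pos++] : NULL;
}

/* Mirror of mndCheckForSnode's loop shape: success on the first hit,
 * failure only when the whole iteration comes back empty. */
static bool any_exists(Iter *it) {
    while (1) {
        if (iter_fetch(it) == NULL) break;  /* iterator exhausted */
        return true;                        /* found one; stop scanning */
    }
    return false;
}
```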
uint32_t seed = 0;
static SRpcMsg createRpcMsg(STransAction* pAction, int64_t traceId, int64_t signature) {
SRpcMsg rpcMsg = {.msgType = pAction->msgType, .contLen = pAction->contLen, .info.ahandle = (void *)signature};

View File

@ -342,7 +342,10 @@ typedef struct {
rocksdb_writeoptions_t *writeoptions;
rocksdb_readoptions_t *readoptions;
rocksdb_writebatch_t *writebatch;
TdThreadMutex writeBatchMutex;
TdThreadMutex writeBatchMutex;
int32_t sver;
tb_uid_t suid;
tb_uid_t uid;
STSchema *pTSchema;
} SRocksCache;

View File

@ -222,6 +222,7 @@ int32_t tsdbCacheNewSTableColumn(STsdb* pTsdb, SArray* uids, int16_t cid, int8_t
int32_t tsdbCacheDropSTableColumn(STsdb* pTsdb, SArray* uids, int16_t cid, bool hasPrimayKey);
int32_t tsdbCacheNewNTableColumn(STsdb* pTsdb, int64_t uid, int16_t cid, int8_t col_type);
int32_t tsdbCacheDropNTableColumn(STsdb* pTsdb, int64_t uid, int16_t cid, bool hasPrimayKey);
void tsdbCacheInvalidateSchema(STsdb* pTsdb, tb_uid_t suid, tb_uid_t uid, int32_t sver);
int tsdbScanAndConvertSubmitMsg(STsdb* pTsdb, SSubmitReq2* pMsg);
int tsdbInsertData(STsdb* pTsdb, int64_t version, SSubmitReq2* pMsg, SSubmitRsp2* pRsp);
int32_t tsdbInsertTableData(STsdb* pTsdb, int64_t version, SSubmitTbData* pSubmitTbData, int32_t* affectedRows);

View File

@ -620,6 +620,8 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
}
}
if (uids) taosArrayDestroy(uids);
tsdbCacheInvalidateSchema(pTsdb, pReq->suid, -1, pReq->schemaRow.version);
}
metaWLock(pMeta);
@ -1945,6 +1947,10 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
break;
}
if (!TSDB_CACHE_NO(pMeta->pVnode->config)) {
tsdbCacheInvalidateSchema(pMeta->pVnode->pTsdb, 0, entry.uid, pSchema->version);
}
entry.version = version;
// do actual write

View File

@ -221,7 +221,7 @@ static int32_t tsdbOpenRocksCache(STsdb *pTsdb) {
rocksdb_writebatch_t *writebatch = rocksdb_writebatch_create();
TAOS_CHECK_GOTO(taosThreadMutexInit(&pTsdb->rCache.writeBatchMutex, NULL), &lino, _err6) ;
TAOS_CHECK_GOTO(taosThreadMutexInit(&pTsdb->rCache.writeBatchMutex, NULL), &lino, _err6);
pTsdb->rCache.writebatch = writebatch;
pTsdb->rCache.my_comparator = cmp;
@ -230,6 +230,9 @@ static int32_t tsdbOpenRocksCache(STsdb *pTsdb) {
pTsdb->rCache.readoptions = readoptions;
pTsdb->rCache.flushoptions = flushoptions;
pTsdb->rCache.db = db;
pTsdb->rCache.sver = -1;
pTsdb->rCache.suid = -1;
pTsdb->rCache.uid = -1;
pTsdb->rCache.pTSchema = NULL;
TAOS_RETURN(code);
@ -1132,19 +1135,17 @@ static int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, SArray
(void)taosThreadMutexLock(&pTsdb->lruMutex);
for (int i = 0; i < num_keys; ++i) {
SLastUpdateCtx *updCtx = (SLastUpdateCtx *)taosArrayGet(updCtxArray, i);
int8_t lflag = updCtx->lflag;
SRowKey *pRowKey = &updCtx->tsdbRowKey.key;
SColVal *pColVal = &updCtx->colVal;
SLastUpdateCtx *updCtx = &((SLastUpdateCtx *)TARRAY_DATA(updCtxArray))[i];
int8_t lflag = updCtx->lflag;
SRowKey *pRowKey = &updCtx->tsdbRowKey.key;
SColVal *pColVal = &updCtx->colVal;
if (lflag == LFLAG_LAST && !COL_VAL_IS_VALUE(pColVal)) {
continue;
}
SLastKey *key = &(SLastKey){.lflag = lflag, .uid = uid, .cid = pColVal->cid};
size_t klen = ROCKS_KEY_LEN;
LRUHandle *h = taosLRUCacheLookup(pCache, key, klen);
LRUHandle *h = taosLRUCacheLookup(pCache, key, ROCKS_KEY_LEN);
if (h) {
SLastCol *pLastCol = (SLastCol *)taosLRUCacheValue(pCache, h);
if (pLastCol->cacheStatus != TSDB_LAST_CACHE_NO_CACHE) {
@ -1299,53 +1300,94 @@ _exit:
TAOS_RETURN(code);
}
void tsdbCacheInvalidateSchema(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, int32_t sver) {
SRocksCache *pRCache = &pTsdb->rCache;
if (!pRCache->pTSchema || sver <= pRCache->sver) return;
if (suid > 0 && suid == pRCache->suid) {
pRCache->sver = -1;
pRCache->suid = -1;
}
if (suid == 0 && uid == pRCache->uid) {
pRCache->sver = -1;
pRCache->uid = -1;
}
}
static int32_t tsdbUpdateSkm(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, int32_t sver) {
SRocksCache *pRCache = &pTsdb->rCache;
if (pRCache->pTSchema && sver == pRCache->sver) {
if (suid > 0 && suid == pRCache->suid) {
return 0;
}
if (suid == 0 && uid == pRCache->uid) {
return 0;
}
}
pRCache->suid = suid;
pRCache->uid = uid;
pRCache->sver = sver;
tDestroyTSchema(pRCache->pTSchema);
return metaGetTbTSchemaEx(pTsdb->pVnode->pMeta, suid, uid, -1, &pRCache->pTSchema);
}
int32_t tsdbCacheRowFormatUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, int64_t version, int32_t nRow,
SRow **aRow) {
int32_t code = 0, lino = 0;
// 1. prepare last
TSDBROW lRow = {.type = TSDBROW_ROW_FMT, .pTSRow = aRow[nRow - 1], .version = version};
TSDBROW lRow = {.type = TSDBROW_ROW_FMT, .pTSRow = aRow[nRow - 1], .version = version};
STSchema *pTSchema = NULL;
int32_t sver = TSDBROW_SVERSION(&lRow);
SArray *ctxArray = NULL;
SSHashObj *iColHash = NULL;
TAOS_CHECK_GOTO(metaGetTbTSchemaEx(pTsdb->pVnode->pMeta, suid, uid, sver, &pTSchema), &lino, _exit);
TAOS_CHECK_GOTO(tsdbUpdateSkm(pTsdb, suid, uid, sver), &lino, _exit);
pTSchema = pTsdb->rCache.pTSchema;
TSDBROW tRow = {.type = TSDBROW_ROW_FMT, .version = version};
int32_t nCol = pTSchema->numOfCols;
ctxArray = taosArrayInit(nCol, sizeof(SLastUpdateCtx));
iColHash = tSimpleHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT));
ctxArray = taosArrayInit(nCol * 2, sizeof(SLastUpdateCtx));
if (ctxArray == NULL) {
TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, &lino, _exit);
}
// 1. prepare by lrow
STsdbRowKey tsdbRowKey = {0};
tsdbRowGetKey(&lRow, &tsdbRowKey);
STSDBRowIter iter = {0};
code = tsdbRowIterOpen(&iter, &lRow, pTSchema);
if (code != TSDB_CODE_SUCCESS) {
tsdbError("vgId:%d, %s tsdbRowIterOpen failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__,
tstrerror(code));
TAOS_CHECK_GOTO(code, &lino, _exit);
}
TAOS_CHECK_GOTO(tsdbRowIterOpen(&iter, &lRow, pTSchema), &lino, _exit);
int32_t iCol = 0;
for (SColVal *pColVal = tsdbRowIterNext(&iter); pColVal && iCol < nCol; pColVal = tsdbRowIterNext(&iter), iCol++) {
SLastUpdateCtx updateCtx = {.lflag = LFLAG_LAST_ROW, .tsdbRowKey = tsdbRowKey, .colVal = *pColVal};
if (!taosArrayPush(ctxArray, &updateCtx)) {
tsdbRowClose(&iter);
TAOS_CHECK_GOTO(terrno, &lino, _exit);
}
if (!COL_VAL_IS_VALUE(pColVal)) {
if (COL_VAL_IS_VALUE(pColVal)) {
updateCtx.lflag = LFLAG_LAST;
if (!taosArrayPush(ctxArray, &updateCtx)) {
tsdbRowClose(&iter);
TAOS_CHECK_GOTO(terrno, &lino, _exit);
}
} else {
if (!iColHash) {
iColHash = tSimpleHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT));
if (iColHash == NULL) {
tsdbRowClose(&iter);
TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, &lino, _exit);
}
}
if (tSimpleHashPut(iColHash, &iCol, sizeof(iCol), NULL, 0)) {
tsdbRowClose(&iter);
TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, &lino, _exit);
}
continue;
}
updateCtx.lflag = LFLAG_LAST;
if (!taosArrayPush(ctxArray, &updateCtx)) {
TAOS_CHECK_GOTO(terrno, &lino, _exit);
}
}
tsdbRowClose(&iter);
@ -1390,7 +1432,10 @@ int32_t tsdbCacheRowFormatUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, int6
}
_exit:
taosMemoryFreeClear(pTSchema);
if (code) {
tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code));
}
taosArrayDestroy(ctxArray);
tSimpleHashCleanup(iColHash);
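One detail of the rewrite above: `iColHash` is now allocated lazily, only when a row actually carries a non-value column, so the common all-values path never pays for the hash. The pattern in isolation, with a toy dynamic set and illustrative names:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct { int *vals; int n, cap; } IntSet;

/* Allocate the container on first use; callers that never hit the slow
 * path leave *pp == NULL and skip the allocation entirely. */
static int set_put(IntSet **pp, int v) {
    if (*pp == NULL) {                        /* lazy init on first use */
        *pp = calloc(1, sizeof(IntSet));
        if (*pp == NULL) return -1;
    }
    IntSet *s = *pp;
    if (s->n == s->cap) {
        int cap = s->cap ? s->cap * 2 : 4;
        int *v2 = realloc(s->vals, cap * sizeof(int));
        if (v2 == NULL) return -1;
        s->vals = v2; s->cap = cap;
    }
    s->vals[s->n++] = v;
    return 0;
}
```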

View File

@ -2018,7 +2018,12 @@ static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
cleanupAfterGroupResultGen(pMiaInfo, pRes);
code = doFilter(pRes, pOperator->exprSupp.pFilterInfo, NULL);
QUERY_CHECK_CODE(code, lino, _end);
break;
if (pRes->info.rows == 0) {
// After filtering, the result of the last group is empty, so continue processing the next group
continue;
} else {
break;
}
} else {
// continue
pRes->info.id.groupId = pMiaInfo->groupId;
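The fix above keeps pulling groups when the filter empties the final group's result, instead of breaking out of the loop with an empty block. The control flow reduced to a toy loop, where `group_rows[i]` is the row count that survives filtering for group `i`:

```c
#include <assert.h>

/* Return the next non-empty filtered group result, or -1 when all
 * groups are exhausted. Mirrors the continue-on-empty fix above. */
static int next_nonempty(const int *group_rows, int ngroups, int *pos) {
    while (*pos < ngroups) {
        int rows = group_rows[(*pos)++];   /* rows surviving the filter */
        if (rows == 0) {
            continue;                      /* empty after filter: try next group */
        }
        return rows;                       /* non-empty result: emit it */
    }
    return -1;                             /* all groups exhausted */
}
```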

View File

@ -2,23 +2,22 @@
MESSAGE(STATUS "build parser unit test")
# IF(NOT TD_DARWIN)
# # GoogleTest requires at least C++11
# SET(CMAKE_CXX_STANDARD 11)
# AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} SOURCE_LIST)
#
# ADD_EXECUTABLE(executorTest ${SOURCE_LIST})
# TARGET_LINK_LIBRARIES(
# executorTest
# PRIVATE os util common transport gtest taos_static qcom executor function planner scalar nodes vnode
# )
#
# TARGET_INCLUDE_DIRECTORIES(
# executorTest
# PUBLIC "${TD_SOURCE_DIR}/include/libs/executor/"
# PRIVATE "${TD_SOURCE_DIR}/source/libs/executor/inc"
# )
# # GoogleTest requires at least C++11
# SET(CMAKE_CXX_STANDARD 11)
# AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} SOURCE_LIST)
#
# ADD_EXECUTABLE(executorTest ${SOURCE_LIST})
# TARGET_LINK_LIBRARIES(
# executorTest
# PRIVATE os util common transport gtest taos_static qcom executor function planner scalar nodes vnode
# )
#
# TARGET_INCLUDE_DIRECTORIES(
# executorTest
# PUBLIC "${TD_SOURCE_DIR}/include/libs/executor/"
# PRIVATE "${TD_SOURCE_DIR}/source/libs/executor/inc"
# )
# ENDIF ()
SET(CMAKE_CXX_STANDARD 11)
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} SOURCE_LIST)

View File

@ -7,160 +7,159 @@ IF(NOT TD_DARWIN)
add_executable(idxFstUtilUT "")
target_sources(idxTest
PRIVATE
"indexTests.cc"
PRIVATE
"indexTests.cc"
)
target_sources(idxFstTest
PRIVATE
"fstTest.cc"
PRIVATE
"fstTest.cc"
)
target_sources(idxFstUT
PRIVATE
"fstUT.cc"
PRIVATE
"fstUT.cc"
)
target_sources(idxUtilUT
PRIVATE
"utilUT.cc"
PRIVATE
"utilUT.cc"
)
target_sources(idxJsonUT
PRIVATE
"jsonUT.cc"
PRIVATE
"jsonUT.cc"
)
target_sources(idxFstUtilUT
PRIVATE
"fstUtilUT.cc"
PRIVATE
"fstUtilUT.cc"
)
target_include_directories(idxTest
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories(idxFstTest
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories (idxTest
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories (idxFstTest
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_sources(idxJsonUT
PRIVATE
"jsonUT.cc"
PRIVATE
"jsonUT.cc"
)
target_include_directories (idxTest
target_include_directories(idxTest
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories (idxFstTest
)
target_include_directories(idxFstTest
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
)
target_include_directories (idxFstUT
target_include_directories(idxFstUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
)
target_include_directories (idxUtilUT
target_include_directories(idxUtilUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
)
target_include_directories (idxJsonUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories (idxFstUtilUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories (idxJsonUT
target_include_directories(idxJsonUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
)
target_include_directories(idxFstUtilUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories(idxJsonUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/index"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_link_libraries (idxTest
os
target_link_libraries(idxTest
os
util
common
gtest_main
index
)
target_link_libraries (idxFstTest
os
target_link_libraries(idxFstTest
os
util
common
gtest_main
index
)
target_link_libraries (idxFstUT
os
target_link_libraries(idxFstUT
os
util
common
gtest_main
index
)
target_link_libraries (idxTest
os
target_link_libraries(idxTest
os
util
common
gtest_main
index
)
target_link_libraries (idxFstTest
os
target_link_libraries(idxFstTest
os
util
common
gtest_main
index
)
target_link_libraries (idxFstUT
os
target_link_libraries(idxFstUT
os
util
common
gtest_main
index
)
target_link_libraries (idxUtilUT
os
target_link_libraries(idxUtilUT
os
util
common
gtest_main
index
)
target_link_libraries (idxJsonUT
os
target_link_libraries(idxJsonUT
os
util
common
gtest_main
index
)
target_link_libraries (idxFstUtilUT
os
target_link_libraries(idxFstUtilUT
os
util
common
gtest_main
index
)
add_test(
NAME idxJsonUT
COMMAND idxJsonUT
COMMAND idxJsonUT
)
add_test(
NAME idxFstUtilUT
COMMAND idxFstUtilUT
NAME idxFstUtilUT
COMMAND idxFstUtilUT
)
add_test(
@ -168,15 +167,15 @@ IF(NOT TD_DARWIN)
COMMAND idxTest
)
add_test(
NAME idxUtilUT
COMMAND idxUtilUT
NAME idxUtilUT
COMMAND idxUtilUT
)
add_test(
NAME idxFstUT
COMMAND idxFstUT
NAME idxFstUT
COMMAND idxFstUT
)
add_test(
NAME idxFstTest
COMMAND idxFstTest
COMMAND idxFstTest
)
ENDIF ()
ENDIF()

View File

@ -468,35 +468,38 @@ end:
int32_t smlInitHandle(SQuery** query) {
*query = NULL;
SQuery* pQuery = NULL;
SVnodeModifyOpStmt* stmt = NULL;
int32_t code = nodesMakeNode(QUERY_NODE_QUERY, (SNode**)&pQuery);
if (NULL == pQuery) {
uError("create pQuery error");
return code;
if (code != 0) {
uError("SML create pQuery error");
goto END;
}
pQuery->execMode = QUERY_EXEC_MODE_SCHEDULE;
pQuery->haveResultSet = false;
pQuery->msgType = TDMT_VND_SUBMIT;
SVnodeModifyOpStmt* stmt = NULL;
code = nodesMakeNode(QUERY_NODE_VNODE_MODIFY_STMT, (SNode**)&stmt);
if (NULL == stmt) {
uError("create SVnodeModifyOpStmt error");
qDestroyQuery(pQuery);
return code;
if (code != 0) {
uError("SML create SVnodeModifyOpStmt error");
goto END;
}
stmt->pTableBlockHashObj = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
if (stmt->pTableBlockHashObj == NULL){
uError("create pTableBlockHashObj error");
qDestroyQuery(pQuery);
nodesDestroyNode((SNode*)stmt);
return terrno;
uError("SML create pTableBlockHashObj error");
code = terrno;
goto END;
}
stmt->freeHashFunc = insDestroyTableDataCxtHashMap;
stmt->freeArrayFunc = insDestroyVgroupDataCxtList;
pQuery->pRoot = (SNode*)stmt;
*query = pQuery;
return code;
return TSDB_CODE_SUCCESS;
END:
nodesDestroyNode((SNode*)stmt);
qDestroyQuery(pQuery);
return code;
}
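`smlInitHandle` was rewritten from per-branch cleanup (`qDestroyQuery` repeated in every error arm) to a single `END` label that tears down whatever was built so far. A minimal sketch of that single-exit style, with a hypothetical two-step allocator where `fail_step` just forces a failure for illustration:

```c
#include <assert.h>
#include <stdlib.h>

/* Single-exit cleanup: every failure jumps to END, which frees whatever
 * has been allocated; free(NULL) is a no-op, so no per-branch bookkeeping. */
static int build_pair(int fail_step, int **a, int **b) {
    int code = 0;
    int *x = NULL, *y = NULL;
    x = malloc(sizeof(int));
    if (x == NULL || fail_step == 1) { code = -1; goto END; }
    y = malloc(sizeof(int));
    if (y == NULL || fail_step == 2) { code = -1; goto END; }
    *a = x; *b = y;                  /* success: ownership passes to caller */
    return 0;
END:
    free(x);
    free(y);
    return code;
}
```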
int32_t smlBuildOutput(SQuery* handle, SHashObj* pVgHash) {

View File

@ -147,7 +147,7 @@ int16_t insFindCol(SToken* pColname, int16_t start, int16_t end, SSchema* pSchem
}
int32_t insBuildCreateTbReq(SVCreateTbReq* pTbReq, const char* tname, STag* pTag, int64_t suid, const char* sname,
SArray* tagName, uint8_t tagNum, int32_t ttl) {
SArray* tagName, uint8_t tagNum, int32_t ttl) {
pTbReq->type = TD_CHILD_TABLE;
pTbReq->ctb.pTag = (uint8_t*)pTag;
pTbReq->name = taosStrdup(tname);
@ -174,7 +174,7 @@ static void initBoundCols(int32_t ncols, int16_t* pBoundCols) {
static int32_t initColValues(STableMeta* pTableMeta, SArray* pValues) {
SSchema* pSchemas = getTableColumnSchema(pTableMeta);
int32_t code = 0;
int32_t code = 0;
for (int32_t i = 0; i < pTableMeta->tableInfo.numOfColumns; ++i) {
SColVal val = COL_VAL_NONE(pSchemas[i].colId, pSchemas[i].type);
if (NULL == taosArrayPush(pValues, &val)) {
@ -886,19 +886,77 @@ static bool findFileds(SSchema* pSchema, TAOS_FIELD* fields, int numFields) {
return false;
}
int32_t checkSchema(SSchema* pColSchema, int8_t* fields, char* errstr, int32_t errstrLen) {
if (*fields != pColSchema->type) {
if (errstr != NULL)
snprintf(errstr, errstrLen, "column type not equal, name:%s, schema type:%s, data type:%s", pColSchema->name,
tDataTypes[pColSchema->type].name, tDataTypes[*fields].name);
return TSDB_CODE_INVALID_PARA;
}
if (IS_VAR_DATA_TYPE(pColSchema->type) && *(int32_t*)(fields + sizeof(int8_t)) > pColSchema->bytes) {
if (errstr != NULL)
snprintf(errstr, errstrLen,
"column var data bytes error, name:%s, schema type:%s, bytes:%d, data type:%s, bytes:%d",
pColSchema->name, tDataTypes[pColSchema->type].name, pColSchema->bytes, tDataTypes[*fields].name,
*(int32_t*)(fields + sizeof(int8_t)));
return TSDB_CODE_INVALID_PARA;
}
if (!IS_VAR_DATA_TYPE(pColSchema->type) && *(int32_t*)(fields + sizeof(int8_t)) != pColSchema->bytes) {
if (errstr != NULL)
snprintf(errstr, errstrLen,
"column normal data bytes not equal, name:%s, schema type:%s, bytes:%d, data type:%s, bytes:%d",
pColSchema->name, tDataTypes[pColSchema->type].name, pColSchema->bytes, tDataTypes[*fields].name,
*(int32_t*)(fields + sizeof(int8_t)));
return TSDB_CODE_INVALID_PARA;
}
return 0;
}
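`checkSchema` reads each field descriptor as a packed `int8` type tag followed by an `int32` byte width, accepting smaller widths for var-length types and demanding an exact match for fixed-length ones. A standalone sketch of that layout check (simplified; `is_var` replaces the `IS_VAR_DATA_TYPE` macro):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* field points at a packed descriptor: [int8 type][int32 bytes].
 * memcpy avoids the unaligned int32 read the packed layout implies. */
static int check_field(uint8_t schema_type, int32_t schema_bytes, int is_var,
                       const uint8_t *field) {
    int32_t bytes;
    memcpy(&bytes, field + sizeof(int8_t), sizeof(int32_t));
    if (field[0] != schema_type) return -1;            /* type mismatch */
    if (is_var ? (bytes > schema_bytes)                /* var: may not exceed */
               : (bytes != schema_bytes)) return -1;   /* fixed: exact match */
    return 0;
}
```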
#define PRCESS_DATA(i, j) \
ret = checkSchema(pColSchema, fields, errstr, errstrLen); \
if (ret != 0) { \
goto end; \
} \
\
if (pColSchema->colId == PRIMARYKEY_TIMESTAMP_COL_ID) { \
hasTs = true; \
} \
\
int8_t* offset = pStart; \
if (IS_VAR_DATA_TYPE(pColSchema->type)) { \
pStart += numOfRows * sizeof(int32_t); \
} else { \
pStart += BitmapLen(numOfRows); \
} \
char* pData = pStart; \
\
SColData* pCol = taosArrayGet(pTableCxt->pData->aCol, j); \
ret = tColDataAddValueByDataBlock(pCol, pColSchema->type, pColSchema->bytes, numOfRows, offset, pData); \
if (ret != 0) { \
goto end; \
} \
fields += sizeof(int8_t) + sizeof(int32_t); \
if (needChangeLength && version == BLOCK_VERSION_1) { \
pStart += htonl(colLength[i]); \
} else { \
pStart += colLength[i]; \
} \
boundInfo->pColIndex[j] = -1;
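Inside `PRCESS_DATA`, the read cursor first skips a per-column header before the value bytes: var-length columns carry one `int32` offset per row, fixed-length columns a NULL bitmap (assuming `BitmapLen(n)` is `(n + 7) / 8`). That arithmetic in isolation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BITMAP_LEN(n) (((n) + 7) >> 3)   /* assumed shape of BitmapLen */

/* Bytes to skip before a column's value payload begins. */
static size_t col_header_len(int is_var, int rows) {
    return is_var ? (size_t)rows * sizeof(int32_t)  /* per-row offsets */
                  : (size_t)BITMAP_LEN(rows);       /* NULL bitmap */
}
```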
int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreateTbReq* pCreateTb, void* tFields,
int numFields, bool needChangeLength, char* errstr, int32_t errstrLen, bool raw) {
int ret = 0;
if(data == NULL) {
if (data == NULL) {
uError("rawBlockBindData, data is NULL");
return TSDB_CODE_APP_ERROR;
}
void* tmp =
taosHashGet(((SVnodeModifyOpStmt*)(query->pRoot))->pTableBlockHashObj, &pTableMeta->uid, sizeof(pTableMeta->uid));
SVCreateTbReq *pCreateReqTmp = NULL;
if (tmp == NULL && pCreateTb != NULL){
SVCreateTbReq* pCreateReqTmp = NULL;
if (tmp == NULL && pCreateTb != NULL) {
ret = cloneSVreateTbReq(pCreateTb, &pCreateReqTmp);
if (ret != TSDB_CODE_SUCCESS){
if (ret != TSDB_CODE_SUCCESS) {
uError("cloneSVreateTbReq error");
goto end;
}
@ -906,7 +964,7 @@ int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreate
STableDataCxt* pTableCxt = NULL;
ret = insGetTableDataCxt(((SVnodeModifyOpStmt*)(query->pRoot))->pTableBlockHashObj, &pTableMeta->uid,
sizeof(pTableMeta->uid), pTableMeta, &pCreateReqTmp, &pTableCxt, true, false);
sizeof(pTableMeta->uid), pTableMeta, &pCreateReqTmp, &pTableCxt, true, false);
if (pCreateReqTmp != NULL) {
tdDestroySVCreateTbReq(pCreateReqTmp);
taosMemoryFree(pCreateReqTmp);
@ -963,121 +1021,48 @@ int rawBlockBindData(SQuery* query, STableMeta* pTableMeta, void* data, SVCreate
ret = TSDB_CODE_INVALID_PARA;
goto end;
}
// if (tFields != NULL && numFields > boundInfo->numOfBound) {
// if (errstr != NULL) snprintf(errstr, errstrLen, "numFields:%d bigger than num of bound cols:%d", numFields, boundInfo->numOfBound);
// ret = TSDB_CODE_INVALID_PARA;
// goto end;
// }
if (tFields == NULL && numOfCols != boundInfo->numOfBound) {
if (errstr != NULL) snprintf(errstr, errstrLen, "numOfCols:%d not equal to num of bound cols:%d", numOfCols, boundInfo->numOfBound);
ret = TSDB_CODE_INVALID_PARA;
goto end;
}
bool hasTs = false;
if (tFields == NULL) {
for (int j = 0; j < boundInfo->numOfBound; j++) {
SSchema* pColSchema = &pSchema[j];
SColData* pCol = taosArrayGet(pTableCxt->pData->aCol, j);
if (*fields != pColSchema->type && *(int32_t*)(fields + sizeof(int8_t)) != pColSchema->bytes) {
if (errstr != NULL)
snprintf(errstr, errstrLen,
"column type or bytes not equal, name:%s, schema type:%s, bytes:%d, data type:%s, bytes:%d",
pColSchema->name, tDataTypes[pColSchema->type].name, pColSchema->bytes, tDataTypes[*fields].name,
*(int32_t*)(fields + sizeof(int8_t)));
ret = TSDB_CODE_INVALID_PARA;
goto end;
}
int8_t* offset = pStart;
if (IS_VAR_DATA_TYPE(pColSchema->type)) {
pStart += numOfRows * sizeof(int32_t);
} else {
pStart += BitmapLen(numOfRows);
}
char* pData = pStart;
ret = tColDataAddValueByDataBlock(pCol, pColSchema->type, pColSchema->bytes, numOfRows, offset, pData);
if (ret != 0) {
goto end;
}
fields += sizeof(int8_t) + sizeof(int32_t);
if (needChangeLength && version == BLOCK_VERSION_1) {
pStart += htonl(colLength[j]);
} else {
pStart += colLength[j];
}
int32_t len = TMIN(numOfCols, boundInfo->numOfBound);
for (int j = 0; j < len; j++) {
SSchema* pColSchema = &pSchema[j];
PRCESS_DATA(j, j)
}
} else {
bool hasTs = false;
for (int i = 0; i < numFields; i++) {
for (int j = 0; j < boundInfo->numOfBound; j++) {
SSchema* pColSchema = &pSchema[j];
char* fieldName = NULL;
char* fieldName = NULL;
if (raw) {
fieldName = ((SSchemaWrapper*)tFields)->pSchema[i].name;
} else {
fieldName = ((TAOS_FIELD*)tFields)[i].name;
}
if (strcmp(pColSchema->name, fieldName) == 0) {
if (*fields != pColSchema->type && *(int32_t*)(fields + sizeof(int8_t)) != pColSchema->bytes) {
if (errstr != NULL)
snprintf(errstr, errstrLen,
"column type or bytes not equal, name:%s, schema type:%s, bytes:%d, data type:%s, bytes:%d",
pColSchema->name, tDataTypes[pColSchema->type].name, pColSchema->bytes, tDataTypes[*fields].name,
*(int32_t*)(fields + sizeof(int8_t)));
ret = TSDB_CODE_INVALID_PARA;
goto end;
}
if (pColSchema->colId == PRIMARYKEY_TIMESTAMP_COL_ID) {
hasTs = true;
}
int8_t* offset = pStart;
if (IS_VAR_DATA_TYPE(pColSchema->type)) {
pStart += numOfRows * sizeof(int32_t);
} else {
pStart += BitmapLen(numOfRows);
// for(int k = 0; k < numOfRows; k++) {
// if(!colDataIsNull_f(offset, k) && pColSchema->type == TSDB_DATA_TYPE_INT){
// printf("colName:%s,val:%d", fieldName, *(int32_t*)(pStart + k * sizeof(int32_t)));
// }
// }
}
char* pData = pStart;
SColData* pCol = taosArrayGet(pTableCxt->pData->aCol, j);
ret = tColDataAddValueByDataBlock(pCol, pColSchema->type, pColSchema->bytes, numOfRows, offset, pData);
if (ret != 0) {
goto end;
}
fields += sizeof(int8_t) + sizeof(int32_t);
if (needChangeLength && version == BLOCK_VERSION_1) {
pStart += htonl(colLength[i]);
} else {
pStart += colLength[i];
}
boundInfo->pColIndex[j] = -1;
PRCESS_DATA(i, j)
break;
}
}
}
}
if (!hasTs) {
if (errstr != NULL) snprintf(errstr, errstrLen, "timestamp column(primary key) not found in raw data");
ret = TSDB_CODE_INVALID_PARA;
goto end;
}
if (!hasTs) {
if (errstr != NULL) snprintf(errstr, errstrLen, "timestamp column(primary key) not found in raw data");
ret = TSDB_CODE_INVALID_PARA;
goto end;
}
for (int c = 0; c < boundInfo->numOfBound; ++c) {
if (boundInfo->pColIndex[c] != -1) {
SColData* pCol = taosArrayGet(pTableCxt->pData->aCol, c);
ret = tColDataAddValueByDataBlock(pCol, 0, 0, numOfRows, NULL, NULL);
if (ret != 0) {
goto end;
}
} else {
boundInfo->pColIndex[c] = c; // restore for next block
// process NULL data
for (int c = 0; c < boundInfo->numOfBound; ++c) {
if (boundInfo->pColIndex[c] != -1) {
SColData* pCol = taosArrayGet(pTableCxt->pData->aCol, c);
ret = tColDataAddValueByDataBlock(pCol, 0, 0, numOfRows, NULL, NULL);
if (ret != 0) {
goto end;
}
} else {
boundInfo->pColIndex[c] = c; // restore for next block
}
}

View File

@ -3757,7 +3757,7 @@ static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
bool partionByTbname = hasTbnameFunction(pSelect->pPartitionByList);
FOREACH(pPartKey, pSelect->pPartitionByList) {
if (nodesEqualNode(pPartKey, *pNode)) {
return pCxt->currClause == SQL_CLAUSE_HAVING ? DEAL_RES_IGNORE_CHILD : rewriteExprToGroupKeyFunc(pCxt, pNode);
return pSelect->hasAggFuncs ? rewriteExprToGroupKeyFunc(pCxt, pNode) : DEAL_RES_IGNORE_CHILD;
}
if ((partionByTbname) && QUERY_NODE_COLUMN == nodeType(*pNode) &&
((SColumnNode*)*pNode)->colType == COLUMN_TYPE_TAG) {

View File

@ -1,5 +1,6 @@
MESSAGE(STATUS "build qworker unit test")
IF(NOT TD_DARWIN)
# GoogleTest requires at least C++11
SET(CMAKE_CXX_STANDARD 11)

View File

@ -887,7 +887,7 @@ int32_t streamBuildAndSendDropTaskMsg(SMsgCb* pMsgCb, int32_t vgId, SStreamTaskI
}
int32_t streamSendChkptReportMsg(SStreamTask* pTask, SCheckpointInfo* pCheckpointInfo, int8_t dropRelHTask) {
int32_t code;
int32_t code = 0;
int32_t tlen = 0;
int32_t vgId = pTask->pMeta->vgId;
const char* id = pTask->id.idStr;

View File

@ -353,6 +353,7 @@ typedef struct {
queue node;
void (*freeFunc)(void* arg);
int32_t size;
int8_t inited;
} STransQueue;
/*

View File

@ -127,10 +127,12 @@ typedef struct {
typedef struct SCliReq {
SReqCtx* ctx;
queue q;
queue sendQ;
STransMsgType type;
uint64_t st;
int64_t seq;
  int32_t sent;  //(0: not sent, 1: already sent)
int8_t inSendQ;
STransMsg msg;
int8_t inRetry;
@ -274,6 +276,8 @@ static FORCE_INLINE void destroyReqAndAhanlde(void* cmsg);
static FORCE_INLINE int cliRBChoseIdx(STrans* pInst);
static FORCE_INLINE void destroyReqCtx(SReqCtx* ctx);
static FORCE_INLINE void removeReqFromSendQ(SCliReq* pReq);
static int32_t cliHandleState_mayUpdateState(SCliConn* pConn, SCliReq* pReq);
static int32_t cliHandleState_mayHandleReleaseResp(SCliConn* conn, STransMsgHead* pHead);
static int32_t cliHandleState_mayCreateAhandle(SCliConn* conn, STransMsgHead* pHead, STransMsg* pResp);
@ -453,6 +457,7 @@ static bool filteBySeq(void* key, void* arg) {
SFiterArg* targ = arg;
SCliReq* pReq = QUEUE_DATA(key, SCliReq, q);
if (pReq->seq == targ->seq && pReq->msg.msgType + 1 == targ->msgType) {
removeReqFromSendQ(pReq);
return true;
} else {
return false;
@ -539,6 +544,7 @@ bool filterByQid(void* key, void* arg) {
SCliReq* pReq = QUEUE_DATA(key, SCliReq, q);
if (pReq->msg.info.qId == *qid) {
removeReqFromSendQ(pReq);
return true;
} else {
return false;
@ -600,7 +606,7 @@ int32_t cliHandleState_mayHandleReleaseResp(SCliConn* conn, STransMsgHead* pHead
queue* el = QUEUE_HEAD(&set);
QUEUE_REMOVE(el);
SCliReq* pReq = QUEUE_DATA(el, SCliReq, q);
removeReqFromSendQ(pReq);
STraceId* trace = &pReq->msg.info.traceId;
tGDebug("start to free msg %p", pReq);
destroyReqWrapper(pReq, pThrd);
@ -700,6 +706,7 @@ void cliHandleResp(SCliConn* conn) {
tstrerror(code));
}
}
removeReqFromSendQ(pReq);
code = cliBuildRespFromCont(pReq, &resp, pHead);
STraceId* trace = &resp.info.traceId;
@ -905,6 +912,10 @@ static void addConnToPool(void* pool, SCliConn* conn) {
}
SCliThrd* thrd = conn->hostThrd;
if (thrd->quit == true) {
return;
}
cliResetConnTimer(conn);
if (conn->list == NULL && conn->dstAddr != NULL) {
conn->list = taosHashGet((SHashObj*)pool, conn->dstAddr, strlen(conn->dstAddr));
@ -1092,6 +1103,7 @@ _failed:
transQueueDestroy(&conn->reqsToSend);
transQueueDestroy(&conn->reqsSentOut);
taosMemoryFree(conn->dstAddr);
taosMemoryFree(conn->ipStr);
}
tError("failed to create conn, code:%d", code);
taosMemoryFree(conn);
@ -1216,6 +1228,7 @@ static FORCE_INLINE void destroyReqInQueue(SCliConn* conn, queue* set, int32_t c
QUEUE_REMOVE(el);
SCliReq* pReq = QUEUE_DATA(el, SCliReq, q);
removeReqFromSendQ(pReq);
notifyAndDestroyReq(conn, pReq, code);
}
}
@ -1246,8 +1259,8 @@ static void cliHandleException(SCliConn* conn) {
}
cliDestroyAllQidFromThrd(conn);
QUEUE_REMOVE(&conn->q);
if (conn->list) {
if (pThrd->quit == false && conn->list) {
QUEUE_REMOVE(&conn->q);
conn->list->totalSize -= 1;
conn->list = NULL;
}
@ -1273,7 +1286,8 @@ static void cliHandleException(SCliConn* conn) {
bool filterToRmReq(void* h, void* arg) {
queue* el = h;
SCliReq* pReq = QUEUE_DATA(el, SCliReq, q);
if (pReq->sent == 1 && REQUEST_NO_RESP(&pReq->msg)) {
if (pReq->sent == 1 && pReq->inSendQ == 0 && REQUEST_NO_RESP(&pReq->msg)) {
removeReqFromSendQ(pReq);
return true;
}
return false;
@ -1300,12 +1314,18 @@ static void cliBatchSendCb(uv_write_t* req, int status) {
SCliThrd* pThrd = conn->hostThrd;
STrans* pInst = pThrd->pInst;
while (!QUEUE_IS_EMPTY(&wrapper->node)) {
queue* h = QUEUE_HEAD(&wrapper->node);
SCliReq* pReq = QUEUE_DATA(h, SCliReq, sendQ);
removeReqFromSendQ(pReq);
}
freeWReqToWQ(&conn->wq, wrapper);
int32_t ref = transUnrefCliHandle(conn);
if (ref <= 0) {
return;
}
cliConnRmReqs(conn);
if (status != 0) {
tDebug("%s conn %p failed to send msg since %s", CONN_GET_INST_LABEL(conn), conn, uv_err_name(status));
@ -1340,6 +1360,9 @@ bool cliConnMayAddUserInfo(SCliConn* pConn, STransMsgHead** ppHead, int32_t* msg
}
STransMsgHead* pHead = *ppHead;
STransMsgHead* tHead = taosMemoryCalloc(1, *msgLen + sizeof(pInst->user));
if (tHead == NULL) {
return false;
}
memcpy((char*)tHead, (char*)pHead, TRANS_MSG_OVERHEAD);
memcpy((char*)tHead + TRANS_MSG_OVERHEAD, pInst->user, sizeof(pInst->user));
@ -1398,6 +1421,10 @@ int32_t cliBatchSend(SCliConn* pConn, int8_t direct) {
int j = 0;
int32_t batchLimit = 64;
queue reqToSend;
QUEUE_INIT(&reqToSend);
while (!transQueueEmpty(&pConn->reqsToSend)) {
queue* h = transQueuePop(&pConn->reqsToSend);
SCliReq* pCliMsg = QUEUE_DATA(h, SCliReq, q);
@ -1422,6 +1449,10 @@ int32_t cliBatchSend(SCliConn* pConn, int8_t direct) {
if (cliConnMayAddUserInfo(pConn, &pHead, &msgLen)) {
content = transContFromHead(pHead);
contLen = transContLenFromMsg(msgLen);
} else {
if (pConn->userInited == 0) {
return terrno;
}
}
if (pHead->comp == 0) {
pHead->noResp = REQUEST_NO_RESP(pReq) ? 1 : 0;
@ -1447,30 +1478,51 @@ int32_t cliBatchSend(SCliConn* pConn, int8_t direct) {
wb[j++] = uv_buf_init((char*)pHead, msgLen);
totalLen += msgLen;
pCliMsg->sent = 1;
pCliMsg->seq = pConn->seq;
pCliMsg->sent = 1;
STraceId* trace = &pCliMsg->msg.info.traceId;
tGDebug("%s conn %p %s is sent to %s, local info:%s, seq:%" PRId64 ", sid:%" PRId64 "", CONN_GET_INST_LABEL(pConn),
pConn, TMSG_INFO(pReq->msgType), pConn->dst, pConn->src, pConn->seq, pReq->info.qId);
transQueuePush(&pConn->reqsSentOut, &pCliMsg->q);
QUEUE_INIT(&pCliMsg->sendQ);
QUEUE_PUSH(&reqToSend, &pCliMsg->sendQ);
pCliMsg->inSendQ = 1;
if (j >= batchLimit) {
break;
}
}
transRefCliHandle(pConn);
uv_write_t* req = allocWReqFromWQ(&pConn->wq, pConn);
if (req == NULL) {
tError("%s conn %p failed to send msg since %s", CONN_GET_INST_LABEL(pConn), pConn, tstrerror(terrno));
while (!QUEUE_IS_EMPTY(&reqToSend)) {
queue* h = QUEUE_HEAD(&reqToSend);
SCliReq* pCliMsg = QUEUE_DATA(h, SCliReq, sendQ);
removeReqFromSendQ(pCliMsg);
}
transRefCliHandle(pConn);
return terrno;
}
SWReqsWrapper* pWreq = req->data;
QUEUE_MOVE(&reqToSend, &pWreq->node);
tDebug("%s conn %p start to send msg, batch size:%d, len:%d", CONN_GET_INST_LABEL(pConn), pConn, j, totalLen);
int32_t ret = uv_write(req, (uv_stream_t*)pConn->stream, wb, j, cliBatchSendCb);
if (ret != 0) {
tError("%s conn %p failed to send msg since %s", CONN_GET_INST_LABEL(pConn), pConn, uv_err_name(ret));
while (!QUEUE_IS_EMPTY(&pWreq->node)) {
queue* h = QUEUE_HEAD(&pWreq->node);
SCliReq* pCliMsg = QUEUE_DATA(h, SCliReq, sendQ);
removeReqFromSendQ(pCliMsg);
}
freeWReqToWQ(&pConn->wq, req->data);
code = TSDB_CODE_THIRDPARTY_ERROR;
TAOS_UNUSED(transUnrefCliHandle(pConn));
@ -2182,11 +2234,21 @@ static void cliAsyncCb(uv_async_t* handle) {
if (pThrd->stopMsg != NULL) cliHandleQuit(pThrd, pThrd->stopMsg);
}
static FORCE_INLINE void removeReqFromSendQ(SCliReq* pReq) {
if (pReq == NULL || pReq->inSendQ == 0) {
return;
}
QUEUE_REMOVE(&pReq->sendQ);
pReq->inSendQ = 0;
}
static FORCE_INLINE void destroyReq(void* arg) {
SCliReq* pReq = arg;
if (pReq == NULL) {
return;
}
removeReqFromSendQ(pReq);
STraceId* trace = &pReq->msg.info.traceId;
tGDebug("free memory:%p, free ctx: %p", pReq, pReq->ctx);
@ -2961,6 +3023,7 @@ int32_t cliNotifyCb(SCliConn* pConn, SCliReq* pReq, STransMsg* pResp) {
STrans* pInst = pThrd->pInst;
if (pReq != NULL) {
removeReqFromSendQ(pReq);
if (pResp->code != TSDB_CODE_SUCCESS) {
if (cliMayRetry(pConn, pReq, pResp)) {
return TSDB_CODE_RPC_ASYNC_IN_PROCESS;
@ -3114,7 +3177,7 @@ static int32_t transInitMsg(void* pInstRef, const SEpSet* pEpSet, STransMsg* pRe
if (ctx != NULL) pCtx->userCtx = *ctx;
pCliReq = taosMemoryCalloc(1, sizeof(SCliReq));
if (pReq == NULL) {
if (pCliReq == NULL) {
TAOS_CHECK_GOTO(terrno, NULL, _exception);
}
@ -3183,6 +3246,7 @@ int32_t transSendRequestWithId(void* pInstRef, const SEpSet* pEpSet, STransMsg*
return TSDB_CODE_INVALID_PARA;
}
int32_t code = 0;
int8_t transIdInited = 0;
STrans* pInst = (STrans*)transAcquireExHandle(transGetInstMgt(), (int64_t)pInstRef);
if (pInst == NULL) {
@ -3200,6 +3264,7 @@ int32_t transSendRequestWithId(void* pInstRef, const SEpSet* pEpSet, STransMsg*
if (exh == NULL) {
TAOS_CHECK_GOTO(TSDB_CODE_RPC_MODULE_QUIT, NULL, _exception);
}
transIdInited = 1;
pReq->info.handle = (void*)(*transpointId);
pReq->info.qId = *transpointId;
@ -3216,9 +3281,6 @@ int32_t transSendRequestWithId(void* pInstRef, const SEpSet* pEpSet, STransMsg*
return (code == TSDB_CODE_RPC_ASYNC_MODULE_QUIT ? TSDB_CODE_RPC_MODULE_QUIT : code);
}
// if (pReq->msgType == TDMT_SCH_DROP_TASK) {
// TAOS_UNUSED(transReleaseCliHandle(pReq->info.handle));
// }
transReleaseExHandle(transGetRefMgt(), *transpointId);
transReleaseExHandle(transGetInstMgt(), (int64_t)pInstRef);
return 0;
@ -3226,6 +3288,7 @@ int32_t transSendRequestWithId(void* pInstRef, const SEpSet* pEpSet, STransMsg*
_exception:
transFreeMsg(pReq->pCont);
pReq->pCont = NULL;
if (transIdInited) transReleaseExHandle(transGetRefMgt(), *transpointId);
transReleaseExHandle(transGetInstMgt(), (int64_t)pInstRef);
tError("failed to send request since %s", tstrerror(code));
@ -3641,6 +3704,7 @@ bool filterTimeoutReq(void* key, void* arg) {
if (pReq->msg.info.qId == 0 && !REQUEST_NO_RESP(&pReq->msg) && pReq->ctx) {
int64_t elapse = ((st - pReq->st) / 1000000);
if (listArg && elapse >= listArg->pInst->readTimeout) {
removeReqFromSendQ(pReq);
return true;
} else {
return false;

View File

@ -423,6 +423,7 @@ int32_t transQueueInit(STransQueue* wq, void (*freeFunc)(void* arg)) {
QUEUE_INIT(&wq->node);
wq->freeFunc = (void (*)(void*))freeFunc;
wq->size = 0;
wq->inited = 1;
return 0;
}
void transQueuePush(STransQueue* q, void* arg) {
@ -497,6 +498,7 @@ void transQueueRemove(STransQueue* q, void* e) {
bool transQueueEmpty(STransQueue* q) { return q->size == 0 ? true : false; }
void transQueueClear(STransQueue* q) {
if (q->inited == 0) return;
while (!QUEUE_IS_EMPTY(&q->node)) {
queue* h = QUEUE_HEAD(&q->node);
QUEUE_REMOVE(h);

View File

@ -1289,8 +1289,8 @@ static FORCE_INLINE SSvrConn* createConn(void* hThrd) {
int32_t code = 0;
SWorkThrd* pThrd = hThrd;
int32_t lino;
SSvrConn* pConn = (SSvrConn*)taosMemoryCalloc(1, sizeof(SSvrConn));
int8_t wqInited = 0;
SSvrConn* pConn = (SSvrConn*)taosMemoryCalloc(1, sizeof(SSvrConn));
if (pConn == NULL) {
TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, &lino, _end);
}
@ -1340,6 +1340,7 @@ static FORCE_INLINE SSvrConn* createConn(void* hThrd) {
code = initWQ(&pConn->wq);
TAOS_CHECK_GOTO(code, &lino, _end);
wqInited = 1;
// init client handle
pConn->pTcp = (uv_tcp_t*)taosMemoryMalloc(sizeof(uv_tcp_t));
@ -1372,7 +1373,7 @@ _end:
transDestroyBuffer(&pConn->readBuf);
taosHashCleanup(pConn->pQTable);
taosMemoryFree(pConn->pTcp);
destroyWQ(&pConn->wq);
if (wqInited) destroyWQ(&pConn->wq);
taosMemoryFree(pConn->buf);
taosMemoryFree(pConn);
pConn = NULL;

View File

@ -1,5 +1,6 @@
add_executable(transportTest "")
add_executable(transUT "")
add_executable(transUT2 "")
add_executable(svrBench "")
add_executable(cliBench "")
add_executable(httpBench "")
@ -9,7 +10,11 @@ target_sources(transUT
"transUT.cpp"
)
target_sources(transportTest
target_sources(transUT2
PRIVATE
"transUT2.cpp"
)
target_sources(transportTest
PRIVATE
"transportTests.cpp"
)
@ -25,16 +30,16 @@ target_sources(cliBench
target_sources(httpBench
PRIVATE
"http_test.c"
)
)
target_include_directories(transportTest
target_include_directories(transportTest
PUBLIC
"${TD_SOURCE_DIR}/include/libs/transport"
"${TD_SOURCE_DIR}/include/libs/transport"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_link_libraries (transportTest
os
target_link_libraries(transportTest
os
util
common
gtest_main
@ -42,67 +47,81 @@ target_link_libraries (transportTest
function
)
target_link_libraries (transUT
os
target_link_libraries(transUT
os
util
common
gtest_main
transport
transport
)
target_include_directories(transUT
PUBLIC
"${TD_SOURCE_DIR}/include/libs/transport"
"${TD_SOURCE_DIR}/include/libs/transport"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_link_libraries(transUT2
os
util
common
gtest_main
transport
)
target_include_directories(transUT2
PUBLIC
"${TD_SOURCE_DIR}/include/libs/transport"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories(svrBench
PUBLIC
"${TD_SOURCE_DIR}/include/libs/transport"
"${TD_SOURCE_DIR}/include/libs/transport"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_link_libraries (svrBench
os
target_link_libraries(svrBench
os
util
common
gtest_main
transport
transport
)
target_include_directories(cliBench
PUBLIC
"${TD_SOURCE_DIR}/include/libs/transport"
"${TD_SOURCE_DIR}/include/libs/transport"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_include_directories(httpBench
PUBLIC
"${TD_SOURCE_DIR}/include/libs/transport"
"${TD_SOURCE_DIR}/include/libs/transport"
"${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
target_link_libraries (cliBench
os
target_link_libraries(cliBench
os
util
common
gtest_main
transport
transport
)
target_link_libraries(httpBench
os
os
util
common
gtest_main
transport
transport
)
add_test(
NAME transUT
COMMAND transUT
NAME transUT
COMMAND transUT
)
add_test(
NAME transUtilUt
NAME transUtilUt
COMMAND transportTest
)

View File

@ -53,8 +53,6 @@ static void processResponse(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet) {
tDebug("thread:%d, response is received, type:%d contLen:%d code:0x%x", pInfo->index, pMsg->msgType, pMsg->contLen,
pMsg->code);
if (pEpSet) pInfo->epSet = *pEpSet;
rpcFreeCont(pMsg->pCont);
tsem_post(&pInfo->rspSem);
}
@ -72,12 +70,12 @@ static void *sendRequest(void *param) {
rpcMsg.pCont = rpcMallocCont(pInfo->msgSize);
rpcMsg.contLen = pInfo->msgSize;
rpcMsg.info.ahandle = pInfo;
rpcMsg.info.noResp = 1;
rpcMsg.info.noResp = 0;
rpcMsg.msgType = 1;
tDebug("thread:%d, send request, contLen:%d num:%d", pInfo->index, pInfo->msgSize, pInfo->num);
rpcSendRequest(pInfo->pRpc, &pInfo->epSet, &rpcMsg, NULL);
if (pInfo->num % 20000 == 0) tInfo("thread:%d, %d requests have been sent", pInfo->index, pInfo->num);
// tsem_wait(&pInfo->rspSem);
tsem_wait(&pInfo->rspSem);
}
tDebug("thread:%d, it is over", pInfo->index);
@ -110,17 +108,15 @@ int main(int argc, char *argv[]) {
rpcInit.label = "APP";
rpcInit.numOfThreads = 1;
rpcInit.cfp = processResponse;
rpcInit.sessions = 100;
rpcInit.sessions = 1000;
rpcInit.idleTime = tsShellActivityTimer * 1000;
rpcInit.user = "michael";
rpcInit.connType = TAOS_CONN_CLIENT;
rpcInit.connLimitNum = 10;
rpcInit.connLimitLock = 1;
rpcInit.shareConnLimit = 16 * 1024;
rpcInit.shareConnLimit = tsShareConnLimit;
rpcInit.supportBatch = 1;
rpcDebugFlag = 135;
rpcInit.compressSize = -1;
rpcDebugFlag = 143;
for (int i = 1; i < argc; ++i) {
if (strcmp(argv[i], "-p") == 0 && i < argc - 1) {
} else if (strcmp(argv[i], "-i") == 0 && i < argc - 1) {
@ -139,6 +135,10 @@ int main(int argc, char *argv[]) {
} else if (strcmp(argv[i], "-u") == 0 && i < argc - 1) {
} else if (strcmp(argv[i], "-k") == 0 && i < argc - 1) {
} else if (strcmp(argv[i], "-spi") == 0 && i < argc - 1) {
} else if (strcmp(argv[i], "-l") == 0 && i < argc - 1) {
rpcInit.shareConnLimit = atoi(argv[++i]);
} else if (strcmp(argv[i], "-c") == 0 && i < argc - 1) {
rpcInit.compressSize = atoi(argv[++i]);
} else if (strcmp(argv[i], "-d") == 0 && i < argc - 1) {
rpcDebugFlag = atoi(argv[++i]);
} else {
@ -150,6 +150,8 @@ int main(int argc, char *argv[]) {
printf(" [-n requests]: number of requests per thread, default is:%d\n", numOfReqs);
printf(" [-u user]: user name for the connection, default is:%s\n", rpcInit.user);
printf(" [-d debugFlag]: debug flag, default:%d\n", rpcDebugFlag);
printf(" [-c compressSize]: compress size, default:%d\n", tsCompressMsgSize);
printf(" [-l shareConnLimit]: share conn limit, default:%d\n", tsShareConnLimit);
printf(" [-h help]: print out this help\n\n");
exit(0);
}
@ -168,18 +170,18 @@ int main(int argc, char *argv[]) {
int64_t now = taosGetTimestampUs();
SInfo *pInfo = (SInfo *)taosMemoryCalloc(1, sizeof(SInfo) * appThreads);
SInfo *p = pInfo;
SInfo **pInfo = (SInfo **)taosMemoryCalloc(1, sizeof(SInfo *) * appThreads);
for (int i = 0; i < appThreads; ++i) {
pInfo->index = i;
pInfo->epSet = epSet;
pInfo->numOfReqs = numOfReqs;
pInfo->msgSize = msgSize;
tsem_init(&pInfo->rspSem, 0, 0);
pInfo->pRpc = pRpc;
SInfo *p = taosMemoryCalloc(1, sizeof(SInfo));
p->index = i;
p->epSet = epSet;
p->numOfReqs = numOfReqs;
p->msgSize = msgSize;
tsem_init(&p->rspSem, 0, 0);
p->pRpc = pRpc;
pInfo[i] = p;
taosThreadCreate(&pInfo->thread, NULL, sendRequest, pInfo);
pInfo++;
taosThreadCreate(&p->thread, NULL, sendRequest, pInfo[i]);
}
do {
@ -192,12 +194,14 @@ int main(int argc, char *argv[]) {
tInfo("Performance: %.3f requests per second, msgSize:%d bytes", 1000.0 * numOfReqs * appThreads / usedTime, msgSize);
for (int i = 0; i < appThreads; i++) {
SInfo *pInfo = p;
taosThreadJoin(pInfo->thread, NULL);
p++;
SInfo *p = pInfo[i];
taosThreadJoin(p->thread, NULL);
taosMemoryFree(p);
}
int ch = getchar();
UNUSED(ch);
taosMemoryFree(pInfo);
// int ch = getchar();
// UNUSED(ch);
taosCloseLog();

View File

@ -76,23 +76,6 @@ void *processShellMsg(void *arg) {
for (int i = 0; i < numOfMsgs; ++i) {
taosGetQitem(qall, (void **)&pRpcMsg);
if (pDataFile != NULL) {
if (taosWriteFile(pDataFile, pRpcMsg->pCont, pRpcMsg->contLen) < 0) {
tInfo("failed to write data file, reason:%s", strerror(errno));
}
}
}
if (commit >= 2) {
num += numOfMsgs;
// if (taosFsync(pDataFile) < 0) {
// tInfo("failed to flush data to file, reason:%s", strerror(errno));
//}
if (num % 10000 == 0) {
tInfo("%d request have been written into disk", num);
}
}
taosResetQitems(qall);
@ -107,16 +90,7 @@ void *processShellMsg(void *arg) {
rpcMsg.code = 0;
rpcSendResponse(&rpcMsg);
void *handle = pRpcMsg->info.handle;
taosFreeQitem(pRpcMsg);
//{
// SRpcMsg nRpcMsg = {0};
// nRpcMsg.pCont = rpcMallocCont(msgSize);
// nRpcMsg.contLen = msgSize;
// nRpcMsg.info.handle = handle;
// nRpcMsg.code = TSDB_CODE_CTG_NOT_READY;
// rpcSendResponse(&nRpcMsg);
//}
}
taosUpdateItemSize(qinfo.queue, numOfMsgs);
@ -149,12 +123,13 @@ int main(int argc, char *argv[]) {
rpcInit.localPort = 7000;
memcpy(rpcInit.localFqdn, "localhost", strlen("localhost"));
rpcInit.label = "SER";
rpcInit.numOfThreads = 1;
rpcInit.numOfThreads = 10;
rpcInit.cfp = processRequestMsg;
rpcInit.idleTime = 2 * 1500;
taosVersionStrToInt(version, &(rpcInit.compatibilityVer));
rpcDebugFlag = 131;
rpcInit.compressSize = -1;
for (int i = 1; i < argc; ++i) {
if (strcmp(argv[i], "-p") == 0 && i < argc - 1) {
@ -205,8 +180,8 @@ int main(int argc, char *argv[]) {
if (pDataFile == NULL) tInfo("failed to open data file, reason:%s", strerror(errno));
}
int32_t numOfAthread = 5;
multiQ = taosMemoryMalloc(sizeof(numOfAthread));
int32_t numOfAthread = 1;
multiQ = taosMemoryMalloc(sizeof(MultiThreadQhandle));
multiQ->numOfThread = numOfAthread;
multiQ->qhandle = (STaosQueue **)taosMemoryMalloc(sizeof(STaosQueue *) * numOfAthread);
multiQ->qset = (STaosQset **)taosMemoryMalloc(sizeof(STaosQset *) * numOfAthread);
@ -221,11 +196,6 @@ int main(int argc, char *argv[]) {
threads[i].idx = i;
taosThreadCreate(&(threads[i].thread), NULL, processShellMsg, (void *)&threads[i]);
}
// qhandle = taosOpenQueue();
// qset = taosOpenQset();
// taosAddIntoQset(qset, qhandle, NULL);
// processShellMsg();
if (pDataFile != NULL) {
taosCloseFile(&pDataFile);

View File

@ -54,6 +54,7 @@ class Client {
rpcInit_.user = (char *)user;
rpcInit_.parent = this;
rpcInit_.connType = TAOS_CONN_CLIENT;
rpcInit_.shareConnLimit = 200;
taosVersionStrToInt(version, &(rpcInit_.compatibilityVer));
this->transCli = rpcOpen(&rpcInit_);
@ -85,6 +86,14 @@ class Client {
SemWait();
*resp = this->resp;
}
void sendReq(SRpcMsg *req) {
SEpSet epSet = {0};
epSet.inUse = 0;
addEpIntoEpSet(&epSet, "127.0.0.1", 7000);
rpcSendRequest(this->transCli, &epSet, req, NULL);
}
void SendAndRecvNoHandle(SRpcMsg *req, SRpcMsg *resp) {
if (req->info.handle != NULL) {
rpcReleaseHandle(req->info.handle, TAOS_CONN_CLIENT);
@ -160,6 +169,7 @@ static void processReq(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet) {
rpcMsg.contLen = 100;
rpcMsg.info = pMsg->info;
rpcMsg.code = 0;
rpcFreeCont(pMsg->pCont);
rpcSendResponse(&rpcMsg);
}
@ -264,6 +274,7 @@ class TransObj {
cli->Stop();
}
void cliSendAndRecv(SRpcMsg *req, SRpcMsg *resp) { cli->SendAndRecv(req, resp); }
void cliSendReq(SRpcMsg *req) { cli->sendReq(req); }
void cliSendAndRecvNoHandle(SRpcMsg *req, SRpcMsg *resp) { cli->SendAndRecvNoHandle(req, resp); }
~TransObj() {
@ -492,15 +503,16 @@ TEST_F(TransEnv, queryExcept) {
TEST_F(TransEnv, noResp) {
SRpcMsg resp = {0};
SRpcMsg req = {0};
// for (int i = 0; i < 5; i++) {
// memset(&req, 0, sizeof(req));
// req.info.noResp = 1;
// req.msgType = 1;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
//}
// taosMsleep(2000);
for (int i = 0; i < 500000; i++) {
memset(&req, 0, sizeof(req));
req.info.noResp = 1;
req.msgType = 3;
req.pCont = rpcMallocCont(10);
req.contLen = 10;
tr->cliSendReq(&req);
//tr->cliSendAndRecv(&req, &resp);
}
taosMsleep(2000);
// no resp
}

View File

@ -0,0 +1,529 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
 * it under the terms of the GNU Affero General Public License, version 3
 * or later ("AGPL"), as published by the Free
* Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <gtest/gtest.h>
#include <cstdio>
#include <cstring>
#include "tdatablock.h"
#include "tglobal.h"
#include "tlog.h"
#include "tmisce.h"
#include "transLog.h"
#include "trpc.h"
#include "tversion.h"
using namespace std;
const char *label = "APP";
const char *secret = "secret";
const char *user = "user";
const char *ckey = "ckey";
class Server;
int port = 7000;
// server process
// server except
typedef void (*CB)(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet);
static void processContinueSend(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet);
static void processReleaseHandleCb(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet);
static void processRegisterFailure(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet);
static void processReq(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet);
// client process;
static void processResp(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet);
class Client {
public:
void Init(int nThread) {
memcpy(tsTempDir, TD_TMP_DIR_PATH, strlen(TD_TMP_DIR_PATH));
memset(&rpcInit_, 0, sizeof(rpcInit_));
rpcInit_.localPort = 0;
rpcInit_.label = (char *)"client";
rpcInit_.numOfThreads = nThread;
rpcInit_.cfp = processResp;
rpcInit_.user = (char *)user;
rpcInit_.parent = this;
rpcInit_.connType = TAOS_CONN_CLIENT;
rpcInit_.shareConnLimit = 200;
taosVersionStrToInt(version, &(rpcInit_.compatibilityVer));
this->transCli = rpcOpen(&rpcInit_);
//tsem_init(&this->sem, 0, 0);
}
void SetResp(SRpcMsg *pMsg) {
// set up resp;
this->resp = *pMsg;
}
SRpcMsg *Resp() { return &this->resp; }
void Restart(CB cb) {
rpcClose(this->transCli);
rpcInit_.cfp = cb;
taosVersionStrToInt(version, &(rpcInit_.compatibilityVer));
this->transCli = rpcOpen(&rpcInit_);
}
void Stop() {
rpcClose(this->transCli);
this->transCli = NULL;
}
void SendAndRecv(SRpcMsg *req, SRpcMsg *resp) {
SEpSet epSet = {0};
epSet.inUse = 0;
addEpIntoEpSet(&epSet, "127.0.0.1", 7000);
rpcSendRequest(this->transCli, &epSet, req, NULL);
SemWait();
*resp = this->resp;
}
void sendReq(SRpcMsg *req) {
SEpSet epSet = {0};
epSet.inUse = 0;
addEpIntoEpSet(&epSet, "127.0.0.1", 7000);
rpcSendRequest(this->transCli, &epSet, req, NULL);
}
void sendReqWithId(SRpcMsg *req, int64_t *id) {
SEpSet epSet = {0};
epSet.inUse = 0;
    addEpIntoEpSet(&epSet, "127.0.0.1", 7000);
rpcSendRequestWithCtx(this->transCli, &epSet, req, id, NULL);
}
void freeId(int64_t *id) {
rpcFreeConnById(this->transCli, *id);
}
void SendAndRecvNoHandle(SRpcMsg *req, SRpcMsg *resp) {
if (req->info.handle != NULL) {
rpcReleaseHandle(req->info.handle, TAOS_CONN_CLIENT);
req->info.handle = NULL;
}
SendAndRecv(req, resp);
}
void SemWait() { tsem_wait(&this->sem); }
void SemPost() { tsem_post(&this->sem); }
void Reset() {}
~Client() {
if (this->transCli) rpcClose(this->transCli);
}
private:
tsem_t sem;
SRpcInit rpcInit_;
void *transCli;
SRpcMsg resp;
};
class Server {
public:
Server() {
memcpy(tsTempDir, TD_TMP_DIR_PATH, strlen(TD_TMP_DIR_PATH));
memset(&rpcInit_, 0, sizeof(rpcInit_));
memcpy(rpcInit_.localFqdn, "localhost", strlen("localhost"));
rpcInit_.localPort = port;
rpcInit_.label = (char *)"server";
rpcInit_.numOfThreads = 5;
rpcInit_.cfp = processReq;
rpcInit_.user = (char *)user;
rpcInit_.connType = TAOS_CONN_SERVER;
taosVersionStrToInt(version, &(rpcInit_.compatibilityVer));
}
void Start() {
this->transSrv = rpcOpen(&this->rpcInit_);
taosMsleep(1000);
}
void SetSrvContinueSend(CB cb) {
this->Stop();
rpcInit_.cfp = cb;
this->Start();
}
void Stop() {
if (this->transSrv == NULL) return;
rpcClose(this->transSrv);
this->transSrv = NULL;
}
void SetSrvSend(void (*cfp)(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet)) {
this->Stop();
rpcInit_.cfp = cfp;
this->Start();
}
void Restart() {
this->Stop();
this->Start();
}
~Server() {
if (this->transSrv) rpcClose(this->transSrv);
this->transSrv = NULL;
}
private:
SRpcInit rpcInit_;
void *transSrv;
};
static void processReq(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet) {
SRpcMsg rpcMsg = {0};
rpcMsg.pCont = rpcMallocCont(100);
rpcMsg.contLen = 100;
rpcMsg.info = pMsg->info;
rpcMsg.code = 0;
rpcFreeCont(pMsg->pCont);
rpcSendResponse(&rpcMsg);
}
static void processContinueSend(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet) {
// for (int i = 0; i < 10; i++) {
// SRpcMsg rpcMsg = {0};
// rpcMsg.pCont = rpcMallocCont(100);
// rpcMsg.contLen = 100;
// rpcMsg.info = pMsg->info;
// rpcMsg.code = 0;
// rpcSendResponse(&rpcMsg);
// }
}
static void processReleaseHandleCb(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet) {
SRpcMsg rpcMsg = {0};
rpcMsg.pCont = rpcMallocCont(100);
rpcMsg.contLen = 100;
rpcMsg.info = pMsg->info;
rpcMsg.code = 0;
rpcSendResponse(&rpcMsg);
rpcReleaseHandle(&pMsg->info, TAOS_CONN_SERVER);
}
static void processRegisterFailure(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet) {
// {
// SRpcMsg rpcMsg1 = {0};
// rpcMsg1.pCont = rpcMallocCont(100);
// rpcMsg1.contLen = 100;
// rpcMsg1.info = pMsg->info;
// rpcMsg1.code = 0;
// rpcRegisterBrokenLinkArg(&rpcMsg1);
// }
// taosMsleep(10);
// SRpcMsg rpcMsg = {0};
// rpcMsg.pCont = rpcMallocCont(100);
// rpcMsg.contLen = 100;
// rpcMsg.info = pMsg->info;
// rpcMsg.code = 0;
// rpcSendResponse(&rpcMsg);
}
// client process;
static void processResp(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet) {
Client *client = (Client *)parent;
rpcFreeCont(pMsg->pCont);
STraceId *trace = (STraceId *)&pMsg->info.traceId;
  tGDebug("received resp %s", tstrerror(pMsg->code));
}
static void initEnv() {
dDebugFlag = 143;
vDebugFlag = 0;
mDebugFlag = 143;
cDebugFlag = 0;
jniDebugFlag = 0;
tmrDebugFlag = 143;
uDebugFlag = 143;
rpcDebugFlag = 143;
qDebugFlag = 0;
wDebugFlag = 0;
sDebugFlag = 0;
tsdbDebugFlag = 0;
tsLogEmbedded = 1;
tsAsyncLog = 0;
std::string path = TD_TMP_DIR_PATH "transport";
// taosRemoveDir(path.c_str());
taosMkDir(path.c_str());
tstrncpy(tsLogDir, path.c_str(), PATH_MAX);
if (taosInitLog("taosdlog", 1, false) != 0) {
printf("failed to init log file\n");
}
}
class TransObj {
public:
TransObj() {
initEnv();
cli = new Client;
cli->Init(1);
srv = new Server;
srv->Start();
}
void RestartCli(CB cb) {
//
cli->Restart(cb);
}
void StopSrv() {
//
srv->Stop();
}
// call when link broken, and notify query or fetch stop
void SetSrvContinueSend(void (*cfp)(void *parent, SRpcMsg *pMsg, SEpSet *pEpSet)) {
///////
srv->SetSrvContinueSend(cfp);
}
void RestartSrv() { srv->Restart(); }
void StopCli() {
///////
cli->Stop();
}
void cliSendAndRecv(SRpcMsg *req, SRpcMsg *resp) { cli->SendAndRecv(req, resp); }
void cliSendReq(SRpcMsg *req) { cli->sendReq(req); }
void cliSendReqWithId(SRpcMsg *req, int64_t *id) { cli->sendReqWithId(req, id);}
void cliFreeReqId(int64_t *id) { cli->freeId(id);}
void cliSendAndRecvNoHandle(SRpcMsg *req, SRpcMsg *resp) { cli->SendAndRecvNoHandle(req, resp); }
~TransObj() {
delete cli;
delete srv;
}
private:
Client *cli;
Server *srv;
};
class TransEnv : public ::testing::Test {
protected:
virtual void SetUp() {
// set up trans obj
tr = new TransObj();
}
virtual void TearDown() {
// tear down
delete tr;
}
TransObj *tr = NULL;
};
TEST_F(TransEnv, 01sendAndRec) {
// for (int i = 0; i < 10; i++) {
// SRpcMsg req = {0}, resp = {0};
// req.msgType = 0;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
// assert(resp.code == 0);
// }
}
TEST_F(TransEnv, 02StopServer) {
// for (int i = 0; i < 1; i++) {
// SRpcMsg req = {0}, resp = {0};
// req.msgType = 0;
// req.info.ahandle = (void *)0x35;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
// assert(resp.code == 0);
// }
// SRpcMsg req = {0}, resp = {0};
// req.info.ahandle = (void *)0x35;
// req.msgType = 1;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->StopSrv();
// // tr->RestartSrv();
// tr->cliSendAndRecv(&req, &resp);
// assert(resp.code != 0);
}
TEST_F(TransEnv, clientUserDefined) {
// tr->RestartSrv();
// for (int i = 0; i < 10; i++) {
// SRpcMsg req = {0}, resp = {0};
// req.msgType = 0;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
// assert(resp.code == 0);
// }
//////////////////
}
TEST_F(TransEnv, cliPersistHandle) {
// SRpcMsg resp = {0};
// void *handle = NULL;
// for (int i = 0; i < 10; i++) {
// SRpcMsg req = {0};
// req.info = resp.info;
// req.info.persistHandle = 1;
// req.msgType = 1;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
// // if (i == 5) {
// // std::cout << "stop server" << std::endl;
// // tr->StopSrv();
// //}
// // if (i >= 6) {
// // EXPECT_TRUE(resp.code != 0);
// //}
// handle = resp.info.handle;
// }
// rpcReleaseHandle(handle, TAOS_CONN_CLIENT);
// for (int i = 0; i < 10; i++) {
// SRpcMsg req = {0};
// req.msgType = 1;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
// }
// taosMsleep(1000);
//////////////////
}
TEST_F(TransEnv, srvReleaseHandle) {
// SRpcMsg resp = {0};
// tr->SetSrvContinueSend(processReleaseHandleCb);
// // tr->Restart(processReleaseHandleCb);
// void *handle = NULL;
// SRpcMsg req = {0};
// for (int i = 0; i < 1; i++) {
// memset(&req, 0, sizeof(req));
// req.info = resp.info;
// req.info.persistHandle = 1;
// req.msgType = 1;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
// // tr->cliSendAndRecvNoHandle(&req, &resp);
// EXPECT_TRUE(resp.code == 0);
// }
//////////////////
}
// reopen later
// TEST_F(TransEnv, cliReleaseHandleExcept) {
// SRpcMsg resp = {0};
// SRpcMsg req = {0};
// for (int i = 0; i < 3; i++) {
// memset(&req, 0, sizeof(req));
// req.info = resp.info;
// req.info.persistHandle = 1;
// req.info.ahandle = (void *)1234;
// req.msgType = 1;
// req.pCont = rpcMallocCont(10);
// req.contLen = 10;
// tr->cliSendAndRecv(&req, &resp);
// if (i == 1) {
// std::cout << "stop server" << std::endl;
// tr->StopSrv();
// }
// if (i > 1) {
// EXPECT_TRUE(resp.code != 0);
// }
// }
// //////////////////
//}
TEST_F(TransEnv, srvContinueSend) {
// tr->SetSrvContinueSend(processContinueSend);
// SRpcMsg req = {0}, resp = {0};
// for (int i = 0; i < 10; i++) {
// // memset(&req, 0, sizeof(req));
// // memset(&resp, 0, sizeof(resp));
// // req.msgType = 1;
// // req.pCont = rpcMallocCont(10);
// // req.contLen = 10;
// // tr->cliSendAndRecv(&req, &resp);
// }
// taosMsleep(1000);
}
TEST_F(TransEnv, srvPersistHandleExcept) {
// tr->SetSrvContinueSend(processContinueSend);
// // tr->SetCliPersistFp(cliPersistHandle);
// SRpcMsg resp = {0};
// SRpcMsg req = {0};
// for (int i = 0; i < 5; i++) {
// // memset(&req, 0, sizeof(req));
// // req.info = resp.info;
// // req.msgType = 1;
// // req.pCont = rpcMallocCont(10);
// // req.contLen = 10;
// // tr->cliSendAndRecv(&req, &resp);
// // if (i > 2) {
// // tr->StopCli();
// // break;
// //}
// }
// taosMsleep(2000);
// conn broken
//
}
TEST_F(TransEnv, cliPersistHandleExcept) {
// tr->SetSrvContinueSend(processContinueSend);
// SRpcMsg resp = {0};
// SRpcMsg req = {0};
// for (int i = 0; i < 5; i++) {
// // memset(&req, 0, sizeof(req));
// // req.info = resp.info;
// // req.msgType = 1;
// // req.pCont = rpcMallocCont(10);
// // req.contLen = 10;
// // tr->cliSendAndRecv(&req, &resp);
// // if (i > 2) {
// // tr->StopSrv();
// // break;
// //}
// }
// taosMsleep(2000);
// // conn broken
//
}
TEST_F(TransEnv, multiCliPersistHandleExcept) {
// conn broken
}
TEST_F(TransEnv, queryExcept) {
//taosMsleep(4 * 1000);
}
TEST_F(TransEnv, idTest) {
SRpcMsg resp = {0};
SRpcMsg req = {0};
for (int i = 0; i < 50000; i++) {
memset(&req, 0, sizeof(req));
req.info.noResp = 0;
req.msgType = 3;
req.pCont = rpcMallocCont(10);
req.contLen = 10;
int64_t id;
tr->cliSendReqWithId(&req, &id);
tr->cliFreeReqId(&id);
}
taosMsleep(1000);
// no resp
}
TEST_F(TransEnv, noResp) {
SRpcMsg resp = {0};
SRpcMsg req = {0};
for (int i = 0; i < 500000; i++) {
memset(&req, 0, sizeof(req));
req.info.noResp = 0;
req.msgType = 3;
req.pCont = rpcMallocCont(10);
req.contLen = 10;
tr->cliSendReq(&req);
//tr->cliSendAndRecv(&req, &resp);
}
taosMsleep(10000);
// no resp
}


@@ -664,7 +664,7 @@ static FORCE_INLINE int32_t walWriteImpl(SWal *pWal, int64_t index, tmsg_t msgTy
// set status
if (pWal->vers.firstVer == -1) {
pWal->vers.firstVer = 0;
pWal->vers.firstVer = index;
}
pWal->vers.lastVer = index;
pWal->totSize += sizeof(SWalCkHead) + cyptedBodyLen;


@@ -223,10 +223,15 @@ int32_t taosMulModeMkDir(const char *dirname, int mode, bool checkAccess) {
if (checkAccess && taosCheckAccessFile(temp, TD_FILE_ACCESS_EXIST_OK | TD_FILE_ACCESS_READ_OK | TD_FILE_ACCESS_WRITE_OK)) {
return 0;
}
code = chmod(temp, mode);
if (-1 == code) {
terrno = TAOS_SYSTEM_ERROR(errno);
return terrno;
struct stat statbuf = {0};
code = stat(temp, &statbuf);
if (code != 0 || (statbuf.st_mode & mode) != mode) {
terrno = TAOS_SYSTEM_ERROR(errno);
return terrno;
}
}
}


@@ -5,12 +5,11 @@ FIND_PATH(HEADER_GTEST_INCLUDE_DIR gtest.h /usr/include/gtest /usr/local/include
FIND_LIBRARY(LIB_GTEST_STATIC_DIR libgtest.a /usr/lib/ /usr/local/lib /usr/lib64)
FIND_LIBRARY(LIB_GTEST_SHARED_DIR libgtest.so /usr/lib/ /usr/local/lib /usr/lib64)
IF (HEADER_GTEST_INCLUDE_DIR AND (LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
IF(HEADER_GTEST_INCLUDE_DIR AND(LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
MESSAGE(STATUS "gTest library found, build os test")
INCLUDE_DIRECTORIES(${HEADER_GTEST_INCLUDE_DIR})
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} SOURCE_LIST)
ENDIF()
INCLUDE_DIRECTORIES(${TD_SOURCE_DIR}/src/util/inc)


@@ -14,12 +14,12 @@
*/
#define _DEFAULT_SOURCE
#include "tlrucache.h"
#include "os.h"
#include "taoserror.h"
#include "tarray.h"
#include "tdef.h"
#include "tlog.h"
#include "tlrucache.h"
#include "tutil.h"
typedef struct SLRUEntry SLRUEntry;
@@ -110,7 +110,7 @@ struct SLRUEntryTable {
};
static int taosLRUEntryTableInit(SLRUEntryTable *table, int maxUpperHashBits) {
table->lengthBits = 4;
table->lengthBits = 16;
table->list = taosMemoryCalloc(1 << table->lengthBits, sizeof(SLRUEntry *));
if (!table->list) {
TAOS_RETURN(terrno);
@@ -371,24 +371,35 @@ static void taosLRUCacheShardCleanup(SLRUCacheShard *shard) {
static LRUStatus taosLRUCacheShardInsertEntry(SLRUCacheShard *shard, SLRUEntry *e, LRUHandle **handle,
bool freeOnFail) {
LRUStatus status = TAOS_LRU_STATUS_OK;
SArray *lastReferenceList = taosArrayInit(16, POINTER_BYTES);
if (!lastReferenceList) {
taosLRUEntryFree(e);
return TAOS_LRU_STATUS_FAIL;
LRUStatus status = TAOS_LRU_STATUS_OK;
SLRUEntry *toFree = NULL;
SArray *lastReferenceList = NULL;
if (shard->usage + e->totalCharge > shard->capacity) {
lastReferenceList = taosArrayInit(16, POINTER_BYTES);
if (!lastReferenceList) {
taosLRUEntryFree(e);
return TAOS_LRU_STATUS_FAIL;
}
}
(void)taosThreadMutexLock(&shard->mutex);
taosLRUCacheShardEvictLRU(shard, e->totalCharge, lastReferenceList);
if (shard->usage + e->totalCharge > shard->capacity && shard->lru.next != &shard->lru) {
if (!lastReferenceList) {
lastReferenceList = taosArrayInit(16, POINTER_BYTES);
if (!lastReferenceList) {
taosLRUEntryFree(e);
(void)taosThreadMutexUnlock(&shard->mutex);
return TAOS_LRU_STATUS_FAIL;
}
}
taosLRUCacheShardEvictLRU(shard, e->totalCharge, lastReferenceList);
}
if (shard->usage + e->totalCharge > shard->capacity && (shard->strictCapacity || handle == NULL)) {
TAOS_LRU_ENTRY_SET_IN_CACHE(e, false);
if (handle == NULL) {
if (!taosArrayPush(lastReferenceList, &e)) {
taosLRUEntryFree(e);
goto _exit;
}
toFree = e;
} else {
if (freeOnFail) {
taosLRUEntryFree(e);
@@ -413,11 +424,7 @@ static LRUStatus taosLRUCacheShardInsertEntry(SLRUCacheShard *shard, SLRUEntry *
taosLRUCacheShardLRURemove(shard, old);
shard->usage -= old->totalCharge;
if (!taosArrayPush(lastReferenceList, &old)) {
taosLRUEntryFree(e);
taosLRUEntryFree(old);
goto _exit;
}
toFree = old;
}
}
if (handle == NULL) {
@@ -434,6 +441,10 @@ static LRUStatus taosLRUCacheShardInsertEntry(SLRUCacheShard *shard, SLRUEntry *
_exit:
(void)taosThreadMutexUnlock(&shard->mutex);
if (toFree) {
taosLRUEntryFree(toFree);
}
for (int i = 0; i < taosArrayGetSize(lastReferenceList); ++i) {
SLRUEntry *entry = taosArrayGetP(lastReferenceList, i);
@@ -733,7 +744,8 @@ void taosLRUCacheCleanup(SLRUCache *cache) {
}
LRUStatus taosLRUCacheInsert(SLRUCache *cache, const void *key, size_t keyLen, void *value, size_t charge,
_taos_lru_deleter_t deleter, _taos_lru_overwriter_t overwriter, LRUHandle **handle, LRUPriority priority, void *ud) {
_taos_lru_deleter_t deleter, _taos_lru_overwriter_t overwriter, LRUHandle **handle,
LRUPriority priority, void *ud) {
uint32_t hash = TAOS_LRU_CACHE_SHARD_HASH32(key, keyLen);
uint32_t shardIndex = hash & cache->shardedCache.shardMask;


@@ -5,7 +5,7 @@ FIND_PATH(HEADER_GTEST_INCLUDE_DIR gtest.h /usr/include/gtest /usr/local/include
FIND_LIBRARY(LIB_GTEST_STATIC_DIR libgtest.a /usr/lib/ /usr/local/lib /usr/lib64)
FIND_LIBRARY(LIB_GTEST_SHARED_DIR libgtest.so /usr/lib/ /usr/local/lib /usr/lib64)
IF (HEADER_GTEST_INCLUDE_DIR AND (LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
IF(HEADER_GTEST_INCLUDE_DIR AND(LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
MESSAGE(STATUS "gTest library found, build unit test")
INCLUDE_DIRECTORIES(${HEADER_GTEST_INCLUDE_DIR})
@@ -20,18 +20,16 @@ IF (HEADER_GTEST_INCLUDE_DIR AND (LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
LIST(APPEND SOURCE_LIST ${CMAKE_CURRENT_SOURCE_DIR}/hashTest.cpp)
ADD_EXECUTABLE(hashTest ${SOURCE_LIST})
TARGET_LINK_LIBRARIES(hashTest util common os gtest pthread)
LIST(APPEND BIN_SRC ${CMAKE_CURRENT_SOURCE_DIR}/trefTest.c)
ADD_EXECUTABLE(trefTest ${BIN_SRC})
TARGET_LINK_LIBRARIES(trefTest common util)
ENDIF()
#IF (TD_LINUX)
# ADD_EXECUTABLE(trefTest ./trefTest.c)
# TARGET_LINK_LIBRARIES(trefTest util common)
#ENDIF ()
# IF (TD_LINUX)
# ADD_EXECUTABLE(trefTest ./trefTest.c)
# TARGET_LINK_LIBRARIES(trefTest util common)
# ENDIF ()
INCLUDE_DIRECTORIES(${TD_SOURCE_DIR}/src/util/inc)
INCLUDE_DIRECTORIES(${TD_SOURCE_DIR}/include/common)
@@ -46,8 +44,8 @@ add_test(
# # freelistTest
# add_executable(freelistTest "")
# target_sources(freelistTest
# PRIVATE
# "freelistTest.cpp"
# PRIVATE
# "freelistTest.cpp"
# )
# target_link_libraries(freelistTest os util gtest gtest_main)
@@ -57,7 +55,7 @@ add_test(
# cfgTest
add_executable(cfgTest "cfgTest.cpp")
target_link_libraries(cfgTest os util gtest_main)
target_link_libraries(cfgTest os util gtest_main)
add_test(
NAME cfgTest
COMMAND cfgTest
@@ -65,7 +63,7 @@ add_test(
# bloomFilterTest
add_executable(bloomFilterTest "bloomFilterTest.cpp")
target_link_libraries(bloomFilterTest os util gtest_main)
target_link_libraries(bloomFilterTest os util gtest_main)
add_test(
NAME bloomFilterTest
COMMAND bloomFilterTest
@@ -73,7 +71,7 @@ add_test(
# taosbsearchTest
add_executable(taosbsearchTest "taosbsearchTest.cpp")
target_link_libraries(taosbsearchTest os util gtest_main)
target_link_libraries(taosbsearchTest os util gtest_main)
add_test(
NAME taosbsearchTest
COMMAND taosbsearchTest
@@ -81,7 +79,7 @@ add_test(
# trbtreeTest
add_executable(rbtreeTest "trbtreeTest.cpp")
target_link_libraries(rbtreeTest os util gtest_main)
target_link_libraries(rbtreeTest os util gtest_main)
add_test(
NAME rbtreeTest
COMMAND rbtreeTest
@@ -120,17 +118,17 @@ add_test(
)
add_executable(regexTest "regexTest.cpp")
target_link_libraries(regexTest os util gtest_main )
target_link_libraries(regexTest os util gtest_main)
add_test(
NAME regexTest
COMMAND regexTest
)
add_executable(logTest "log.cpp")
target_link_libraries(logTest os util common gtest_main)
add_test(
NAME logTest
COMMAND logTest
target_link_libraries(logTest os util common gtest_main)
add_test(
NAME logTest
COMMAND logTest
)
add_executable(decompressTest "decompressTest.cpp")
@@ -140,7 +138,7 @@ add_test(
COMMAND decompressTest
)
if (${TD_LINUX})
if(${TD_LINUX})
# terrorTest
add_executable(terrorTest "terrorTest.cpp")
target_link_libraries(terrorTest os util common gtest_main)
@@ -154,4 +152,4 @@ if (${TD_LINUX})
add_custom_command(TARGET terrorTest POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${ERR_TBL_FILE} $<TARGET_FILE_DIR:terrorTest>
)
endif ()
endif()


@@ -1318,6 +1318,7 @@
,,y,script,./test.sh -f tsim/stream/basic2.sim
,,y,script,./test.sh -f tsim/stream/basic3.sim
,,y,script,./test.sh -f tsim/stream/basic4.sim
,,y,script,./test.sh -f tsim/stream/snodeCheck.sim
,,y,script,./test.sh -f tsim/stream/checkpointInterval0.sim
,,y,script,./test.sh -f tsim/stream/checkStreamSTable1.sim
,,y,script,./test.sh -f tsim/stream/checkStreamSTable.sim


@@ -0,0 +1,64 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/deploy.sh -n dnode2 -i 2
system sh/deploy.sh -n dnode3 -i 3
system sh/cfg.sh -n dnode1 -c supportVnodes -v 4
system sh/cfg.sh -n dnode2 -c supportVnodes -v 4
system sh/cfg.sh -n dnode3 -c supportVnodes -v 4
print ========== step1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== step2
sql create dnode $hostname port 7200
system sh/exec.sh -n dnode2 -s start
sql create dnode $hostname port 7300
system sh/exec.sh -n dnode3 -s start
$x = 0
step2:
$x = $x + 1
sleep 1000
if $x == 10 then
print ====> dnode not ready!
return -1
endi
sql select * from information_schema.ins_dnodes
print ===> $data00 $data01 $data02 $data03 $data04 $data05
print ===> $data10 $data11 $data12 $data13 $data14 $data15
if $rows != 3 then
return -1
endi
if $data(1)[4] != ready then
goto step2
endi
if $data(2)[4] != ready then
goto step2
endi
print ========== step3
sql drop database if exists test;
sql create database if not exists test vgroups 4 replica 3 precision "ms" ;
sql use test;
sql create table test.test (ts timestamp, c1 int) tags (t1 int) ;
print create stream without snode existing
sql_error create stream stream_t1 trigger at_once into str_dst as select count(*) from test interval(20s);
print create snode
sql create snode on dnode 1;
sql create stream stream_t1 trigger at_once into str_dst as select count(*) from test interval(20s);
print drop snode and then create stream
sql drop snode on dnode 1;
sql_error create stream stream_t2 trigger at_once into str_dst as select count(*) from test interval(20s);
system sh/exec.sh -n dnode1 -s stop -x SIGINT
system sh/exec.sh -n dnode2 -s stop -x SIGINT
system sh/exec.sh -n dnode3 -s stop -x SIGINT


@@ -102,6 +102,7 @@ run tsim/stream/triggerInterval0.sim
run tsim/stream/triggerSession0.sim
run tsim/stream/distributeIntervalRetrive0.sim
run tsim/stream/basic0.sim
run tsim/stream/snodeCheck.sim
run tsim/stream/session0.sim
run tsim/stream/schedSnode.sim
run tsim/stream/partitionby.sim


@@ -344,9 +344,72 @@ class TDTestCase:
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(55)
# TODO Fix Me!
sql = "explain SELECT count(*), timediff(_wend, last(ts)), timediff('2018-09-20 01:00:00', _wstart) FROM meters WHERE ts >= '2018-09-20 00:00:00.000' AND ts < '2018-09-20 01:00:00.000' PARTITION BY concat(tbname, 'asd') INTERVAL(5m) having(concat(tbname, 'asd') like '%asd');"
tdSql.error(sql, -2147473664) # Error: Planner internal error
sql = "SELECT count(*), timediff(_wend, last(ts)), timediff('2018-09-20 01:00:00', _wstart) FROM meters WHERE ts >= '2018-09-20 00:00:00.000' AND ts < '2018-09-20 01:00:00.000' PARTITION BY concat(tbname, 'asd') INTERVAL(5m) having(concat(tbname, 'asd') like '%asd');"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(60)
sql = "SELECT count(*), timediff(_wend, last(ts)), timediff('2018-09-20 01:00:00', _wstart) FROM meters WHERE ts >= '2018-09-20 00:00:00.000' AND ts < '2018-09-20 01:00:00.000' PARTITION BY concat(tbname, 'asd') INTERVAL(5m) having(concat(tbname, 'asd') like 'asd%');"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(0)
sql = "SELECT c1 FROM meters PARTITION BY c1 HAVING c1 > 0 slimit 2 limit 10"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(20)
sql = "SELECT t1 FROM meters PARTITION BY t1 HAVING(t1 = 1)"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(20000)
sql = "SELECT concat(t2, 'asd') FROM meters PARTITION BY t2 HAVING(t2 like '%5')"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(10000)
tdSql.checkData(0, 0, 'tb5asd')
sql = "SELECT concat(t2, 'asd') FROM meters PARTITION BY concat(t2, 'asd') HAVING(concat(t2, 'asd')like '%5%')"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(10000)
tdSql.checkData(0, 0, 'tb5asd')
sql = "SELECT avg(c1) FROM meters PARTITION BY tbname, t1 HAVING(t1 = 1)"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(2)
sql = "SELECT count(*) FROM meters PARTITION BY concat(tbname, 'asd') HAVING(concat(tbname, 'asd') like '%asd')"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(10)
sql = "SELECT count(*), concat(tbname, 'asd') FROM meters PARTITION BY concat(tbname, 'asd') HAVING(concat(tbname, 'asd') like '%asd')"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(10)
sql = "SELECT count(*) FROM meters PARTITION BY t1 HAVING(t1 < 4) order by t1 +1"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(4)
sql = "SELECT count(*), t1 + 100 FROM meters PARTITION BY t1 HAVING(t1 < 4) order by t1 +1"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(4)
sql = "SELECT count(*), t1 + 100 FROM meters PARTITION BY t1 INTERVAL(1d) HAVING(t1 < 4) order by t1 +1 desc"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(280)
sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd' and count(*) = 118)"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(1)
sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd' and count(*) != 118)"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(69)
sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd') order by count(*) asc limit 10"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(10)
sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd' or concat(t3, 'asd') like '%3asd') order by count(*) asc limit 10000"
tdSql.query(sql, queryTimes=1)
tdSql.checkRows(140)
def run(self):
self.prepareTestEnv()


@@ -17,8 +17,6 @@ sys.path.append("./7-tmq")
from tmqCommon import *
class TDTestCase:
updatecfgDict = {'sDebugFlag':143, 'wDebugFlag':143}
def __init__(self):
self.vgroups = 1
self.ctbNum = 10


@@ -5,15 +5,15 @@ FIND_PATH(HEADER_GTEST_INCLUDE_DIR gtest.h /usr/include/gtest /usr/local/include
FIND_LIBRARY(LIB_GTEST_STATIC_DIR libgtest.a /usr/lib/ /usr/local/lib /usr/lib64 /usr/local/taos/driver/)
FIND_LIBRARY(LIB_GTEST_SHARED_DIR libgtest.so /usr/lib/ /usr/local/lib /usr/lib64 /usr/local/taos/driver/)
IF (HEADER_GTEST_INCLUDE_DIR AND (LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
IF(HEADER_GTEST_INCLUDE_DIR AND(LIB_GTEST_STATIC_DIR OR LIB_GTEST_SHARED_DIR))
MESSAGE(STATUS "gTest library found, build os test")
INCLUDE_DIRECTORIES(${HEADER_GTEST_INCLUDE_DIR})
AUX_SOURCE_DIRECTORY(${CMAKE_CURRENT_SOURCE_DIR} SOURCE_LIST)
ENDIF()
aux_source_directory(src OS_SRC)
# taoscTest
add_executable(taoscTest "taoscTest.cpp")
target_link_libraries(taoscTest taos os gtest_main)
@@ -25,4 +25,3 @@ add_test(
NAME taoscTest
COMMAND taoscTest
)


@@ -105,6 +105,113 @@ int smlProcess_telnet_Test() {
return code;
}
int smlProcess_telnet0_Test() {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
TAOS_RES *pRes = taos_query(taos, "create database if not exists sml_db schemaless 1");
taos_free_result(pRes);
pRes = taos_query(taos, "use sml_db");
taos_free_result(pRes);
const char *sql1[] = {"sysif.bytes.out 1479496100 1.3E0 host=web01 interface=eth0"};
pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / sizeof(sql1[0]), TSDB_SML_TELNET_PROTOCOL,
TSDB_SML_TIMESTAMP_NANO_SECONDS);
printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
int code = taos_errno(pRes);
ASSERT(code == 0);
taos_free_result(pRes);
const char *sql2[] = {"sysif.bytes.out 1479496700 1.6E0 host=web01 interface=eth0"};
pRes = taos_schemaless_insert(taos, (char **)sql2, sizeof(sql2) / sizeof(sql2[0]), TSDB_SML_TELNET_PROTOCOL,
TSDB_SML_TIMESTAMP_NANO_SECONDS);
printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
code = taos_errno(pRes);
ASSERT(code == 0);
taos_free_result(pRes);
const char *sql3[] = {"sysif.bytes.out 1479496300 1.1E0 interface=eth0 host=web01"};
pRes = taos_schemaless_insert(taos, (char **)sql3, sizeof(sql3) / sizeof(sql3[0]), TSDB_SML_TELNET_PROTOCOL,
TSDB_SML_TIMESTAMP_NANO_SECONDS);
printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
code = taos_errno(pRes);
ASSERT(code == 0);
taos_free_result(pRes);
taos_close(taos);
return code;
}
int smlProcess_json0_Test() {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
TAOS_RES *pRes = taos_query(taos, "create database if not exists sml_db");
taos_free_result(pRes);
pRes = taos_query(taos, "use sml_db");
taos_free_result(pRes);
const char *sql[] = {
"[{\"metric\":\"syscpu.nice\",\"timestamp\":1662344045,\"value\":9,\"tags\":{\"host\":\"web02\",\"dc\":4}}]"};
char *sql1[1] = {0};
for (int i = 0; i < 1; i++) {
sql1[i] = taosMemoryCalloc(1, 1024);
ASSERT(sql1[i] != NULL);
(void)strncpy(sql1[i], sql[i], 1023);
}
pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / sizeof(sql1[0]), TSDB_SML_JSON_PROTOCOL,
TSDB_SML_TIMESTAMP_NANO_SECONDS);
int code = taos_errno(pRes);
if (code != 0) {
printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
} else {
printf("%s result:success\n", __FUNCTION__);
}
taos_free_result(pRes);
for (int i = 0; i < 1; i++) {
taosMemoryFree(sql1[i]);
}
ASSERT(code == 0);
const char *sql2[] = {
"[{\"metric\":\"syscpu.nice\",\"timestamp\":1662344041,\"value\":13,\"tags\":{\"host\":\"web01\",\"dc\":1}"
"},{\"metric\":\"syscpu.nice\",\"timestamp\":1662344042,\"value\":9,\"tags\":{\"host\":\"web02\",\"dc\":4}"
"}]",
};
char *sql3[1] = {0};
for (int i = 0; i < 1; i++) {
sql3[i] = taosMemoryCalloc(1, 1024);
ASSERT(sql3[i] != NULL);
(void)strncpy(sql3[i], sql2[i], 1023);
}
pRes = taos_schemaless_insert(taos, (char **)sql3, sizeof(sql3) / sizeof(sql3[0]), TSDB_SML_JSON_PROTOCOL,
TSDB_SML_TIMESTAMP_NANO_SECONDS);
code = taos_errno(pRes);
if (code != 0) {
printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
} else {
printf("%s result:success\n", __FUNCTION__);
}
taos_free_result(pRes);
for (int i = 0; i < 1; i++) {
taosMemoryFree(sql3[i]);
}
ASSERT(code == 0);
taos_close(taos);
return code;
}
int smlProcess_json1_Test() {
TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);
@@ -1775,6 +1882,21 @@ int sml_td24559_Test() {
pRes = taos_query(taos, "create database if not exists td24559");
taos_free_result(pRes);
const char *sql1[] = {
"sttb,t1=1 f1=283i32,f2=g\"\" 1632299372000",
"sttb,t1=1 f2=G\"Point(4.343 89.342)\",f1=106i32 1632299373000",
};
pRes = taos_query(taos, "use td24559");
taos_free_result(pRes);
pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / sizeof(sql1[0]), TSDB_SML_LINE_PROTOCOL,
TSDB_SML_TIMESTAMP_MILLI_SECONDS);
int code = taos_errno(pRes);
printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes));
ASSERT(code);
taos_free_result(pRes);
const char *sql[] = {
"stb,t1=1 f1=283i32,f2=g\"Point(4.343 89.342)\" 1632299372000",
"stb,t1=1 f2=G\"Point(4.343 89.342)\",f1=106i32 1632299373000",
@@ -1788,7 +1910,7 @@ int sml_td24559_Test() {
pRes = taos_schemaless_insert(taos, (char **)sql, sizeof(sql) / sizeof(sql[0]), TSDB_SML_LINE_PROTOCOL,
TSDB_SML_TIMESTAMP_MILLI_SECONDS);
int code = taos_errno(pRes);
code = taos_errno(pRes);
printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes));
taos_free_result(pRes);
@@ -2136,7 +2258,8 @@ int main(int argc, char *argv[]) {
taos_options(TSDB_OPTION_CONFIGDIR, argv[1]);
}
int ret = 0;
int ret = smlProcess_json0_Test();
ASSERT(!ret);
ret = sml_ts5528_test();
ASSERT(!ret);
ret = sml_td29691_Test();
@@ -2173,6 +2296,8 @@
ASSERT(!ret);
ret = smlProcess_telnet_Test();
ASSERT(!ret);
ret = smlProcess_telnet0_Test();
ASSERT(!ret);
ret = smlProcess_json1_Test();
ASSERT(!ret);
ret = smlProcess_json2_Test();


@@ -19,196 +19,77 @@
#include "taos.h"
#include "types.h"
int buildStable(TAOS* pConn) {
TAOS_RES* pRes = taos_query(pConn,
"CREATE STABLE `meters` (`ts` TIMESTAMP, `current` INT, `voltage` INT, `phase` FLOAT) TAGS "
"(`groupid` INT, `location` VARCHAR(16))");
if (taos_errno(pRes) != 0) {
printf("failed to create super table meters, reason:%s\n", taos_errstr(pRes));
return -1;
}
TAOS* pConn = NULL;
void action(char* sql) {
TAOS_RES* pRes = taos_query(pConn, sql);
ASSERT(taos_errno(pRes) == 0);
taos_free_result(pRes);
pRes = taos_query(pConn, "create table d0 using meters tags(1, 'San Francisco')");
if (taos_errno(pRes) != 0) {
printf("failed to create child table d0, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into d0 (ts, current) values (now, 120)");
if (taos_errno(pRes) != 0) {
printf("failed to insert into table d0, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table d1 using meters tags(2, 'San Francisco')");
if (taos_errno(pRes) != 0) {
printf("failed to create child table d1, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table d2 using meters tags(3, 'San Francisco')");
if (taos_errno(pRes) != 0) {
printf("failed to create child table d2, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table ntba(ts timestamp, addr binary(32))");
if (taos_errno(pRes) != 0) {
printf("failed to create ntba, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create table ntbb(ts timestamp, addr binary(8))");
if (taos_errno(pRes) != 0) {
printf("failed to create ntbb, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into ntba values(now,'123456789abcdefg123456789')");
if (taos_errno(pRes) != 0) {
printf("failed to insert table ntba, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "insert into ntba values(now + 1s,'hello')");
if (taos_errno(pRes) != 0) {
printf("failed to insert table ntba, reason:%s\n", taos_errstr(pRes));
return -1;
}
taos_free_result(pRes);
return 0;
}
int32_t init_env() {
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
if (pConn == NULL) {
return -1;
}
int32_t ret = -1;
TAOS_RES* pRes = taos_query(pConn, "drop database if exists db_raw");
if (taos_errno(pRes) != 0) {
printf("error in drop db_taosx, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "create database if not exists db_raw vgroups 2");
if (taos_errno(pRes) != 0) {
printf("error in create db_taosx, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
pRes = taos_query(pConn, "use db_raw");
if (taos_errno(pRes) != 0) {
printf("error in create db_taosx, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_free_result(pRes);
buildStable(pConn);
pRes = taos_query(pConn, "select * from d0");
if (taos_errno(pRes) != 0) {
printf("error in drop db_taosx, reason:%s\n", taos_errstr(pRes));
goto END;
}
void *data = NULL;
int32_t test_write_raw_block(char* query, char* dst) {
TAOS_RES* pRes = taos_query(pConn, query);
ASSERT(taos_errno(pRes) == 0);
void* data = NULL;
int32_t numOfRows = 0;
int error_code = taos_fetch_raw_block(pRes, &numOfRows, &data);
if(error_code !=0 ){
printf("error fetch raw block, reason:%s\n", taos_errstr(pRes));
goto END;
}
taos_write_raw_block(pConn, numOfRows, data, "d1");
ASSERT(error_code == 0);
error_code = taos_write_raw_block(pConn, numOfRows, data, dst);
taos_free_result(pRes);
return error_code;
}
pRes = taos_query(pConn, "select ts,phase from d0");
if (taos_errno(pRes) != 0) {
printf("error in drop db_taosx, reason:%s\n", taos_errstr(pRes));
goto END;
}
error_code = taos_fetch_raw_block(pRes, &numOfRows, &data);
if(error_code !=0 ){
printf("error fetch raw block, reason:%s\n", taos_errstr(pRes));
goto END;
}
int32_t test_write_raw_block_with_fields(char* query, char* dst) {
TAOS_RES* pRes = taos_query(pConn, query);
ASSERT(taos_errno(pRes) == 0);
void* data = NULL;
int32_t numOfRows = 0;
int error_code = taos_fetch_raw_block(pRes, &numOfRows, &data);
ASSERT(error_code == 0);
int numFields = taos_num_fields(pRes);
TAOS_FIELD *fields = taos_fetch_fields(pRes);
taos_write_raw_block_with_fields(pConn, numOfRows, data, "d2", fields, numFields);
TAOS_FIELD* fields = taos_fetch_fields(pRes);
error_code = taos_write_raw_block_with_fields(pConn, numOfRows, data, dst, fields, numFields);
taos_free_result(pRes);
return error_code;
}
// check error msg
pRes = taos_query(pConn, "select * from ntba");
if (taos_errno(pRes) != 0) {
printf("error select * from ntba, reason:%s\n", taos_errstr(pRes));
goto END;
}
void init_env() {
pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT(pConn);
data = NULL;
numOfRows = 0;
error_code = taos_fetch_raw_block(pRes, &numOfRows, &data);
if(error_code !=0 ){
printf("error fetch select * from ntba, reason:%s\n", taos_errstr(pRes));
goto END;
}
error_code = taos_write_raw_block(pConn, numOfRows, data, "ntbb");
if(error_code == 0) {
printf(" taos_write_raw_block to ntbb expect failed , but success!\n");
goto END;
}
action("drop database if exists db_raw");
action("create database if not exists db_raw vgroups 2");
action("use db_raw");
// pass NULL to get a description of the last error code
const char* err = tmq_err2str(error_code);
printf("write_raw_block return code =0x%x err=%s\n", error_code, err);
if(strcmp(err, "success") == 0) {
printf("expect failed , but error string is success! err=%s\n", err);
goto END;
}
action(
"CREATE STABLE `meters` (`ts` TIMESTAMP, `current` INT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, "
"`location` VARCHAR(16))");
action("create table d0 using meters tags(1, 'San Francisco')");
action("create table d1 using meters tags(2, 'San Francisco')");
action("create table d2 using meters tags(3, 'San Francisco')");
action("insert into d0 (ts, current) values (now, 120)");
// non-existent table
error_code = taos_write_raw_block(pConn, numOfRows, data, "no-exist-table");
if(error_code == 0) {
printf(" taos_write_raw_block to no-exist-table expect failed , but success!\n");
goto END;
}
action("create table ntba(ts timestamp, addr binary(32))");
action("create table ntbb(ts timestamp, addr binary(8))");
action("create table ntbc(ts timestamp, addr binary(8), c2 int)");
err = tmq_err2str(error_code);
printf("write_raw_block no exist table return code =0x%x err=%s\n", error_code, err);
if(strcmp(err, "success") == 0) {
printf("expect failed write no exist table, but error string is success! err=%s\n", err);
goto END;
}
// success
ret = 0;
END:
// free
if(pRes) taos_free_result(pRes);
if(pConn) taos_close(pConn);
return ret;
action("insert into ntba values(now,'123456789abcdefg123456789')");
action("insert into ntbb values(now + 1s,'hello')");
action("insert into ntbc values(now + 13s, 'sdf', 123)");
}
int main(int argc, char* argv[]) {
printf("test write_raw_block...\n");
int ret = init_env();
if (ret < 0) {
printf("test write_raw_block failed.\n");
return ret;
}
printf("test write_raw_block ok.\n");
printf("test write_raw_block start.\n");
init_env();
ASSERT(test_write_raw_block("select * from d0", "d1") == 0); // test schema same
ASSERT(test_write_raw_block("select * from ntbb", "ntba") == 0); // test schema compatible
ASSERT(test_write_raw_block("select * from ntbb", "ntbc") == 0); // test schema small
ASSERT(test_write_raw_block("select * from ntbc", "ntbb") == 0); // test schema bigger
ASSERT(test_write_raw_block("select * from ntba", "ntbb") != 0); // test schema mismatch
ASSERT(test_write_raw_block("select * from ntba", "no-exist-table") != 0); // test non-existent table
ASSERT(test_write_raw_block("select addr from ntba", "ntbb") != 0); // test without ts
ASSERT(test_write_raw_block_with_fields("select ts,phase from d0", "d2") == 0); // test with fields
printf("test write_raw_block end.\n");
return 0;
}