diff --git a/docs/en/12-taos-sql/07-tag-index.md b/docs/en/12-taos-sql/07-tag-index.md index 024d0ed574..54e68edbb9 100644 --- a/docs/en/12-taos-sql/07-tag-index.md +++ b/docs/en/12-taos-sql/07-tag-index.md @@ -34,6 +34,13 @@ SELECT * FROM information_schema.INS_INDEXES You can also add filter conditions to limit the results. + +````sql +SHOW INDEXES FROM tbl_name [FROM db_name]; +SHOW INDEXES FROM [db_name.]tbl_name; +```` +Use the `SHOW INDEXES` command to show the indices that have been created for the specified database or table. + ## Detailed Specification 1. Indexes can improve query performance significantly if they are used properly. The operators supported by tag index include `=`, `>`, `>=`, `<`, `<=`. If you use these operators with tags, indexes can improve query performance significantly. However, for operators not in this scope, indexes don't help. More and more operators will be added in future. diff --git a/docs/en/12-taos-sql/27-indexing.md b/docs/en/12-taos-sql/27-indexing.md index 1badd38d17..dfe3ef527c 100644 --- a/docs/en/12-taos-sql/27-indexing.md +++ b/docs/en/12-taos-sql/27-indexing.md @@ -1,65 +1,134 @@ --- -title: Indexing -sidebar_label: Indexing -description: This document describes the SQL statements related to indexing in TDengine. +sidebar_label: Window Pre-Aggregation +title: Window Pre-Aggregation +description: Instructions for using Window Pre-Aggregation --- -TDengine supports SMA and tag indexing. +To improve the performance of aggregate function queries on large datasets, you can create Time-Range Small Materialized Aggregates (TSMA) objects. These objects perform pre-computation on specified aggregate functions using fixed time windows and store the computed results. When querying, you can retrieve the pre-computed results to enhance query performance. -## Create an Index +## Creating TSMA ```sql -CREATE INDEX index_name ON tb_name (col_name [, col_name] ...) +-- Create TSMA based on a super table or regular table +CREATE TSMA tsma_name ON [dbname.]table_name FUNCTION (func_name(func_param) [, ...] ) INTERVAL(time_duration); +-- Create a large window TSMA based on a small window TSMA +CREATE RECURSIVE TSMA tsma_name ON [db_name.]tsma_name1 INTERVAL(time_duration); -CREATE SMA INDEX index_name ON tb_name index_option - -index_option: - FUNCTION(functions) INTERVAL(interval_val [, interval_offset]) [SLIDING(sliding_val)] [WATERMARK(watermark_val)] [MAX_DELAY(max_delay_val)] - -functions: - function [, function] ... +time_duration: + number unit ``` -### tag Indexing - [tag index](../tag-index) +To create a TSMA, you need to specify the TSMA name, table name, function list, and window size. When creating a TSMA based on an existing TSMA, that is, when using the `RECURSIVE` keyword, you do not need to specify `FUNCTION()`: the new TSMA uses the same function list as the existing TSMA, and its INTERVAL must be an integer multiple of the window of the base TSMA. -### SMA Indexing +The naming rules for TSMA are similar to those for table names, with a maximum length equal to the table name length limit minus the length of the output table suffix. The table name length limit is 193 and the output table suffix is `_tsma_res_stb_`, so the maximum length of a TSMA name is 178. -Performs pre-aggregation on the specified column over the time window defined by the INTERVAL clause. The type is specified in functions_string. SMA indexing improves aggregate query performance for the specified time period. One supertable can only contain one SMA index. 
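For illustration, a minimal sketch of the two statements above; the database `power`, super table `meters`, and column `current` are hypothetical placeholders rather than objects defined in this document:

```sql
-- Hypothetical example: pre-aggregate power.meters in 5-minute windows,
-- then derive a 1-hour TSMA from it (1h is an integer multiple of 5m).
CREATE TSMA meters_tsma_5m ON power.meters FUNCTION(COUNT(ts), MIN(current), MAX(current), AVG(current)) INTERVAL(5m);
CREATE RECURSIVE TSMA meters_tsma_1h ON power.meters_tsma_5m INTERVAL(1h);
```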
+TSMA can only be created based on super tables and regular tables, not on subtables. -- The max, min, and sum functions are supported. - WATERMARK: Enter a value between 0ms and 900000ms. The most precise unit supported is milliseconds. The default value is 5 seconds. This option can be used only on supertables. - MAX_DELAY: Enter a value between 1ms and 900000ms. The most precise unit supported is milliseconds. The default value is the value of interval provided that it does not exceed 900000ms. This option can be used only on supertables. Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance. +In the function list, you can only specify supported aggregate functions (see below), and the number of function parameters must be 1, even if the current function supports multiple parameters. The function parameters must be ordinary column names, not tag columns. Duplicate functions and columns in the function list will be deduplicated; for example, two `avg(c1)` entries produce only one output. When calculating TSMA, all `intermediate results of the functions` will be output to another super table, and the output super table also includes all tag columns of the original table. The maximum number of functions in the function list is the maximum number of columns allowed in a table (including tag columns) minus the four additional columns added for TSMA calculation, namely `_wstart`, `_wend`, `_wduration`, and a new tag column `tbname`, minus the number of tag columns in the original table. If the number of columns exceeds the limit, an error `Too many columns` will be reported. + +Since the output of TSMA is a super table, the row length of the output table is subject to the maximum row length limit. The size of the `intermediate results of different functions` varies, but they are generally larger than the original data size. If the row length of the output table exceeds the maximum row length limit, an error `Row length exceeds max length` will be reported. In this case, you need to reduce the number of functions or split the commonly used functions into multiple TSMA objects. + +The window size is limited to [1ms ~ 1h]. The unit of INTERVAL is the same as the INTERVAL clause in the query, such as a (milliseconds), b (nanoseconds), h (hours), m (minutes), s (seconds), u (microseconds). + +TSMA is a database-level object, but its name is globally unique. The number of TSMA that can be created in the cluster is limited by the parameter `maxTsmaNum`, with a default value of 8 and a range of [0-12]. Note that since TSMA background calculation uses stream computing, creating a TSMA will create a stream. Therefore, the number of TSMA that can be created is also limited by the number of existing streams and the maximum number of streams that can be created. + +## Supported Functions +| function | comments | +|---|---| +|min|| +|max|| +|sum|| +|first|| +|last|| +|avg|| +|count| If you want to use count(*), define count(ts) in the TSMA| +|spread|| +|stddev|| +|hyperloglog|| +||| + +## Drop TSMA +```sql +DROP TSMA [db_name.]tsma_name; +``` + +If there are other TSMA created based on the TSMA being deleted, the delete operation will report an `Invalid drop base tsma, drop recursive tsma first` error. Therefore, all recursive TSMA must be deleted first. + +## TSMA Calculation +The calculation result of TSMA is a super table in the same database as the original table, but it is not visible to users. 
It cannot be deleted manually and is automatically deleted when `DROP TSMA` is executed. The calculation of TSMA is done through stream computing, which is a background asynchronous process. The calculation result of TSMA is not guaranteed to be real-time, but it can guarantee eventual correctness. + +When there is a large amount of historical data, after creating TSMA, the stream computing will first calculate the historical data. During this period, newly created TSMA will not be used. The affected data will be automatically recalculated when data updates, deletions, or expired data arrive. During the recalculation period, the TSMA query results are not guaranteed to be real-time. If you want to query real-time data, you can use the hint `/*+ skip_tsma() */` in the SQL statement or disable the `querySmaOptimize` parameter to query from the original data. + +## Using and Limitations of TSMA + +Client configuration parameter: `querySmaOptimize`, used to control whether to use TSMA during queries. Set it to `True` to use TSMA, and `False` to query from the original data. + +Client configuration parameter: `maxTsmaCalcDelay`, in seconds, is used to control the acceptable TSMA calculation delay for users. If the calculation progress of a TSMA is within this range from the latest time, the TSMA will be used. If it exceeds this range, it will not be used. The default value is 600 (10 minutes), with a minimum value of 600 (10 minutes) and a maximum value of 86400 (1 day). + +### Using TSMA During Queries + +The aggregate functions defined in TSMA can be directly used in most query scenarios. If multiple TSMA are available, the one with the largest window size is preferred. For unclosed windows, the calculation can be done using smaller window TSMA or the original data. However, there are certain scenarios where TSMA cannot be used (see below). In such cases, the entire query will be calculated using the original data. + +The default behavior for queries without specified window sizes is to prioritize the use of the largest window TSMA that includes all the aggregate functions used in the query. For example, `SELECT COUNT(*) FROM stable GROUP BY tbname` will use the TSMA with the largest window that includes the `count(ts)` function. Therefore, when using aggregate queries frequently, it is recommended to create TSMA objects with a larger window size. + +When the window size is specified, that is, when the query contains an `INTERVAL` clause, the largest TSMA window that evenly divides the query window is used. In window queries, the `INTERVAL` window size, the `OFFSET`, and the `SLIDING` all affect which TSMA can be used: a usable TSMA is one whose window size evenly divides the `INTERVAL`, `OFFSET`, and `SLIDING` of the query statement. Therefore, when using window queries frequently, consider the window size, as well as the offset and sliding size, when creating TSMA objects. + +Example 1. If TSMAs with window sizes of `5m` and `10m` have been created and the query is `INTERVAL(30m)`, the TSMA with window size of `10m` will be used. If the query is `INTERVAL(30m, 10m) SLIDING(5m)`, only the TSMA with window size of `5m` can be used for the query. + +### Limitations of Query + +When the parameter `querySmaOptimize` is enabled and there is no `skip_tsma()` hint, the following query scenarios cannot use TSMA: + +- When the aggregate functions defined in a TSMA do not cover the function list of the current query. 
+- The query uses a non-`INTERVAL` window, or the query window parameters (including `INTERVAL`, `SLIDING`, and `OFFSET`) are not integer multiples of the defined window size. For example, a TSMA with a 2m window cannot serve a 5-minute query window, but a 1m TSMA, if available, can be used. +- The `WHERE` condition filters on any regular column (a column that is neither the primary key timestamp column nor a tag column). +- When `PARTITION BY` or `GROUP BY` includes any regular column or an expression on it. +- When other faster optimization logic can be used, such as last cache optimization, if it meets the conditions for last optimization, it will be prioritized. If last optimization is not possible, then it will be determined whether TSMA optimization can be used. +- When the current TSMA calculation progress delay is greater than the configuration parameter `maxTsmaCalcDelay`. + +Some examples: ```sql -DROP DATABASE IF EXISTS d0; -CREATE DATABASE d0; -USE d0; -CREATE TABLE IF NOT EXISTS st1 (ts timestamp, c1 int, c2 float, c3 double) TAGS (t1 int unsigned); -CREATE TABLE ct1 USING st1 TAGS(1000); -CREATE TABLE ct2 USING st1 TAGS(2000); -INSERT INTO ct1 VALUES(now+0s, 10, 2.0, 3.0); -INSERT INTO ct1 VALUES(now+1s, 11, 2.1, 3.1)(now+2s, 12, 2.2, 3.2)(now+3s, 13, 2.3, 3.3); -CREATE SMA INDEX sma_index_name1 ON st1 FUNCTION(max(c1),max(c2),min(c1)) INTERVAL(5m,10s) SLIDING(5m) WATERMARK 5s MAX_DELAY 1m; --- query from SMA Index -ALTER LOCAL 'querySmaOptimize' '1'; -SELECT max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m); -SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m); --- query from raw data -ALTER LOCAL 'querySmaOptimize' '0'; +SELECT agg_func_list [, pseudo_col_list] FROM stable WHERE exprs [GROUP/PARTITION BY [tbname] [, tag_list]] [HAVING ...] [INTERVAL(time_duration, offset) SLIDING(duration)]...; + +-- create +CREATE TSMA tsma1 ON stable FUNCTION(COUNT(ts), SUM(c1), SUM(c3), MIN(c1), MIN(c3), AVG(c1)) INTERVAL(1m); +-- query +SELECT COUNT(*), SUM(c1) + SUM(c3) FROM stable; ---- use tsma1 +SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma1 +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h); ---use tsma1 +SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use, spread func not defined, although SPREAD can be calculated by MIN and MAX which are defined. +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1, time_duration not fit. Normally, query_time_duration should be multiple of create_duration. +SELECT COUNT(*), MIN(c1) FROM stable where c2 > 0; ---- can't use tsma1, can't do c2 filtering +SELECT COUNT(*) FROM stable GROUP BY c2; ---- can't use any tsma +SELECT MIN(c3), MIN(c2) FROM stable INTERVAL(1m); ---- can't use tsma1, c2 is not defined in tsma1. 
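-- Illustrative sketch, not part of the examples above: the /*+ skip_tsma() */ hint described earlier
-- should make this query bypass tsma1 and read the original data; exact planner behavior may vary.
SELECT /*+ skip_tsma() */ COUNT(*), MIN(c1) FROM stable INTERVAL(1m); ---- bypasses tsma1, reads original data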
+ +-- Another tsma2 created with INTERVAL(1h) based on tsma1 +CREATE RECURSIVE TSMA tsma2 on tsma1 INTERVAL(1h); +SELECT COUNT(*), SUM(c1) FROM stable; ---- use tsma2 +SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma2 +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(2h); ---use tsma2 +SELECT COUNT(*), MIN(c1) FROM stable WHERE ts < '2023-01-01 10:10:10' INTERVAL(30m); --use tsma1 +SELECT COUNT(*), MIN(c1) + MIN(c3) FROM stable INTERVAL(30m); ---use tsma1 +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h) SLIDING(30m); ---use tsma1 +SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use tsma1 or tsma2, spread func not defined +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1 or tsma2, time_duration not fit. Normally, query_time_duration should be multiple of create_duration. +SELECT COUNT(*), MIN(c1) FROM stable where c2 > 0; ---- can't use tsma1 or tsma2, can't do c2 filtering ``` -## Delete an Index +### Limitations of Usage + +After creating a TSMA, there are certain restrictions on operations that can be performed on the original table: + +- You must delete all TSMAs on the table before you can delete the table itself. +- All tag columns of the original table cannot be deleted, nor can the tag column names or sub-table tag values be modified. You must first delete the TSMA before you can delete the tag column. +- If some columns are being used by the TSMA, these columns cannot be deleted. You must first delete the TSMA. Adding new columns to the table is not affected; however, newly added columns are not included in any TSMA, so if you want to calculate on the new columns, you need to create new TSMAs for them. + +## Show TSMA ```sql -DROP INDEX index_name; +SHOW [db_name.]TSMAS; +SELECT * FROM information_schema.ins_tsma; ``` -## View Indices - -````sql -SHOW INDEXES FROM tbl_name [FROM db_name]; -SHOW INDEXES FROM [db_name.]tbl_name ; -```` - -Shows indices that have been created for the specified database or table. +If many functions are specified during creation and the column names are long, the displayed function list may be truncated (the maximum supported output is currently 256KB) \ No newline at end of file diff --git a/docs/en/14-reference/12-config/index.md b/docs/en/14-reference/12-config/index.md index 0bdf143a60..1a9df366e3 100755 --- a/docs/en/14-reference/12-config/index.md +++ b/docs/en/14-reference/12-config/index.md @@ -241,6 +241,16 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo | Default Value | 0 | | Notes | When this parameter is set to 0, last(\*)/last_row(\*)/first(\*) only returns the columns of the super table; When it is 1, return the columns and tags of the super table. | +### maxTsmaCalcDelay + +| Attribute | Description | +| -------- | -------------------------------------------------------------------------------------------------------------------------------------------- | +| Applicable | Client only | +| Meaning | Maximum TSMA calculation delay allowed for queries; if a TSMA's calculation delay is greater than the configured value, that TSMA will not be used. | +| Value Range | 600s - 86400s, i.e. 10 minutes to 1 day | +| Default value | 600s | + + ## Locale Parameters ### timezone @@ -760,6 +770,15 @@ The charset that takes effect is UTF-8. 
| Value Range | 1-10000| | Default Value | 20 | +### maxTsmaNum + +| Attribute | Description | +| --------- | ----------------------------- | +| Applicable | Server Only | +| Meaning | Maximum number of TSMA objects that can be created in the cluster | +| Value Range | 0-12 | +| Default Value | 8 | + ## 3.0 Parameters | # | **Parameter** | **Applicable to 2.x ** | **Applicable to 3.0 ** | Current behavior in 3.0 | diff --git a/docs/zh/12-taos-sql/07-tag-index.md b/docs/zh/12-taos-sql/07-tag-index.md index 0e5b3ba2fb..c016a5b513 100644 --- a/docs/zh/12-taos-sql/07-tag-index.md +++ b/docs/zh/12-taos-sql/07-tag-index.md @@ -34,6 +34,13 @@ SELECT * FROM information_schema.INS_INDEXES 也可以为上面的查询语句加上过滤条件以缩小查询范围。 +或者通过 SHOW 命令查看指定表上的索引 + +```sql +SHOW INDEXES FROM tbl_name [FROM db_name]; +SHOW INDEXES FROM [db_name.]tbl_name; +``` + ## 使用说明 1. 索引使用得当能够提升数据过滤的效率,目前支持的过滤算子有 `=`, `>`, `>=`, `<`, `<=`。如果查询过滤条件中使用了这些算子,则索引能够明显提升查询效率。但如果查询过滤条件中使用的是其它算子,则索引起不到作用,查询效率没有变化。未来会逐步添加更多的算子。 diff --git a/docs/zh/12-taos-sql/27-indexing.md b/docs/zh/12-taos-sql/27-indexing.md index cf8cac1bed..31057a67f8 100644 --- a/docs/zh/12-taos-sql/27-indexing.md +++ b/docs/zh/12-taos-sql/27-indexing.md @@ -1,66 +1,132 @@ --- -sidebar_label: 索引 -title: 索引 -description: 索引功能的使用细节 +sidebar_label: 窗口预聚集 +title: 窗口预聚集 +description: 窗口预聚集使用说明 --- -TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 tag 索引。 +为了提高大数据量的聚合函数查询性能,通过创建窗口预聚集 (TSMA Time-Range Small Materialized Aggregates) 对象, 使用固定时间窗口对指定的聚集函数进行预计算,并将计算结果存储下来,查询时通过查询预计算结果以提高查询性能。 -## 创建索引 +## 创建TSMA ```sql +-- 创建基于超级表或普通表的tsma +CREATE TSMA tsma_name ON [dbname.]table_name FUNCTION (func_name(func_param) [, ...] ) INTERVAL(time_duration); +-- 创建基于小窗口tsma的大窗口tsma +CREATE RECURSIVE TSMA tsma_name ON [db_name.]tsma_name1 INTERVAL(time_duration); -CREATE INDEX index_name ON tb_name index_option - -CREATE SMA INDEX index_name ON tb_name index_option - -index_option: - FUNCTION(functions) INTERVAL(interval_val [, interval_offset]) [SLIDING(sliding_val)] [WATERMARK(watermark_val)] [MAX_DELAY(max_delay_val)] - -functions: - function [, function] ... +time_duration: + number unit ``` -### tag 索引 - [tag 索引](../tag-index) +创建 TSMA 时需要指定 TSMA 名字, 表名字, 函数列表以及窗口大小. 当基于已有 TSMA 创建 TSMA 时, 即使用 `RECURSIVE` 关键字, 不需要指定 `FUNCTION()`, 将创建与已有 TSMA 相同的函数列表的TSMA, 且 INTERVAL 必须为所基于的TSMA窗口的整数倍。 -### SMA 索引 +其中 TSMA 命名规则与表名字类似, 长度最大限制为表名长度限制减去输出表后缀长度, 表名长度限制为193, 输出表后缀为`_tsma_res_stb_`, TSMA 名字最大长度为178. -对指定列按 INTERVAL 子句定义的时间窗口创建进行预聚合计算,预聚合计算类型由 functions_string 指定。SMA 索引能提升指定时间段的聚合查询的性能。目前,限制一个超级表只能创建一个 SMA INDEX。 +TSMA只能基于超级表和普通表创建, 不能基于子表创建. -- 支持的函数包括 MAX、MIN 和 SUM。 -- WATERMARK: 最小单位毫秒,取值范围 [0ms, 900000ms],默认值为 5 秒,只可用于超级表。 -- MAX_DELAY: 最小单位毫秒,取值范围 [1ms, 900000ms],默认值为 interval 的值(但不能超过最大值),只可用于超级表。注:不建议 MAX_DELAY 设置太小,否则会过于频繁的推送结果,影响存储和查询性能,如无特殊需求,取默认值即可。 +函数列表中只能指定支持的聚集函数(见下文), 并且函数参数必须为1个, 即使当前函数支持多个参数, 函数参数内必须为普通列名, 不能为标签列. 函数列表中完全相同的函数和列会被去重, 如同时创建两个avg(c1), 则只会计算一个输出. TSMA 计算时将会把所有`函数中间结果`都输出到另一张超级表中, 输出超级表还包含了原始表的所有tag列. 函数列表中函数个数最多支持创建表最大列个数(包括tag列)减去 TSMA 计算附加的四列, 分别为`_wstart`, `_wend`, `_wduration`, 以及一个新增tag列 `tbname`, 再减去原始表的tag列数. 若列个数超出限制, 会报`Too many columns`错误. + +由于TSMA输出为一张超级表, 因此输出表的行长度受最大行长度限制, 不同函数的`中间结果`大小各异, 一般都大于原始数据大小, 若输出表的行长度大于最大行长度限制, 将会报`Row length exceeds max length`错误. 此时需要减少函数个数或者将常用的函数进行分组拆分到多个TSMA中. + +窗口大小的限制为[1ms ~ 1h]. INTERVAL 的单位与查询中INTERVAL子句相同, 如 a (毫秒), b (纳秒), h (小时), m (分钟), s (秒), u (微秒). + +TSMA为库内对象, 但名字全局唯一. 集群内一共可创建TSMA个数受参数`maxTsmaNum`限制, 参数默认值为8, 范围: [0-12]. 注意, 由于TSMA后台计算使用流计算, 因此每创建一条TSMA, 将会创建一条流, 因此能够创建的TSMA条数也受当前已经存在的流条数和最大可创建流条数限制. 
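Because the total number of TSMA objects is capped by `maxTsmaNum` and each TSMA is backed by a stream, it can be worth checking current usage before creating another one. A small sketch: `information_schema.ins_tsma` is shown later in this document, while `SHOW STREAMS` is assumed to be available in this version:

```sql
-- Existing TSMA objects in the cluster (limited by maxTsmaNum, default 8)
SELECT * FROM information_schema.ins_tsma;
-- Each TSMA creates a background stream, so the stream limit applies as well
SHOW STREAMS;
```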
+ +## 支持的函数列表 +| 函数| 备注 | +|---|---| +|min|| +|max|| +|sum|| +|first|| +|last|| +|avg|| +|count| 若想使用count(*), 则应创建count(ts)函数| +|spread|| +|stddev|| +|hyperloglog|| +||| + +## 删除TSMA +```sql +DROP TSMA [db_name.]tsma_name; +``` +若存在其他TSMA基于当前被删除TSMA创建, 则删除操作报`Invalid drop base tsma, drop recursive tsma first`错误. 因此需先删除所有 Recursive TSMA. + +## TSMA的计算 +TSMA的计算结果为与原始表相同库下的一张超级表, 此表用户不可见. 不可删除, 在`DROP TSMA`时自动删除. TSMA的计算是通过流计算完成的, 此过程为后台异步过程, TSMA的计算结果不保证实时性, 但可以保证最终正确性. + +当存在大量历史数据时, 创建TSMA之后, 流计算将会首先计算历史数据, 此期间新创建的TSMA不会被使用. 数据更新删除或者过期数据到来时自动重新计算影响部分数据。 在重新计算期间 TSMA 查询结果不保证实时性。若希望查询实时数据, 可以通过在 SQL 中添加 hint `/*+ skip_tsma() */` 或者关闭参数`querySmaOptimize`从原始数据查询。 + +## TSMA的使用与限制 + +客户端配置参数: `querySmaOptimize`, 用于控制查询时是否使用TSMA, `True`为使用, `False`为不使用即从原始数据查询. + +客户端配置参数:`maxTsmaCalcDelay`,单位 s,用于控制用户可以接受的 TSMA 计算延迟,若 TSMA 的计算进度与最新时间差距在此范围内, 则该 TSMA 将会被使用, 若超出该范围, 则不使用, 默认值: 600(10 分钟), 最小值: 600(10 分钟), 最大值: 86400(1 天). + +### 查询时使用TSMA + +已在 TSMA 中定义的 agg 函数在大部分查询场景下都可直接使用, 若存在多个可用的 TSMA, 优先使用大窗口的 TSMA, 未闭合窗口通过查询小窗口TSMA或者原始数据计算。 同时也有某些场景不能使用 TSMA(见下文)。 不可用时整个查询将使用原始数据进行计算。 + +未指定窗口大小的查询语句默认优先使用包含所有查询聚合函数的最大窗口 TSMA 进行数据的计算。 如`SELECT COUNT(*) FROM stable GROUP BY tbname`将会使用包含count(ts)且窗口最大的TSMA。因此若使用聚合查询频率高时, 应当尽可能创建大窗口的TSMA. + +指定窗口大小时即 `INTERVAL` 语句,使用最大的可整除窗口 TSMA。 窗口查询中, `INTERVAL` 的窗口大小, `OFFSET` 以及 `SLIDING` 都影响能使用的 TSMA 窗口大小, 可整除窗口 TSMA 即 TSMA 窗口大小可被查询语句的 `INTERVAL, OFFSET, SLIDING` 整除的窗口。因此若使用窗口查询较多时, 需要考虑经常查询的窗口大小, 以及 offset, sliding大小来创建TSMA. + +例 1. 如 创建 TSMA 窗口大小 `5m` 一条, `10m` 一条, 查询时 `INTERVAL(30m)`, 那么优先使用 `10m` 的 TSMA, 若查询为 `INTERVAL(30m, 10m) SLIDING(5m)`, 那么仅可使用 `5m` 的 TSMA 查询。 + + +### 查询限制 + +在开启了参数`querySmaOptimize`并且无`skip_tsma()` hint时, 以下查询场景无法使用TSMA: + +- 某个TSMA 中定义的 agg 函数不能覆盖当前查询的函数列表时 +- 非 `INTERVAL` 的其他窗口,或者 `INTERVAL` 查询窗口大小(包括 `INTERVAL,SLIDING,OFFSET`)不是定义窗口的整数倍,如定义窗口为 2m,查询使用 5 分钟窗口,但若存在 1m 的窗口,则可以使用。 +- 查询 `WHERE` 条件中包含任意普通列(非主键时间列)的过滤。 +- `PARTITION` 或者 `GROUP BY` 包含任意普通列或其表达式时 +- 可以使用其他更快的优化逻辑时, 如last cache优化, 若符合last优化的条件, 则先走last 优化, 无法走last时, 再判断是否可以走tsma优化 +- 当前 TSMA 计算进度延迟大于配置参数 `maxTsmaCalcDelay`时 + +下面是一些例子: ```sql -DROP DATABASE IF EXISTS d0; -CREATE DATABASE d0; -USE d0; -CREATE TABLE IF NOT EXISTS st1 (ts timestamp, c1 int, c2 float, c3 double) TAGS (t1 int unsigned); -CREATE TABLE ct1 USING st1 TAGS(1000); -CREATE TABLE ct2 USING st1 TAGS(2000); -INSERT INTO ct1 VALUES(now+0s, 10, 2.0, 3.0); -INSERT INTO ct1 VALUES(now+1s, 11, 2.1, 3.1)(now+2s, 12, 2.2, 3.2)(now+3s, 13, 2.3, 3.3); -CREATE SMA INDEX sma_index_name1 ON st1 FUNCTION(max(c1),max(c2),min(c1)) INTERVAL(5m,10s) SLIDING(5m) WATERMARK 5s MAX_DELAY 1m; --- 从 SMA 索引查询 -ALTER LOCAL 'querySmaOptimize' '1'; -SELECT max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m); -SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m); --- 从原始数据查询 -ALTER LOCAL 'querySmaOptimize' '0'; +SELECT agg_func_list [, pseudo_col_list] FROM stable WHERE exprs [GROUP/PARTITION BY [tbname] [, tag_list]] [HAVING ...] [INTERVAL(time_duration, offset) SLIDING(duration)]...; + +-- 创建 +CREATE TSMA tsma1 ON stable FUNCTION(COUNT(ts), SUM(c1), SUM(c3), MIN(c1), MIN(c3), AVG(c1)) INTERVAL(1m); +-- 查询 +SELECT COUNT(*), SUM(c1) + SUM(c3) FROM stable; ---- use tsma1 +SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma1 +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h); ---use tsma1 +SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use, spread func not defined, although SPREAD can be calculated by MIN and MAX which are defined. 
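-- Illustrative sketch, not part of the examples above: the /*+ skip_tsma() */ hint mentioned earlier
-- should make this query bypass tsma1 and read the original data; exact planner behavior may vary.
SELECT /*+ skip_tsma() */ COUNT(*), MIN(c1) FROM stable INTERVAL(1m); ---- bypasses tsma1, reads original data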
+SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1, time_duration not fit. Normally, query_time_duration should be multiple of create_duration. +SELECT COUNT(*), MIN(c1) FROM stable where c2 > 0; ---- can't use tsma1, can't do c2 filtering +SELECT COUNT(*) FROM stable GROUP BY c2; ---- can't use any tsma +SELECT MIN(c3), MIN(c2) FROM stable INTERVAL(1m); ---- can't use tsma1, c2 is not defined in tsma1. + +-- Another tsma2 created with INTERVAL(1h) based on tsma1 +CREATE RECURSIVE TSMA tsma2 on tsma1 INTERVAL(1h); +SELECT COUNT(*), SUM(c1) FROM stable; ---- use tsma2 +SELECT COUNT(*), AVG(c1) FROM stable GROUP/PARTITION BY tbname, tag1, tag2; --- use tsma2 +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(2h); ---use tsma2 +SELECT COUNT(*), MIN(c1) FROM stable WHERE ts < '2023-01-01 10:10:10' INTERVAL(30m); --use tsma1 +SELECT COUNT(*), MIN(c1) + MIN(c3) FROM stable INTERVAL(30m); ---use tsma1 +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(1h) SLIDING(30m); ---use tsma1 +SELECT COUNT(*), MIN(c1), SPREAD(c1) FROM stable INTERVAL(1h); ----- can't use tsma1 or tsma2, spread func not defined +SELECT COUNT(*), MIN(c1) FROM stable INTERVAL(30s); ----- can't use tsma1 or tsma2, time_duration not fit. Normally, query_time_duration should be multiple of create_duration. +SELECT COUNT(*), MIN(c1) FROM stable where c2 > 0; ---- can't use tsma1 or tsma2, can't do c2 filtering ``` -## 删除索引 +### 使用限制 +创建TSMA之后, 对原始超级表的操作有以下限制: + +- 必须删除该表上的所有TSMA才能删除该表. +- 原始表所有tag列不能删除, 也不能修改tag列名或子表的tag值, 必须先删除TSMA, 才能删除tag列. +- 若某些列被TSMA使用了, 则这些列不能被删除, 必须先删除TSMA. 添加列不受影响, 但是新添加的列不在任何TSMA中, 因此若要计算新增列, 需要新创建其他的TSMA. + +## 查看TSMA ```sql -DROP INDEX index_name; +SHOW [db_name.]TSMAS; +SELECT * FROM information_schema.ins_tsma; ``` - -## 查看索引 - -````sql -SHOW INDEXES FROM tbl_name [FROM db_name]; -SHOW INDEXES FROM [db_name.]tbl_name; -```` - -显示在所指定的数据库或表上已创建的索引。 +若创建时指定了较多的函数, 且列名较长, 在显示函数列表时可能会被截断(目前最大支持输出256KB). \ No newline at end of file diff --git a/docs/zh/14-reference/12-config/index.md b/docs/zh/14-reference/12-config/index.md index 1bada64431..cb6ae8451f 100755 --- a/docs/zh/14-reference/12-config/index.md +++ b/docs/zh/14-reference/12-config/index.md @@ -240,6 +240,16 @@ taos -C | 缺省值 | 0 | | 补充说明 | 该参数设置为 0 时,last(\*)/last_row(\*)/first(\*) 只返回超级表的普通列;为 1 时,返回超级表的普通列和标签列 | +### maxTsmaCalcDelay + +| 属性 | 说明 | +| -------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | +| 适用范围 | 仅客户端适用 | +| 含义 | 查询时客户端可允许的tsma计算延迟, 若tsma的计算延迟大于配置值, 则该TSMA将不会被使用. 
| | 取值范围 | 600s - 86400s, 即10分钟-1天 | | 缺省值 | 600s | + + ## 区域相关 ### timezone @@ -745,6 +755,15 @@ charset 的有效值是 UTF-8。 | 取值范围 | 1-10000 | | 缺省值 | 20 | +### maxTsmaNum + +| 属性 | 说明 | +| -------- | --------------------------- | +| 适用范围 | 仅服务端适用 | +| 含义 | 集群内可创建的TSMA个数 | +| 取值范围 | 0-12 | +| 缺省值 | 8 | + ## 压缩参数 ### compressMsgSize diff --git a/docs/zh/21-tdinternal/01-arch.md b/docs/zh/21-tdinternal/01-arch.md index e2480b6682..52212bc714 100644 --- a/docs/zh/21-tdinternal/01-arch.md +++ b/docs/zh/21-tdinternal/01-arch.md @@ -201,7 +201,7 @@ TDengine 采用数据驱动的方式让缓存中的数据写入硬盘进行持 除此之外,TDengine 也提供了数据分级存储的功能,将不同时间段的数据存储在挂载的不同介质上的目录里,从而实现不同“热度”的数据存储在不同的存储介质上,充分利用存储,节约成本。比如,最新采集的数据需要经常访问,对硬盘的读取性能要求高,那么用户可以配置将这些数据存储在 SSD 盘上。超过一定期限的数据,查询需求量没有那么高,那么可以存储在相对便宜的 HDD 盘上。 -多级存储支持 3 级,每级最多可配置 16 个挂载点。 +多级存储支持 3 级,每级最多可配置 128 个挂载点。 TDengine 多级存储配置方式如下(在配置文件/etc/taos/taos.cfg 中): diff --git a/include/common/tcommon.h b/include/common/tcommon.h index 2c4a00a72d..d28477ae40 100644 --- a/include/common/tcommon.h +++ b/include/common/tcommon.h @@ -15,7 +15,7 @@ #ifndef _TD_COMMON_DEF_H_ #define _TD_COMMON_DEF_H_ -// #include "taosdef.h" + #include "tarray.h" #include "tmsg.h" #include "tvariant.h" @@ -412,6 +412,7 @@ typedef struct STUidTagInfo { #define UD_TAG_COLUMN_INDEX 2 int32_t taosGenCrashJsonMsg(int signum, char** pMsg, int64_t clusterId, int64_t startTime); +int32_t dumpConfToDataBlock(SSDataBlock* pBlock, int32_t startCol); #define TSMA_RES_STB_POSTFIX "_tsma_res_stb_" #define MD5_OUTPUT_LEN 32 diff --git a/include/common/tglobal.h b/include/common/tglobal.h index 00ed8bfb8e..3b8929f241 100644 --- a/include/common/tglobal.h +++ b/include/common/tglobal.h @@ -260,7 +260,7 @@ int32_t taosInitCfg(const char *cfgDir, const char **envCmd, const char *envFile bool tsc); void taosCleanupCfg(); -int32_t taosCfgDynamicOptions(SConfig *pCfg, char *name, bool forServer); +int32_t taosCfgDynamicOptions(SConfig *pCfg, const char *name, bool forServer); struct SConfig *taosGetCfg(); diff --git a/include/libs/stream/tstream.h b/include/libs/stream/tstream.h index 1b30fdcb01..4423bf94b6 100644 --- a/include/libs/stream/tstream.h +++ b/include/libs/stream/tstream.h @@ -424,7 +424,7 @@ typedef struct STaskOutputInfo { }; int8_t type; STokenBucket* pTokenBucket; - SArray* pDownstreamUpdateList; + SArray* pNodeEpsetUpdateList; } STaskOutputInfo; typedef struct SUpstreamInfo { @@ -435,6 +435,7 @@ typedef struct SDownstreamStatusInfo { int64_t reqId; int32_t taskId; + int32_t vgId; int64_t rspTs; int32_t status; } SDownstreamStatusInfo; @@ -445,6 +446,8 @@ typedef struct STaskCheckInfo { int32_t notReadyTasks; int32_t inCheckProcess; int32_t stopCheckProcess; + int32_t notReadyRetryCount; + int32_t timeoutRetryCount; tmr_h checkRspTmr; TdThreadMutex checkInfoLock; } STaskCheckInfo; @@ -484,7 +487,7 @@ struct SStreamTask { SSHashObj* pNameMap; void* pBackend; int8_t subtableWithoutMd5; - char reserve[255]; + char reserve[256]; }; typedef int32_t (*startComplete_fn_t)(struct SStreamMeta*); @@ -845,12 +848,9 @@ int32_t streamTaskSetDb(SStreamMeta* pMeta, void* pTask, char* key); bool streamTaskIsSinkTask(const SStreamTask* pTask); int32_t streamTaskSendCheckpointReq(SStreamTask* pTask); -int32_t streamTaskAddReqInfo(STaskCheckInfo* pInfo, int64_t reqId, int32_t taskId, const char* id); -int32_t streamTaskUpdateCheckInfo(STaskCheckInfo* pInfo, int32_t taskId, int32_t status, int64_t rspTs, int64_t reqId, - int32_t* pNotReady, const char* id); -void streamTaskCleanCheckInfo(STaskCheckInfo* pInfo); 
int32_t streamTaskStartMonitorCheckRsp(SStreamTask* pTask); int32_t streamTaskStopMonitorCheckRsp(STaskCheckInfo* pInfo, const char* id); +void streamTaskCleanupCheckInfo(STaskCheckInfo* pInfo); void streamTaskStatusInit(STaskStatusEntry* pEntry, const SStreamTask* pTask); void streamTaskStatusCopy(STaskStatusEntry* pDst, const STaskStatusEntry* pSrc); diff --git a/include/util/tconfig.h b/include/util/tconfig.h index 45abe2ff83..2095601e14 100644 --- a/include/util/tconfig.h +++ b/include/util/tconfig.h @@ -98,34 +98,32 @@ typedef struct { const char *value; } SConfigPair; -typedef struct SConfig { - ECfgSrcType stype; - SArray *array; -} SConfig; - -SConfig *cfgInit(); -int32_t cfgLoad(SConfig *pCfg, ECfgSrcType cfgType, const void *sourceStr); -int32_t cfgLoadFromArray(SConfig *pCfg, SArray *pArgs); // SConfigPair -void cfgCleanup(SConfig *pCfg); +typedef struct SConfig SConfig; +typedef struct SConfigIter SConfigIter; +SConfig *cfgInit(); +int32_t cfgLoad(SConfig *pCfg, ECfgSrcType cfgType, const void *sourceStr); +int32_t cfgLoadFromArray(SConfig *pCfg, SArray *pArgs); // SConfigPair +void cfgCleanup(SConfig *pCfg); int32_t cfgGetSize(SConfig *pCfg); SConfigItem *cfgGetItem(SConfig *pCfg, const char *name); int32_t cfgSetItem(SConfig *pCfg, const char *name, const char *value, ECfgSrcType stype); +int32_t cfgCheckRangeForDynUpdate(SConfig *pCfg, const char *name, const char *pVal, bool isServer); +SConfigIter *cfgCreateIter(SConfig *pConf); +SConfigItem *cfgNextIter(SConfigIter *pIter); +void cfgDestroyIter(SConfigIter *pIter); -int32_t cfgCheckRangeForDynUpdate(SConfig *pCfg, const char *name, const char *pVal, bool isServer); - +// clang-format off int32_t cfgAddBool(SConfig *pCfg, const char *name, bool defaultVal, int8_t scope, int8_t dynScope); -int32_t cfgAddInt32(SConfig *pCfg, const char *name, int32_t defaultVal, int64_t minval, int64_t maxval, int8_t scope, - int8_t dynScope); -int32_t cfgAddInt64(SConfig *pCfg, const char *name, int64_t defaultVal, int64_t minval, int64_t maxval, int8_t scope, - int8_t dynScope); -int32_t cfgAddFloat(SConfig *pCfg, const char *name, float defaultVal, float minval, float maxval, int8_t scope, - int8_t dynScope); +int32_t cfgAddInt32(SConfig *pCfg, const char *name, int32_t defaultVal, int64_t minval, int64_t maxval, int8_t scope, int8_t dynScope); +int32_t cfgAddInt64(SConfig *pCfg, const char *name, int64_t defaultVal, int64_t minval, int64_t maxval, int8_t scope, int8_t dynScope); +int32_t cfgAddFloat(SConfig *pCfg, const char *name, float defaultVal, float minval, float maxval, int8_t scope, int8_t dynScope); int32_t cfgAddString(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope); int32_t cfgAddDir(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope); int32_t cfgAddLocale(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope); int32_t cfgAddCharset(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope); int32_t cfgAddTimezone(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope); +// clang-format on const char *cfgStypeStr(ECfgSrcType type); const char *cfgDtypeStr(ECfgDataType type); diff --git a/include/util/tdef.h b/include/util/tdef.h index 6402ef902c..6ded54dc00 100644 --- a/include/util/tdef.h +++ b/include/util/tdef.h @@ -188,8 +188,8 @@ typedef enum ELogicConditionType { LOGIC_COND_TYPE_NOT, } ELogicConditionType; -#define ENCRYPTED_LEN(len) (len/16) * 16 + 
(len%16?1:0) * 16 -#define ENCRYPT_KEY_LEN 16 +#define ENCRYPTED_LEN(len) (len / 16) * 16 + (len % 16 ? 1 : 0) * 16 +#define ENCRYPT_KEY_LEN 16 #define ENCRYPT_KEY_LEN_MIN 8 #define TSDB_INT32_ID_LEN 11 @@ -525,7 +525,7 @@ typedef enum ELogicConditionType { #define TSDB_ARB_DUMMY_TIME 4765104000000 // 2121-01-01 00:00:00.000, :P #define TFS_MAX_TIERS 3 -#define TFS_MAX_DISKS_PER_TIER 16 +#define TFS_MAX_DISKS_PER_TIER 128 #define TFS_MAX_DISKS (TFS_MAX_TIERS * TFS_MAX_DISKS_PER_TIER) #define TFS_MIN_LEVEL 0 #define TFS_MAX_LEVEL (TFS_MAX_TIERS - 1) @@ -535,7 +535,7 @@ typedef enum ELogicConditionType { enum { TRANS_STAT_INIT = 0, TRANS_STAT_EXECUTING, TRANS_STAT_EXECUTED, TRANS_STAT_ROLLBACKING, TRANS_STAT_ROLLBACKED }; enum { TRANS_OPER_INIT = 0, TRANS_OPER_EXECUTE, TRANS_OPER_ROLLBACK }; -enum { ENCRYPT_KEY_STAT_UNKNOWN = 0, ENCRYPT_KEY_STAT_UNSET, ENCRYPT_KEY_STAT_SET, ENCRYPT_KEY_STAT_LOADED}; +enum { ENCRYPT_KEY_STAT_UNKNOWN = 0, ENCRYPT_KEY_STAT_UNSET, ENCRYPT_KEY_STAT_SET, ENCRYPT_KEY_STAT_LOADED }; typedef struct { char dir[TSDB_FILENAME_LEN]; diff --git a/packaging/deb/makedeb.sh b/packaging/deb/makedeb.sh index c029b1871a..337407f8d2 100755 --- a/packaging/deb/makedeb.sh +++ b/packaging/deb/makedeb.sh @@ -57,9 +57,9 @@ else arch=$cpuType fi -echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper" +echo "${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r ${arch} -e taoskeeper -t ver-${tdengine_ver}" echo "$top_dir=${top_dir}" -taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper` +taoskeeper_binary=`${top_dir}/../enterprise/packaging/build_taoskeeper.sh -r $arch -e taoskeeper -t ver-${tdengine_ver}` echo "taoskeeper_binary: ${taoskeeper_binary}" # copy config files @@ -76,6 +76,13 @@ if [ -f "${compile_dir}/test/cfg/taosadapter.service" ]; then cp ${compile_dir}/test/cfg/taosadapter.service ${pkg_dir}${install_home_path}/cfg || : fi +if [ -f "%{_compiledir}/../../../explorer/target/taos-explorer.service" ]; then + cp %{_compiledir}/../../../explorer/target/taos-explorer.service ${pkg_dir}${install_home_path}/cfg || : +fi +if [ -f "%{_compiledir}/../../../explorer/server/example/explorer.toml" ]; then + cp %{_compiledir}/../../../explorer/server/example/explorer.toml ${pkg_dir}${install_home_path}/cfg || : +fi + cp ${taoskeeper_binary} ${pkg_dir}${install_home_path}/bin #cp ${compile_dir}/../packaging/deb/taosd ${pkg_dir}${install_home_path}/init.d cp ${compile_dir}/../packaging/tools/post.sh ${pkg_dir}${install_home_path}/script @@ -93,6 +100,10 @@ if [ -f "${compile_dir}/build/bin/taosadapter" ]; then cp ${compile_dir}/build/bin/taosadapter ${pkg_dir}${install_home_path}/bin ||: fi +if [ -f "${compile_dir}/../../../explorer/target/release/taos-explorer" ]; then + cp ${compile_dir}/../../../explorer/target/release/taos-explorer ${pkg_dir}${install_home_path}/bin ||: +fi + cp ${compile_dir}/build/bin/taos ${pkg_dir}${install_home_path}/bin cp ${compile_dir}/build/lib/${libfile} ${pkg_dir}${install_home_path}/driver [ -f ${compile_dir}/build/lib/${wslibfile} ] && cp ${compile_dir}/build/lib/${wslibfile} ${pkg_dir}${install_home_path}/driver ||: diff --git a/packaging/rpm/tdengine.spec b/packaging/rpm/tdengine.spec index b846cd447b..6b324486b2 100644 --- a/packaging/rpm/tdengine.spec +++ b/packaging/rpm/tdengine.spec @@ -72,6 +72,14 @@ if [ -f %{_compiledir}/../build-taoskeeper/taoskeeper.service ]; then cp %{_compiledir}/../build-taoskeeper/taoskeeper.service %{buildroot}%{homepath}/cfg ||: fi +if [ -f 
%{_compiledir}/../../../explorer/target/taos-explorer.service ]; then + cp %{_compiledir}/../../../explorer/target/taos-explorer.service %{buildroot}%{homepath}/cfg ||: +fi + +if [ -f %{_compiledir}/../../../explorer/server/example/explorer.toml ]; then + cp %{_compiledir}/../../../explorer/server/example/explorer.toml %{buildroot}%{homepath}/cfg ||: +fi + #cp %{_compiledir}/../packaging/rpm/taosd %{buildroot}%{homepath}/init.d cp %{_compiledir}/../packaging/tools/post.sh %{buildroot}%{homepath}/script cp %{_compiledir}/../packaging/tools/preun.sh %{buildroot}%{homepath}/script @@ -84,6 +92,10 @@ cp %{_compiledir}/build/bin/udfd %{buildroot}%{homepath}/bin cp %{_compiledir}/build/bin/taosBenchmark %{buildroot}%{homepath}/bin cp %{_compiledir}/build/bin/taosdump %{buildroot}%{homepath}/bin +if [ -f %{_compiledir}/../../../explorer/target/release/taos-explorer ]; then + cp %{_compiledir}/../../../explorer/target/release/taos-explorer %{buildroot}%{homepath}/bin +fi + if [ -f %{_compiledir}/../build-taoskeeper/taoskeeper ]; then cp %{_compiledir}/../build-taoskeeper/taoskeeper %{buildroot}%{homepath}/bin fi diff --git a/packaging/tools/install.sh b/packaging/tools/install.sh index ae774a3289..5a83cdc6a8 100755 --- a/packaging/tools/install.sh +++ b/packaging/tools/install.sh @@ -16,49 +16,27 @@ serverFqdn="" script_dir=$(dirname $(readlink -f "$0")) # Dynamic directory -clientName="taos" -serverName="taosd" +PREFIX="taos" +clientName="${PREFIX}" +serverName="${PREFIX}d" udfdName="udfd" -configFile="taos.cfg" +configFile="${PREFIX}.cfg" productName="TDengine" emailName="taosdata.com" -uninstallScript="rmtaos" -historyFile="taos_history" +uninstallScript="rm${PREFIX}" +historyFile="${PREFIX}_history" tarName="package.tar.gz" -dataDir="/var/lib/taos" -logDir="/var/log/taos" -configDir="/etc/taos" -installDir="/usr/local/taos" -adapterName="taosadapter" -benchmarkName="taosBenchmark" -dumpName="taosdump" -demoName="taosdemo" -xname="taosx" -keeperName="taoskeeper" - -clientName2="taos" -serverName2="${clientName2}d" -configFile2="${clientName2}.cfg" -productName2="TDengine" -emailName2="taosdata.com" -xname2="${clientName2}x" -adapterName2="${clientName2}adapter" -keeperName2="${clientName2}keeper" - -explorerName="${clientName2}-explorer" -benchmarkName2="${clientName2}Benchmark" -demoName2="${clientName2}demo" -dumpName2="${clientName2}dump" -uninstallScript2="rm${clientName2}" - -historyFile="${clientName2}_history" -logDir="/var/log/${clientName2}" -configDir="/etc/${clientName2}" -installDir="/usr/local/${clientName2}" - -data_dir=${dataDir} -log_dir=${logDir} -cfg_install_dir=${configDir} +dataDir="/var/lib/${PREFIX}" +logDir="/var/log/${PREFIX}" +configDir="/etc/${PREFIX}" +installDir="/usr/local/${PREFIX}" +adapterName="${PREFIX}adapter" +benchmarkName="${PREFIX}Benchmark" +dumpName="${PREFIX}dump" +demoName="${PREFIX}demo" +xname="${PREFIX}x" +explorerName="${PREFIX}-explorer" +keeperName="${PREFIX}keeper" bin_link_dir="/usr/bin" lib_link_dir="/usr/lib" @@ -71,7 +49,6 @@ install_main_dir=${installDir} bin_dir="${installDir}/bin" service_config_dir="/etc/systemd/system" -web_port=6041 # Color setting RED='\033[0;31m' @@ -179,6 +156,26 @@ done #echo "verType=${verType} interactiveFqdn=${interactiveFqdn}" +tools=(${clientName} ${benchmarkName} ${dumpName} ${demoName} remove.sh udfd set_core.sh TDinsight.sh start_pre.sh) +if [ "${verMode}" == "cluster" ]; then + services=(${serverName} ${adapterName} ${xname} ${explorerName} ${keeperName}) +elif [ "${verMode}" == "edge" ]; then + if [ 
"${pagMode}" == "full" ]; then + services=(${serverName} ${adapterName} ${keeperName} ${explorerName}) + else + services=(${serverName}) + tools=(${clientName} ${benchmarkName} remove.sh start_pre.sh) + fi +else + services=(${serverName} ${adapterName} ${xname} ${explorerName} ${keeperName}) +fi + +function install_services() { + for service in "${services[@]}"; do + install_service ${service} + done +} + function kill_process() { pid=$(ps -ef | grep "$1" | grep -v "grep" | awk '{print $2}') if [ -n "$pid" ]; then @@ -196,6 +193,7 @@ function install_main_path() { ${csudo}mkdir -p ${install_main_dir}/driver ${csudo}mkdir -p ${install_main_dir}/examples ${csudo}mkdir -p ${install_main_dir}/include + ${csudo}mkdir -p ${configDir} # ${csudo}mkdir -p ${install_main_dir}/init.d if [ "$verMode" == "cluster" ]; then ${csudo}mkdir -p ${install_main_dir}/share @@ -208,44 +206,48 @@ function install_main_path() { function install_bin() { # Remove links - ${csudo}rm -f ${bin_link_dir}/${clientName2} || : - ${csudo}rm -f ${bin_link_dir}/${serverName2} || : - ${csudo}rm -f ${bin_link_dir}/${udfdName} || : - ${csudo}rm -f ${bin_link_dir}/${adapterName} || : - ${csudo}rm -f ${bin_link_dir}/${uninstallScript2} || : - ${csudo}rm -f ${bin_link_dir}/${demoName2} || : - ${csudo}rm -f ${bin_link_dir}/${benchmarkName2} || : - ${csudo}rm -f ${bin_link_dir}/${dumpName2} || : - ${csudo}rm -f ${bin_link_dir}/${keeperName2} || : - ${csudo}rm -f ${bin_link_dir}/set_core || : - ${csudo}rm -f ${bin_link_dir}/TDinsight.sh || : + for tool in "${tools[@]}"; do + ${csudo}rm -f ${bin_link_dir}/${tool} || : + done - ${csudo}cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo}chmod 0555 ${install_main_dir}/bin/* + for service in "${services[@]}"; do + ${csudo}rm -f ${bin_link_dir}/${service} || : + done + + if [ "${verType}" == "client" ]; then + ${csudo}cp -r ${script_dir}/bin/${clientName} ${install_main_dir}/bin + ${csudo}cp -r ${script_dir}/bin/${benchmarkName} ${install_main_dir}/bin + ${csudo}cp -r ${script_dir}/bin/${dumpName} ${install_main_dir}/bin + ${csudo}cp -r ${script_dir}/bin/remove.sh ${install_main_dir}/bin + else + ${csudo}cp -r ${script_dir}/bin/* ${install_main_dir}/bin + fi + + if [[ "${verMode}" == "cluster" && "${verType}" != "client" ]]; then + if [ -d ${script_dir}/${xname}/bin ]; then + ${csudo}cp -r ${script_dir}/${xname}/bin/* ${install_main_dir}/bin + fi + fi + + if [ -f ${script_dir}/bin/quick_deploy.sh ]; then + ${csudo}cp -r ${script_dir}/bin/quick_deploy.sh ${install_main_dir}/bin + fi + + ${csudo}chmod 0555 ${install_main_dir}/bin/* + [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}mv ${install_main_dir}/bin/remove.sh ${install_main_dir}/uninstall.sh || : #Make link - [ -x ${install_main_dir}/bin/${clientName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${clientName2} ${bin_link_dir}/${clientName2} || : - [ -x ${install_main_dir}/bin/${serverName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${serverName2} ${bin_link_dir}/${serverName2} || : - [ -x ${install_main_dir}/bin/${udfdName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${udfdName} ${bin_link_dir}/${udfdName} || : - [ -x ${install_main_dir}/bin/${adapterName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${adapterName2} ${bin_link_dir}/${adapterName2} || : - [ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName2} ${bin_link_dir}/${demoName2} || : - [ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName2} 
${bin_link_dir}/${benchmarkName2} || : - [ -x ${install_main_dir}/bin/${dumpName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${dumpName2} ${bin_link_dir}/${dumpName2} || : - [ -x ${install_main_dir}/bin/${keeperName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${keeperName2} ${bin_link_dir}/${keeperName2} || : - [ -x ${install_main_dir}/bin/TDinsight.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/TDinsight.sh ${bin_link_dir}/TDinsight.sh || : - if [ "$clientName2" == "${clientName}" ]; then - [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || : - fi - [ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo}ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || : + for tool in "${tools[@]}"; do + if [ "${tool}" == "remove.sh" ]; then + [ -x ${install_main_dir}/uninstall.sh ] && ${csudo}ln -sf ${install_main_dir}/uninstall.sh ${bin_link_dir}/${uninstallScript} || : + else + [ -x ${install_main_dir}/bin/${tool} ] && ${csudo}ln -sf ${install_main_dir}/bin/${tool} ${bin_link_dir}/${tool} || : + fi + done - if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then - ${csudo}rm -f ${bin_link_dir}/${xname2} || : - ${csudo}rm -f ${bin_link_dir}/${explorerName} || : - - #Make link - [ -x ${install_main_dir}/bin/${xname2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${xname2} ${bin_link_dir}/${xname2} || : - [ -x ${install_main_dir}/bin/${explorerName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${explorerName} ${bin_link_dir}/${explorerName} || : - [ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript2} || : - fi + for service in "${services[@]}"; do + [ -x ${install_main_dir}/bin/${service} ] && ${csudo}ln -sf ${install_main_dir}/bin/${service} ${bin_link_dir}/${service} || : + done } function install_lib() { @@ -356,7 +358,7 @@ function install_header() { ${csudo}ln -sf ${install_main_dir}/include/taosdef.h ${inc_link_dir}/taosdef.h ${csudo}ln -sf ${install_main_dir}/include/taoserror.h ${inc_link_dir}/taoserror.h ${csudo}ln -sf ${install_main_dir}/include/tdef.h ${inc_link_dir}/tdef.h - ${csudo}ln -sf ${install_main_dir}/include/taosudf.h ${inc_link_dir}/taosudf.h + ${csudo}ln -sf ${install_main_dir}/include/taosudf.h ${inc_link_dir}/taosudf.h [ -f ${install_main_dir}/include/taosws.h ] && ${csudo}ln -sf ${install_main_dir}/include/taosws.h ${inc_link_dir}/taosws.h || : } @@ -415,10 +417,10 @@ function set_hostname() { # ${csudo}sed -i -r "s/#*\s*(HOSTNAME=\s*).*/\1$newHostname/" /etc/sysconfig/network || : # fi - if [ -f ${cfg_install_dir}/${configFile2} ]; then - ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${cfg_install_dir}/${configFile2} + if [ -f ${configDir}/${configFile} ]; then + ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${configDir}/${configFile} else - ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${script_dir}/cfg/${configFile2} + ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${script_dir}/cfg/${configFile} fi serverFqdn=$newHostname @@ -453,11 +455,11 @@ function set_ipAsFqdn() { echo -e -n "${GREEN}Unable to get local ip, use 127.0.0.1${NC}" localFqdn="127.0.0.1" # Write the local FQDN to configuration file - - if [ -f ${cfg_install_dir}/${configFile2} ]; then - ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile2} + + if [ -f ${configDir}/${configFile} ]; then + ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" 
${configDir}/${configFile} else - ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile2} + ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile} fi serverFqdn=$localFqdn echo @@ -479,11 +481,11 @@ function set_ipAsFqdn() { if [[ $retval != 0 ]]; then read -p "Please choose an IP from local IP list:" localFqdn else - # Write the local FQDN to configuration file - if [ -f ${cfg_install_dir}/${configFile2} ]; then - ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile2} + # Write the local FQDN to configuration file + if [ -f ${configDir}/${configFile} ]; then + ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${configDir}/${configFile} else - ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile2} + ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${script_dir}/cfg/${configFile} fi serverFqdn=$localFqdn break @@ -499,91 +501,121 @@ function local_fqdn_check() { echo echo -e -n "System hostname is: ${GREEN}$serverFqdn${NC}" echo - set_hostname + set_hostname +} + +function install_taosx_config() { + [ ! -z $1 ] && return 0 || : # only install client + + fileName="${script_dir}/${xname}/etc/${PREFIX}/${xname}.toml" + if [ -f ${fileName} ]; then + ${csudo}sed -i -r "s/#*\s*(fqdn\s*=\s*).*/\1\"${serverFqdn}\"/" ${fileName} + + if [ -f "${configDir}/${xname}.toml" ]; then + ${csudo}cp ${fileName} ${configDir}/${xname}.toml.new + else + ${csudo}cp ${fileName} ${configDir}/${xname}.toml + fi + fi +} + + +function install_explorer_config() { + [ ! -z $1 ] && return 0 || : # only install client + + if [ "$verMode" == "cluster" ]; then + fileName="${script_dir}/${xname}/etc/${PREFIX}/explorer.toml" + else + fileName="${script_dir}/cfg/explorer.toml" + fi + + if [ -f ${fileName} ]; then + ${csudo}sed -i "s/localhost/${serverFqdn}/g" ${fileName} + + if [ -f "${configDir}/explorer.toml" ]; then + ${csudo}cp ${fileName} ${configDir}/explorer.toml.new + else + ${csudo}cp ${fileName} ${configDir}/explorer.toml + fi + fi } function install_adapter_config() { - if [ -f ${script_dir}/cfg/${adapterName}.toml ]; then - ${csudo}sed -i -r "s/localhost/${serverFqdn}/g" ${script_dir}/cfg/${adapterName}.toml - fi - if [ ! -f "${cfg_install_dir}/${adapterName}.toml" ]; then - ${csudo}mkdir -p ${cfg_install_dir} - [ -f ${script_dir}/cfg/${adapterName}.toml ] && ${csudo}cp ${script_dir}/cfg/${adapterName}.toml ${cfg_install_dir} - [ -f ${cfg_install_dir}/${adapterName}.toml ] && ${csudo}chmod 644 ${cfg_install_dir}/${adapterName}.toml - else - [ -f ${script_dir}/cfg/${adapterName}.toml ] && - ${csudo}cp -f ${script_dir}/cfg/${adapterName}.toml ${cfg_install_dir}/${adapterName}.toml.new - fi - - [ -f ${cfg_install_dir}/${adapterName}.toml ] && - ${csudo}ln -sf ${cfg_install_dir}/${adapterName}.toml ${install_main_dir}/cfg/${adapterName}.toml - [ ! 
-z $1 ] && return 0 || : # only install client + fileName="${script_dir}/cfg/${adapterName}.toml" + if [ -f ${fileName} ]; then + ${csudo}sed -i -r "s/localhost/${serverFqdn}/g" ${fileName} + + if [ -f "${configDir}/${adapterName}.toml" ]; then + ${csudo}cp ${fileName} ${configDir}/${adapterName}.toml.new + else + ${csudo}cp ${fileName} ${configDir}/${adapterName}.toml + fi + fi } function install_keeper_config() { - if [ -f ${script_dir}/cfg/${keeperName2}.toml ]; then - ${csudo}sed -i -r "s/127.0.0.1/${serverFqdn}/g" ${script_dir}/cfg/${keeperName2}.toml - fi - if [ -f "${configDir}/keeper.toml" ]; then - echo "The file keeper.toml will be renamed to ${keeperName2}.toml" - ${csudo}cp ${script_dir}/cfg/${keeperName2}.toml ${configDir}/${keeperName2}.toml.new - ${csudo}mv ${configDir}/keeper.toml ${configDir}/${keeperName2}.toml - elif [ -f "${configDir}/${keeperName2}.toml" ]; then - # "taoskeeper.toml exists,new config is taoskeeper.toml.new" - ${csudo}cp ${script_dir}/cfg/${keeperName2}.toml ${configDir}/${keeperName2}.toml.new - else - ${csudo}cp ${script_dir}/cfg/${keeperName2}.toml ${configDir}/${keeperName2}.toml - fi - command -v systemctl >/dev/null 2>&1 && ${csudo}systemctl daemon-reload >/dev/null 2>&1 || true -} - -function install_config() { - - if [ ! -f "${cfg_install_dir}/${configFile2}" ]; then - ${csudo}mkdir -p ${cfg_install_dir} - if [ -f ${script_dir}/cfg/${configFile2} ]; then - ${csudo} echo "monitor 1" >> ${script_dir}/cfg/${configFile2} - ${csudo} echo "monitorFQDN ${serverFqdn}" >> ${script_dir}/cfg/${configFile2} - ${csudo} echo "audit 1" >> ${script_dir}/cfg/${configFile2} - ${csudo}cp ${script_dir}/cfg/${configFile2} ${cfg_install_dir} - fi - ${csudo}chmod 644 ${cfg_install_dir}/* - else - ${csudo} echo "monitor 1" >> ${script_dir}/cfg/${configFile2} - ${csudo} echo "monitorFQDN ${serverFqdn}" >> ${script_dir}/cfg/${configFile2} - ${csudo} echo "audit 1" >> ${script_dir}/cfg/${configFile2} - ${csudo}cp -f ${script_dir}/cfg/${configFile2} ${cfg_install_dir}/${configFile2}.new - fi - - ${csudo}ln -sf ${cfg_install_dir}/${configFile2} ${install_main_dir}/cfg - [ ! -z $1 ] && return 0 || : # only install client + fileName="${script_dir}/cfg/${keeperName}.toml" + if [ -f ${fileName} ]; then + ${csudo}sed -i -r "s/127.0.0.1/${serverFqdn}/g" ${fileName} + + if [ -f "${configDir}/${keeperName}.toml" ]; then + ${csudo}cp ${fileName} ${configDir}/${keeperName}.toml.new + else + ${csudo}cp ${fileName} ${configDir}/${keeperName}.toml + fi + fi +} + +function install_taosd_config() { + fileName="${script_dir}/cfg/${configFile}" + if [ -f ${fileName} ]; then + ${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$serverFqdn/" ${script_dir}/cfg/${configFile} + ${csudo}echo "monitor 1" >>${script_dir}/cfg/${configFile} + ${csudo}echo "monitorFQDN ${serverFqdn}" >>${script_dir}/cfg/${configFile} + ${csudo}echo "audit 1" >>${script_dir}/cfg/${configFile} + + if [ -f "${configDir}/${configFile}" ]; then + ${csudo}cp ${fileName} ${configDir}/${configFile}.new + else + ${csudo}cp ${fileName} ${configDir}/${configFile} + fi + fi + + ${csudo}ln -sf ${configDir}/${configFile} ${install_main_dir}/cfg +} +function install_config() { + + [ ! 
-z $1 ] && return 0 || : # only install client + if ((${update_flag} == 1)); then return 0 fi if [ "$interactiveFqdn" == "no" ]; then + install_taosd_config return 0 fi local_fqdn_check + install_taosd_config echo - echo -e -n "${GREEN}Enter FQDN:port (like h1.${emailName2}:6030) of an existing ${productName2} cluster node to join${NC}" + echo -e -n "${GREEN}Enter FQDN:port (like h1.${emailName}:6030) of an existing ${productName} cluster node to join${NC}" echo echo -e -n "${GREEN}OR leave it blank to build one${NC}:" read firstEp while true; do if [ ! -z "$firstEp" ]; then - if [ -f ${cfg_install_dir}/${configFile2} ]; then - ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${cfg_install_dir}/${configFile2} + if [ -f ${configDir}/${configFile} ]; then + ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${configDir}/${configFile} else - ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${script_dir}/cfg/${configFile2} + ${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${script_dir}/cfg/${configFile} fi break else @@ -605,37 +637,21 @@ function install_config() { done } -function install_share_etc() { - [ ! -d ${script_dir}/share/etc ] && return - for c in `ls ${script_dir}/share/etc/`; do - if [ -e /etc/${clientName2}/$c ]; then - out=/etc/${clientName2}/$c.new.`date +%F` - ${csudo}cp -f ${script_dir}/share/etc/$c $out ||: - else - ${csudo}mkdir -p /etc/${clientName2} >/dev/null 2>/dev/null ||: - ${csudo}cp -f ${script_dir}/share/etc/$c /etc/${clientName2}/$c ||: - fi - done +function install_log() { + ${csudo}mkdir -p ${logDir} && ${csudo}chmod 777 ${logDir} - [ ! -d ${script_dir}/share/srv ] && return - ${csudo} cp ${script_dir}/share/srv/* ${service_config_dir} ||: -} - -function install_log() { - ${csudo}mkdir -p ${log_dir} && ${csudo}chmod 777 ${log_dir} - - ${csudo}ln -sf ${log_dir} ${install_main_dir}/log + ${csudo}ln -sf ${logDir} ${install_main_dir}/log } function install_data() { - ${csudo}mkdir -p ${data_dir} + ${csudo}mkdir -p ${dataDir} - ${csudo}ln -sf ${data_dir} ${install_main_dir}/data + ${csudo}ln -sf ${dataDir} ${install_main_dir}/data } function install_connector() { if [ -d "${script_dir}/connector/" ]; then - ${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/ || echo "failed to copy connector" + ${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/ || echo "failed to copy connector" ${csudo}cp ${script_dir}/start-all.sh ${install_main_dir}/ || echo "failed to copy start-all.sh" ${csudo}cp ${script_dir}/stop-all.sh ${install_main_dir}/ || echo "failed to copy stop-all.sh" ${csudo}cp ${script_dir}/README.md ${install_main_dir}/ || echo "failed to copy README.md" @@ -644,59 +660,36 @@ function install_connector() { function install_examples() { if [ -d ${script_dir}/examples ]; then - ${csudo}cp -rf ${script_dir}/examples/* ${install_main_dir}/examples || echo "failed to copy examples" + ${csudo}cp -rf ${script_dir}/examples ${install_main_dir}/ || echo "failed to copy examples" fi } -function install_web() { - if [ -d "${script_dir}/share" ]; then - ${csudo}cp -rf ${script_dir}/share/* ${install_main_dir}/share > /dev/null 2>&1 ||: - fi -} - -function install_taosx() { - if [ -f "${script_dir}/taosx/install_taosx.sh" ]; then - cd ${script_dir}/taosx - chmod a+x install_taosx.sh - bash install_taosx.sh -e $serverFqdn +function install_plugins() { + if [ -d ${script_dir}/${xname}/plugins ]; then + ${csudo}cp -rf ${script_dir}/${xname}/plugins/ ${install_main_dir}/ || echo "failed to copy ${PREFIX}x plugins" fi } function 
clean_service_on_sysvinit() { - if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then - ${csudo}service ${serverName2} stop || : - fi - - if ps aux | grep -v grep | grep tarbitrator &>/dev/null; then - ${csudo}service tarbitratord stop || : + if ps aux | grep -v grep | grep $1 &>/dev/null; then + ${csudo}service $1 stop || : fi if ((${initd_mod} == 1)); then - if [ -e ${service_config_dir}/${serverName2} ]; then - ${csudo}chkconfig --del ${serverName2} || : - fi - - if [ -e ${service_config_dir}/tarbitratord ]; then - ${csudo}chkconfig --del tarbitratord || : + if [ -e ${service_config_dir}/$1 ]; then + ${csudo}chkconfig --del $1 || : fi elif ((${initd_mod} == 2)); then - if [ -e ${service_config_dir}/${serverName2} ]; then - ${csudo}insserv -r ${serverName2} || : - fi - if [ -e ${service_config_dir}/tarbitratord ]; then - ${csudo}insserv -r tarbitratord || : + if [ -e ${service_config_dir}/$1 ]; then + ${csudo}insserv -r $1 || : fi elif ((${initd_mod} == 3)); then - if [ -e ${service_config_dir}/${serverName2} ]; then - ${csudo}update-rc.d -f ${serverName2} remove || : - fi - if [ -e ${service_config_dir}/tarbitratord ]; then - ${csudo}update-rc.d -f tarbitratord remove || : + if [ -e ${service_config_dir}/$1 ]; then + ${csudo}update-rc.d -f $1 remove || : fi fi - ${csudo}rm -f ${service_config_dir}/${serverName2} || : - ${csudo}rm -f ${service_config_dir}/tarbitratord || : + ${csudo}rm -f ${service_config_dir}/$1 || : if $(which init &>/dev/null); then ${csudo}init q || : @@ -704,96 +697,68 @@ function clean_service_on_sysvinit() { } function install_service_on_sysvinit() { - clean_service_on_sysvinit + if [ "$1" != "${serverName}" ]; then + return + fi + + clean_service_on_sysvinit $1 sleep 1 if ((${os_type} == 1)); then - # ${csudo}cp -f ${script_dir}/init.d/${serverName}.deb ${install_main_dir}/init.d/${serverName} ${csudo}cp ${script_dir}/init.d/${serverName}.deb ${service_config_dir}/${serverName} && ${csudo}chmod a+x ${service_config_dir}/${serverName} elif ((${os_type} == 2)); then - # ${csudo}cp -f ${script_dir}/init.d/${serverName}.rpm ${install_main_dir}/init.d/${serverName} ${csudo}cp ${script_dir}/init.d/${serverName}.rpm ${service_config_dir}/${serverName} && ${csudo}chmod a+x ${service_config_dir}/${serverName} fi if ((${initd_mod} == 1)); then - ${csudo}chkconfig --add ${serverName2} || : - ${csudo}chkconfig --level 2345 ${serverName2} on || : + ${csudo}chkconfig --add $1 || : + ${csudo}chkconfig --level 2345 $1 on || : elif ((${initd_mod} == 2)); then - ${csudo}insserv ${serverName2} || : - ${csudo}insserv -d ${serverName2} || : + ${csudo}insserv $1 || : + ${csudo}insserv -d $1 || : elif ((${initd_mod} == 3)); then - ${csudo}update-rc.d ${serverName2} defaults || : + ${csudo}update-rc.d $1 defaults || : fi } function clean_service_on_systemd() { - service_config="${service_config_dir}/${serverName2}.service" - if systemctl is-active --quiet ${serverName2}; then - echo "${productName} is running, stopping it..." - ${csudo}systemctl stop ${serverName2} &>/dev/null || echo &>/dev/null - fi - ${csudo}systemctl disable ${serverName2} &>/dev/null || echo &>/dev/null - ${csudo}rm -f ${service_config} + service_config="${service_config_dir}/$1.service" - tarbitratord_service_config="${service_config_dir}/tarbitratord.service" - if systemctl is-active --quiet tarbitratord; then - echo "tarbitrator is running, stopping it..." 
- ${csudo}systemctl stop tarbitratord &>/dev/null || echo &>/dev/null + if systemctl is-active --quiet $1; then + echo "$1 is running, stopping it..." + ${csudo}systemctl stop $1 &>/dev/null || echo &>/dev/null fi - ${csudo}systemctl disable tarbitratord &>/dev/null || echo &>/dev/null - ${csudo}rm -f ${tarbitratord_service_config} + ${csudo}systemctl disable $1 &>/dev/null || echo &>/dev/null + ${csudo}rm -f ${service_config} } function install_service_on_systemd() { - clean_service_on_systemd + clean_service_on_systemd $1 - install_share_etc - - [ -f ${script_dir}/cfg/${serverName2}.service ] && - ${csudo}cp ${script_dir}/cfg/${serverName2}.service \ - ${service_config_dir}/ || : - - # if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then - # [ -f ${script_dir}/cfg/${serverName2}.service ] && - # ${csudo}cp ${script_dir}/cfg/${serverName2}.service \ - # ${service_config_dir}/${serverName2}.service || : - # fi - - ${csudo}systemctl daemon-reload - - ${csudo}systemctl enable ${serverName2} - ${csudo}systemctl daemon-reload -} - -function install_adapter_service() { - if ((${service_mod} == 0)); then - [ -f ${script_dir}/cfg/${adapterName2}.service ] && - ${csudo}cp ${script_dir}/cfg/${adapterName2}.service \ - ${service_config_dir}/ || : - - ${csudo}systemctl enable ${adapterName2} - ${csudo}systemctl daemon-reload + cfg_source_dir=${script_dir}/cfg + if [[ "$1" == "${xname}" || "$1" == "${explorerName}" ]]; then + if [ "$verMode" == "cluster" ]; then + cfg_source_dir=${script_dir}/${xname}/etc/systemd/system + else + cfg_source_dir=${script_dir}/cfg + fi fi -} -function install_keeper_service() { - if ((${service_mod} == 0)); then - [ -f ${script_dir}/cfg/${clientName2}keeper.service ] && - ${csudo}cp ${script_dir}/cfg/${clientName2}keeper.service \ - ${service_config_dir}/ || : - - ${csudo}systemctl enable ${clientName2}keeper - ${csudo}systemctl daemon-reload + if [ -f ${cfg_source_dir}/$1.service ]; then + ${csudo}cp ${cfg_source_dir}/$1.service ${service_config_dir}/ || : fi + + ${csudo}systemctl enable $1 + ${csudo}systemctl daemon-reload } function install_service() { if ((${service_mod} == 0)); then - install_service_on_systemd + install_service_on_systemd $1 elif ((${service_mod} == 1)); then - install_service_on_sysvinit + install_service_on_sysvinit $1 else - kill_process ${serverName2} + kill_process $1 fi } @@ -830,10 +795,10 @@ function is_version_compatible() { if [ -f ${script_dir}/driver/vercomp.txt ]; then min_compatible_version=$(cat ${script_dir}/driver/vercomp.txt) else - min_compatible_version=$(${script_dir}/bin/${serverName2} -V | head -1 | cut -d ' ' -f 5) + min_compatible_version=$(${script_dir}/bin/${serverName} -V | head -1 | cut -d ' ' -f 5) fi - exist_version=$(${installDir}/bin/${serverName2} -V | head -1 | cut -d ' ' -f 3) + exist_version=$(${installDir}/bin/${serverName} -V | head -1 | cut -d ' ' -f 3) vercomp $exist_version "3.0.0.0" case $? in 2) @@ -857,7 +822,7 @@ deb_erase() { echo -e -n "${RED}Existing TDengine deb is detected, do you want to remove it? [yes|no] ${NC}:" read confirm if [ "yes" == "$confirm" ]; then - ${csudo}dpkg --remove tdengine ||: + ${csudo}dpkg --remove tdengine || : break elif [ "no" == "$confirm" ]; then break @@ -871,7 +836,7 @@ rpm_erase() { echo -e -n "${RED}Existing TDengine rpm is detected, do you want to remove it? 
[yes|no] ${NC}:" read confirm if [ "yes" == "$confirm" ]; then - ${csudo}rpm -e tdengine ||: + ${csudo}rpm -e tdengine || : break elif [ "no" == "$confirm" ]; then break @@ -893,23 +858,23 @@ function updateProduct() { fi if echo $osinfo | grep -qwi "centos"; then - rpm -q tdengine 2>&1 > /dev/null && rpm_erase tdengine ||: + rpm -q tdengine 2>&1 >/dev/null && rpm_erase tdengine || : elif echo $osinfo | grep -qwi "ubuntu"; then - dpkg -l tdengine 2>&1 | grep ii > /dev/null && deb_erase tdengine ||: + dpkg -l tdengine 2>&1 | grep ii >/dev/null && deb_erase tdengine || : fi tar -zxf ${tarName} install_jemalloc - echo "Start to update ${productName2}..." + echo "Start to update ${productName}..." # Stop the service if running - if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then + if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then if ((${service_mod} == 0)); then - ${csudo}systemctl stop ${serverName2} || : + ${csudo}systemctl stop ${serverName} || : elif ((${service_mod} == 1)); then - ${csudo}service ${serverName2} stop || : + ${csudo}service ${serverName} stop || : else - kill_process ${serverName2} + kill_process ${serverName} fi sleep 1 fi @@ -922,82 +887,73 @@ function updateProduct() { install_config if [ "$verMode" == "cluster" ]; then - install_connector - install_taosx + install_connector + install_plugins fi install_examples - install_web if [ -z $1 ]; then install_bin - install_service - install_adapter_service - install_adapter_config - install_keeper_service - if [ "${verMode}" != "cloud" ]; then - install_keeper_config + install_services + + if [ "${pagMode}" != "lite" ]; then + install_adapter_config + install_taosx_config + install_explorer_config + if [ "${verMode}" != "cloud" ]; then + install_keeper_config + fi fi openresty_work=false echo - echo -e "${GREEN_DARK}To configure ${productName2} ${NC}\t\t: edit ${cfg_install_dir}/${configFile2}" - [ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}\t: edit ${configDir}/${clientName2}adapter.toml" + echo -e "${GREEN_DARK}To configure ${productName} ${NC}\t\t: edit ${configDir}/${configFile}" + [ -f ${configDir}/${adapterName}.toml ] && [ -f ${installDir}/bin/${adapterName} ] && + echo -e "${GREEN_DARK}To configure ${adapterName} ${NC}\t: edit ${configDir}/${adapterName}.toml" if [ "$verMode" == "cluster" ]; then - echo -e "${GREEN_DARK}To configure ${clientName2}-explorer ${NC}\t: edit ${configDir}/explorer.toml" + echo -e "${GREEN_DARK}To configure ${explorerName} ${NC}\t: edit ${configDir}/explorer.toml" fi if ((${service_mod} == 0)); then - echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}systemctl start ${serverName2}${NC}" - [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName2}adapter ${NC}" + echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}systemctl start ${serverName}${NC}" + [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] && + echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName}adapter ${NC}" elif ((${service_mod} == 1)); then - echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}service ${serverName2} start${NC}" - [ -f 
${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}service ${clientName2}adapter start${NC}" + echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}service ${serverName} start${NC}" + [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] && + echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}service ${clientName}adapter start${NC}" else - echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ./${serverName2}${NC}" - [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${clientName2}adapter ${NC}" - fi - - echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}\t\t: sudo systemctl enable ${clientName2}keeper ${NC}" - if [ "$verMode" == "cluster" ];then - echo -e "${GREEN_DARK}To start ${clientName2}x ${NC}\t\t\t: sudo systemctl start ${clientName2}x ${NC}" - echo -e "${GREEN_DARK}To start ${clientName2}-explorer ${NC}\t\t: sudo systemctl start ${clientName2}-explorer ${NC}" + echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ./${serverName}${NC}" + [ -f ${installDir}/bin/${clientName}adapter ] && + echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${clientName}adapter ${NC}" fi - # if [ ${openresty_work} = 'true' ]; then - # echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell OR from ${GREEN_UNDERLINE}http://127.0.0.1:${web_port}${NC}" - # else - # echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell${NC}" - # fi - - # if ((${prompt_force} == 1)); then - # echo "" - # echo -e "${RED}Please run '${serverName2} --force-keep-file' at first time for the exist ${productName2} $exist_version!${NC}" - # fi + echo -e "${GREEN_DARK}To enable ${clientName}keeper ${NC}\t\t: sudo systemctl enable ${clientName}keeper ${NC}" + if [ "$verMode" == "cluster" ]; then + echo -e "${GREEN_DARK}To start ${clientName}x ${NC}\t\t\t: sudo systemctl start ${clientName}x ${NC}" + echo -e "${GREEN_DARK}To start ${clientName}-explorer ${NC}\t\t: sudo systemctl start ${clientName}-explorer ${NC}" + fi echo - echo "${productName2} is updated successfully!" + echo "${productName} is updated successfully!" echo - if [ "$verMode" == "cluster" ];then + if [ "$verMode" == "cluster" ]; then echo -e "\033[44;32;1mTo start all the components : ./start-all.sh${NC}" fi - echo -e "\033[44;32;1mTo access ${productName2} : ${clientName2} -h $serverFqdn${NC}" - if [ "$verMode" == "cluster" ];then + echo -e "\033[44;32;1mTo access ${productName} : ${clientName} -h $serverFqdn${NC}" + if [ "$verMode" == "cluster" ]; then echo -e "\033[44;32;1mTo access the management system : http://$serverFqdn:6060${NC}" echo -e "\033[44;32;1mTo read the user manual : http://$serverFqdn:6060/docs${NC}" fi else - install_bin + install_bin echo - echo -e "\033[44;32;1m${productName2} client is updated successfully!${NC}" + echo -e "\033[44;32;1m${productName} client is updated successfully!${NC}" fi cd $script_dir - rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/") + rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/") } function installProduct() { @@ -1008,7 +964,7 @@ function installProduct() { fi tar -zxf ${tarName} - echo "Start to install ${productName2}..." + echo "Start to install ${productName}..." 
install_main_path @@ -1025,105 +981,114 @@ function installProduct() { install_config if [ "$verMode" == "cluster" ]; then - install_connector - install_taosx + install_connector + install_plugins fi - install_examples - install_web + install_examples + if [ -z $1 ]; then # install service and client # For installing new install_bin - install_service - install_adapter_service - install_adapter_config - install_keeper_service - if [ "${verMode}" != "cloud" ]; then - install_keeper_config - fi - openresty_work=false + install_services + if [ "${pagMode}" != "lite" ]; then + install_adapter_config + install_taosx_config + install_explorer_config + if [ "${verMode}" != "cloud" ]; then + install_keeper_config + fi + fi + + openresty_work=false # Ask if to start the service echo - echo -e "${GREEN_DARK}To configure ${productName2} ${NC}\t\t: edit ${cfg_install_dir}/${configFile2}" - [ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}\t: edit ${configDir}/${clientName2}adapter.toml" + echo -e "${GREEN_DARK}To configure ${productName} ${NC}\t\t: edit ${configDir}/${configFile}" + [ -f ${configDir}/${clientName}adapter.toml ] && [ -f ${installDir}/bin/${clientName}adapter ] && + echo -e "${GREEN_DARK}To configure ${clientName}Adapter ${NC}\t: edit ${configDir}/${clientName}adapter.toml" if [ "$verMode" == "cluster" ]; then - echo -e "${GREEN_DARK}To configure ${clientName2}-explorer ${NC}\t: edit ${configDir}/explorer.toml" + echo -e "${GREEN_DARK}To configure ${clientName}-explorer ${NC}\t: edit ${configDir}/explorer.toml" fi if ((${service_mod} == 0)); then - echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}systemctl start ${serverName2}${NC}" - [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName2}adapter ${NC}" + echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}systemctl start ${serverName}${NC}" + [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] && + echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}systemctl start ${clientName}adapter ${NC}" elif ((${service_mod} == 1)); then - echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${csudo}service ${serverName2} start${NC}" - [ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${csudo}service ${clientName2}adapter start${NC}" + echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${csudo}service ${serverName} start${NC}" + [ -f ${service_config_dir}/${clientName}adapter.service ] && [ -f ${installDir}/bin/${clientName}adapter ] && + echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${csudo}service ${clientName}adapter start${NC}" else - echo -e "${GREEN_DARK}To start ${productName2} ${NC}\t\t: ${serverName2}${NC}" - [ -f ${installDir}/bin/${clientName2}adapter ] && \ - echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}\t\t: ${clientName2}adapter ${NC}" + echo -e "${GREEN_DARK}To start ${productName} ${NC}\t\t: ${serverName}${NC}" + [ -f ${installDir}/bin/${clientName}adapter ] && + echo -e "${GREEN_DARK}To start ${clientName}Adapter ${NC}\t\t: ${clientName}adapter ${NC}" fi - echo -e "${GREEN_DARK}To 
enable ${clientName2}keeper ${NC}\t\t: sudo systemctl enable ${clientName2}keeper ${NC}" - - if [ "$verMode" == "cluster" ];then - echo -e "${GREEN_DARK}To start ${clientName2}x ${NC}\t\t\t: sudo systemctl start ${clientName2}x ${NC}" - echo -e "${GREEN_DARK}To start ${clientName2}-explorer ${NC}\t\t: sudo systemctl start ${clientName2}-explorer ${NC}" + echo -e "${GREEN_DARK}To enable ${clientName}keeper ${NC}\t\t: sudo systemctl enable ${clientName}keeper ${NC}" + + if [ "$verMode" == "cluster" ]; then + echo -e "${GREEN_DARK}To start ${clientName}x ${NC}\t\t\t: sudo systemctl start ${clientName}x ${NC}" + echo -e "${GREEN_DARK}To start ${clientName}-explorer ${NC}\t\t: sudo systemctl start ${clientName}-explorer ${NC}" fi - # if [ ! -z "$firstEp" ]; then - # tmpFqdn=${firstEp%%:*} - # substr=":" - # if [[ $firstEp =~ $substr ]]; then - # tmpPort=${firstEp#*:} - # else - # tmpPort="" - # fi - # if [[ "$tmpPort" != "" ]]; then - # echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: ${clientName2} -h $tmpFqdn -P $tmpPort${GREEN_DARK} to login into cluster, then${NC}" - # else - # echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: ${clientName2} -h $tmpFqdn${GREEN_DARK} to login into cluster, then${NC}" - # fi - # echo -e "${GREEN_DARK}execute ${NC}: create dnode 'newDnodeFQDN:port'; ${GREEN_DARK}to add this new node${NC}" - # echo - # elif [ ! -z "$serverFqdn" ]; then - # echo -e "${GREEN_DARK}To access ${productName2} ${NC}\t\t: ${clientName2} -h $serverFqdn${GREEN_DARK} to login into ${productName2} server${NC}" - # echo - # fi echo - echo "${productName2} is installed successfully!" + echo "${productName} is installed successfully!" echo - if [ "$verMode" == "cluster" ];then + if [ "$verMode" == "cluster" ]; then echo -e "\033[44;32;1mTo start all the components : sudo ./start-all.sh${NC}" fi - echo -e "\033[44;32;1mTo access ${productName2} : ${clientName2} -h $serverFqdn${NC}" - if [ "$verMode" == "cluster" ];then + echo -e "\033[44;32;1mTo access ${productName} : ${clientName} -h $serverFqdn${NC}" + if [ "$verMode" == "cluster" ]; then echo -e "\033[44;32;1mTo access the management system : http://$serverFqdn:6060${NC}" echo -e "\033[44;32;1mTo read the user manual : http://$serverFqdn:6060/docs-en${NC}" fi echo else # Only install client install_bin - + echo - echo -e "\033[44;32;1m${productName2} client is installed successfully!${NC}" + echo -e "\033[44;32;1m${productName} client is installed successfully!${NC}" fi - + cd $script_dir touch ~/.${historyFile} rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/") } +check_java_env() { + if ! command -v java &> /dev/null + then + echo -e "\033[31mWarning: Java command not found. Version 1.8+ is required.\033[0m" + return + fi + + java_version=$(java -version 2>&1 | awk -F '"' '/version/ {print $2}') + java_version_ok=false + if [[ $(echo "$java_version" | cut -d"." -f1) -gt 1 ]]; then + java_version_ok=true + elif [[ $(echo "$java_version" | cut -d"." -f1) -eq 1 && $(echo "$java_version" | cut -d"." -f2) -ge 8 ]]; then + java_version_ok=true + fi + + if $java_version_ok; then + echo -e "\033[32mJava ${java_version} has been found.\033[0m" + else + echo -e "\033[31mWarning: Java Version 1.8+ is required, but version ${java_version} has been found.\033[0m" + fi +} + ## ==============================Main program starts from here============================ serverFqdn=$(hostname) if [ "$verType" == "server" ]; then + if [ -x ${script_dir}/${xname}/bin/${xname} ]; then + check_java_env + fi # Check default 2.x data file. 
- if [ -x ${data_dir}/dnode/dnodeCfg.json ]; then - echo -e "\033[44;31;5mThe default data directory ${data_dir} contains old data of ${productName2} 2.x, please clear it before installing!\033[0m" + if [ -x ${dataDir}/dnode/dnodeCfg.json ]; then + echo -e "\033[44;31;5mThe default data directory ${dataDir} contains old data of ${productName} 2.x, please clear it before installing!\033[0m" else # Install server and client - if [ -x ${bin_dir}/${serverName2} ]; then + if [ -x ${bin_dir}/${serverName} ]; then update_flag=1 updateProduct else @@ -1133,7 +1098,7 @@ if [ "$verType" == "server" ]; then elif [ "$verType" == "client" ]; then interactiveFqdn=no # Only install client - if [ -x ${bin_dir}/${clientName2} ]; then + if [ -x ${bin_dir}/${clientName} ]; then update_flag=1 updateProduct client else @@ -1142,5 +1107,3 @@ elif [ "$verType" == "client" ]; then else echo "please input correct verType" fi - - diff --git a/packaging/tools/makepkg.sh b/packaging/tools/makepkg.sh index 4b0faaa958..3744892526 100755 --- a/packaging/tools/makepkg.sh +++ b/packaging/tools/makepkg.sh @@ -231,12 +231,8 @@ fi if [ "$verMode" == "cluster" ]; then sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/bin/remove.sh >>remove_temp.sh - sed -i "s/serverName2=\"taosd\"/serverName2=\"${serverName2}\"/g" remove_temp.sh - sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" remove_temp.sh - sed -i "s/configFile2=\"taos.cfg\"/configFile2=\"${clientName2}.cfg\"/g" remove_temp.sh - sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" remove_temp.sh - cusDomain=`echo "${cusEmail2}" | sed 's/^[^@]*@//'` - sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusDomain}\"/g" remove_temp.sh + sed -i "s/PREFIX=\"taos\"/PREFIX=\"${serverName2}\"/g" remove_temp.sh + sed -i "s/productName=\"TDengine\"/productName=\"${productName2}\"/g" remove_temp.sh mv remove_temp.sh ${install_dir}/bin/remove.sh fi if [ "$verMode" == "cloud" ]; then @@ -262,12 +258,10 @@ cp ${install_files} ${install_dir} cp ${install_dir}/install.sh install_temp.sh if [ "$verMode" == "cluster" ]; then sed -i 's/verMode=edge/verMode=cluster/g' install_temp.sh - sed -i "s/serverName2=\"taosd\"/serverName2=\"${serverName2}\"/g" install_temp.sh - sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" install_temp.sh - sed -i "s/configFile2=\"taos.cfg\"/configFile2=\"${clientName2}.cfg\"/g" install_temp.sh - sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" install_temp.sh + sed -i "s/PREFIX=\"taos\"/PREFIX=\"${serverName2}\"/g" install_temp.sh + sed -i "s/productName=\"TDengine\"/productName=\"${productName2}\"/g" install_temp.sh cusDomain=`echo "${cusEmail2}" | sed 's/^[^@]*@//'` - sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusDomain}\"/g" install_temp.sh + sed -i "s/emailName=\"taosdata.com\"/emailName=\"${cusDomain}\"/g" install_temp.sh mv install_temp.sh ${install_dir}/install.sh fi if [ "$verMode" == "cloud" ]; then @@ -367,8 +361,7 @@ if [ "$verMode" == "cluster" ]; then # copy taosx if [ -d ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ]; then - cp -r ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ${install_dir} - cp ${top_dir}/../enterprise/packaging/install_taosx.sh ${install_dir}/taosx + cp -r ${top_dir}/../enterprise/src/plugins/taosx/release/taosx ${install_dir} cp ${top_dir}/../enterprise/src/plugins/taosx/packaging/uninstall.sh ${install_dir}/taosx sed -i 's/target=\"\"/target=\"taosx\"/g' ${install_dir}/taosx/uninstall.sh fi diff --git 
a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index 7f73aa6845..f37e9851e1 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -770,6 +770,32 @@ int taos_init() { return tscInitRes; } +const char* getCfgName(TSDB_OPTION option) { + const char* name = NULL; + + switch (option) { + case TSDB_OPTION_SHELL_ACTIVITY_TIMER: + name = "shellActivityTimer"; + break; + case TSDB_OPTION_LOCALE: + name = "locale"; + break; + case TSDB_OPTION_CHARSET: + name = "charset"; + break; + case TSDB_OPTION_TIMEZONE: + name = "timezone"; + break; + case TSDB_OPTION_USE_ADAPTER: + name = "useAdapter"; + break; + default: + break; + } + + return name; +} + int taos_options_imp(TSDB_OPTION option, const char *str) { if (option == TSDB_OPTION_CONFIGDIR) { #ifndef WINDOWS @@ -799,39 +825,26 @@ int taos_options_imp(TSDB_OPTION option, const char *str) { SConfig *pCfg = taosGetCfg(); SConfigItem *pItem = NULL; + const char *name = getCfgName(option); - switch (option) { - case TSDB_OPTION_SHELL_ACTIVITY_TIMER: - pItem = cfgGetItem(pCfg, "shellActivityTimer"); - break; - case TSDB_OPTION_LOCALE: - pItem = cfgGetItem(pCfg, "locale"); - break; - case TSDB_OPTION_CHARSET: - pItem = cfgGetItem(pCfg, "charset"); - break; - case TSDB_OPTION_TIMEZONE: - pItem = cfgGetItem(pCfg, "timezone"); - break; - case TSDB_OPTION_USE_ADAPTER: - pItem = cfgGetItem(pCfg, "useAdapter"); - break; - default: - break; + if (name == NULL) { + tscError("Invalid option %d", option); + return -1; } + pItem = cfgGetItem(pCfg, name); if (pItem == NULL) { tscError("Invalid option %d", option); return -1; } - int code = cfgSetItem(pCfg, pItem->name, str, CFG_STYPE_TAOS_OPTIONS); + int code = cfgSetItem(pCfg, name, str, CFG_STYPE_TAOS_OPTIONS); if (code != 0) { - tscError("failed to set cfg:%s to %s since %s", pItem->name, str, terrstr()); + tscError("failed to set cfg:%s to %s since %s", name, str, terrstr()); } else { - tscInfo("set cfg:%s to %s", pItem->name, str); + tscInfo("set cfg:%s to %s", name, str); if (TSDB_OPTION_SHELL_ACTIVITY_TIMER == option || TSDB_OPTION_USE_ADAPTER == option) { - code = taosCfgDynamicOptions(pCfg, pItem->name, false); + code = taosCfgDynamicOptions(pCfg, name, false); } } diff --git a/source/client/src/clientRawBlockWrite.c b/source/client/src/clientRawBlockWrite.c index b04e81c193..427e238a7b 100644 --- a/source/client/src/clientRawBlockWrite.c +++ b/source/client/src/clientRawBlockWrite.c @@ -115,6 +115,30 @@ static char* buildCreateTableJson(SSchemaWrapper* schemaRow, SSchemaWrapper* sch return string; } +static int32_t setCompressOption(cJSON* json, uint32_t para) { + uint8_t encode = COMPRESS_L1_TYPE_U32(para); + if (encode != 0) { + const char* encodeStr = columnEncodeStr(encode); + cJSON* encodeJson = cJSON_CreateString(encodeStr); + cJSON_AddItemToObject(json, "encode", encodeJson); + return 0; + } + uint8_t compress = COMPRESS_L2_TYPE_U32(para); + if (compress != 0) { + const char* compressStr = columnCompressStr(compress); + cJSON* compressJson = cJSON_CreateString(compressStr); + cJSON_AddItemToObject(json, "compress", compressJson); + return 0; + } + uint8_t level = COMPRESS_L2_TYPE_LEVEL_U32(para); + if (level != 0) { + const char* levelStr = columnLevelStr(level); + cJSON* levelJson = cJSON_CreateString(levelStr); + cJSON_AddItemToObject(json, "level", levelJson); + return 0; + } + return 0; +} static char* buildAlterSTableJson(void* alterData, int32_t alterDataLen) { SMAlterStbReq req = {0}; cJSON* json = NULL; @@ -199,6 +223,13 @@ static char* 
buildAlterSTableJson(void* alterData, int32_t alterDataLen) { cJSON_AddItemToObject(json, "colNewName", colNewName); break; } + case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS: { + TAOS_FIELD* field = taosArrayGet(req.pFields, 0); + cJSON* colName = cJSON_CreateString(field->name); + cJSON_AddItemToObject(json, "colName", colName); + setCompressOption(json, field->bytes); + break; + } default: break; } @@ -568,6 +599,12 @@ static char* processAlterTable(SMqMetaRsp* metaRsp) { cJSON_AddItemToObject(json, "colValueNull", isNullCJson); break; } + case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS: { + cJSON* colName = cJSON_CreateString(vAlterTbReq.colName); + cJSON_AddItemToObject(json, "colName", colName); + setCompressOption(json, vAlterTbReq.compress); + break; + } default: break; } diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 0802ec672a..6225cf703c 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -832,7 +832,7 @@ static int32_t smlFindNearestPowerOf2(int32_t length, uint8_t type) { return result; } -static int32_t smlProcessSchemaAction(SSmlHandle *info, SSchema *schemaField, SHashObj *schemaHash, SArray *cols, +static int32_t smlProcessSchemaAction(SSmlHandle *info, SSchema *schemaField, SHashObj *schemaHash, SArray *cols, SArray *checkDumplicateCols, ESchemaAction *action, bool isTag) { int32_t code = TSDB_CODE_SUCCESS; for (int j = 0; j < taosArrayGetSize(cols); ++j) { @@ -843,6 +843,13 @@ static int32_t smlProcessSchemaAction(SSmlHandle *info, SSchema *schemaField, SH return code; } } + + for (int j = 0; j < taosArrayGetSize(checkDumplicateCols); ++j) { + SSmlKv *kv = (SSmlKv *)taosArrayGet(checkDumplicateCols, j); + if(taosHashGet(schemaHash, kv->key, kv->keyLen) != NULL){ + return TSDB_CODE_PAR_DUPLICATED_COLUMN; + } + } return TSDB_CODE_SUCCESS; } @@ -1106,7 +1113,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) { } ESchemaAction action = SCHEMA_ACTION_NULL; - code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->tags, &action, true); + code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->tags, sTableData->cols, &action, true); if (code != TSDB_CODE_SUCCESS) { goto end; } @@ -1181,7 +1188,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) { taosHashPut(hashTmp, pTableMeta->schema[i].name, strlen(pTableMeta->schema[i].name), &i, SHORT_BYTES); } action = SCHEMA_ACTION_NULL; - code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->cols, &action, false); + code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->cols, sTableData->tags, &action, false); if (code != TSDB_CODE_SUCCESS) { goto end; } @@ -1290,17 +1297,24 @@ end: return code; } -static void smlInsertMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols) { +static int32_t smlInsertMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols, SHashObj *checkDuplicate) { + terrno = 0; for (int16_t i = 0; i < taosArrayGetSize(cols); ++i) { SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, i); int ret = taosHashPut(metaHash, kv->key, kv->keyLen, &i, SHORT_BYTES); if (ret == 0) { taosArrayPush(metaArray, kv); + if(taosHashGet(checkDuplicate, kv->key, kv->keyLen) != NULL) { + return TSDB_CODE_PAR_DUPLICATED_COLUMN; + } + }else if(terrno == TSDB_CODE_DUP_KEY){ + return TSDB_CODE_PAR_DUPLICATED_COLUMN; } } + return TSDB_CODE_SUCCESS; } -static int32_t smlUpdateMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols, bool isTag, SSmlMsgBuf *msg) { +static int32_t 
smlUpdateMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols, bool isTag, SSmlMsgBuf *msg, SHashObj* checkDuplicate) { for (int i = 0; i < taosArrayGetSize(cols); ++i) { SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, i); @@ -1332,6 +1346,11 @@ static int32_t smlUpdateMeta(SHashObj *metaHash, SArray *metaArray, SArray *cols int ret = taosHashPut(metaHash, kv->key, kv->keyLen, &size, SHORT_BYTES); if (ret == 0) { taosArrayPush(metaArray, kv); + if(taosHashGet(checkDuplicate, kv->key, kv->keyLen) != NULL) { + return TSDB_CODE_PAR_DUPLICATED_COLUMN; + } + }else{ + return ret; } } } @@ -1456,7 +1475,7 @@ static int32_t smlPushCols(SArray *colsArray, SArray *cols) { taosHashPut(kvHash, kv->key, kv->keyLen, &kv, POINTER_BYTES); if (terrno == TSDB_CODE_DUP_KEY) { taosHashCleanup(kvHash); - return terrno; + return TSDB_CODE_PAR_DUPLICATED_COLUMN; } } @@ -1512,12 +1531,12 @@ static int32_t smlParseLineBottom(SSmlHandle *info) { if (tableMeta) { // update meta uDebug("SML:0x%" PRIx64 " smlParseLineBottom update meta, format:%d, linenum:%d", info->id, info->dataFormat, info->lineNum); - ret = smlUpdateMeta((*tableMeta)->colHash, (*tableMeta)->cols, elements->colArray, false, &info->msgBuf); + ret = smlUpdateMeta((*tableMeta)->colHash, (*tableMeta)->cols, elements->colArray, false, &info->msgBuf, (*tableMeta)->tagHash); if (ret == TSDB_CODE_SUCCESS) { - ret = smlUpdateMeta((*tableMeta)->tagHash, (*tableMeta)->tags, tinfo->tags, true, &info->msgBuf); + ret = smlUpdateMeta((*tableMeta)->tagHash, (*tableMeta)->tags, tinfo->tags, true, &info->msgBuf, (*tableMeta)->colHash); } if (ret != TSDB_CODE_SUCCESS) { - uError("SML:0x%" PRIx64 " smlUpdateMeta failed", info->id); + uError("SML:0x%" PRIx64 " smlUpdateMeta failed, ret:%d", info->id, ret); return ret; } } else { @@ -1527,13 +1546,19 @@ static int32_t smlParseLineBottom(SSmlHandle *info) { if (meta == NULL) { return TSDB_CODE_OUT_OF_MEMORY; } - taosHashPut(info->superTables, elements->measure, elements->measureLen, &meta, POINTER_BYTES); - terrno = 0; - smlInsertMeta(meta->tagHash, meta->tags, tinfo->tags); - if (terrno == TSDB_CODE_DUP_KEY) { - return terrno; + ret = taosHashPut(info->superTables, elements->measure, elements->measureLen, &meta, POINTER_BYTES); + if (ret != TSDB_CODE_SUCCESS) { + uError("SML:0x%" PRIx64 " put measuer to hash failed", info->id); + return ret; + } + ret = smlInsertMeta(meta->tagHash, meta->tags, tinfo->tags, NULL); + if (ret == TSDB_CODE_SUCCESS) { + ret = smlInsertMeta(meta->colHash, meta->cols, elements->colArray, meta->tagHash); + } + if (ret != TSDB_CODE_SUCCESS) { + uError("SML:0x%" PRIx64 " insert meta failed:%s", info->id, tstrerror(ret)); + return ret; } - smlInsertMeta(meta->colHash, meta->cols, elements->colArray); } } uDebug("SML:0x%" PRIx64 " smlParseLineBottom end, format:%d, linenum:%d", info->id, info->dataFormat, info->lineNum); diff --git a/source/client/test/clientTests.cpp b/source/client/test/clientTests.cpp index 778b1826b4..1498476634 100644 --- a/source/client/test/clientTests.cpp +++ b/source/client/test/clientTests.cpp @@ -827,10 +827,18 @@ TEST(clientCase, projection_query_tables) { // } // taos_free_result(pRes); - TAOS_RES* pRes = taos_query(pConn, "use test"); + TAOS_RES* pRes = taos_query(pConn, "alter local 'fqdn 127.0.0.1'"); + if (taos_errno(pRes) != 0) { + printf("failed to exec query, %s\n", taos_errstr(pRes)); + } + taos_free_result(pRes); - pRes = taos_query(pConn, "create table st2 (ts timestamp, k int primary key, j varchar(1000)) tags(a int)"); + pRes = taos_query(pConn, "select 
last(ts), ts from cache_1.t1"); +// pRes = taos_query(pConn, "select last(ts), ts from cache_1.no_pk_t1"); + if (taos_errno(pRes) != 0) { + printf("failed to exec query, %s\n", taos_errstr(pRes)); + } taos_free_result(pRes); // pRes = taos_query(pConn, "create stream stream_1 trigger at_once fill_history 1 ignore expired 0 into str_res1 as select _wstart as ts, count(*) from stable_1 interval(10s);"); diff --git a/source/common/src/systable.c b/source/common/src/systable.c index 4dfe9d900f..14e8088dfe 100644 --- a/source/common/src/systable.c +++ b/source/common/src/systable.c @@ -191,8 +191,8 @@ static const SSysDbTableSchema streamTaskSchema[] = { {.name = "start_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, {.name = "start_ver", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, {.name = "checkpoint_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false}, - {.name = "checkpoint_id", .bytes = 25, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, - {.name = "checkpoint_version", .bytes = 25, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "checkpoint_id", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, + {.name = "checkpoint_version", .bytes = 8, .type = TSDB_DATA_TYPE_BIGINT, .sysInfo = false}, {.name = "ds_err_info", .bytes = 25, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, {.name = "history_task_id", .bytes = 16 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, {.name = "history_task_status", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false}, diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index 24f00ab6a5..ba96dc0adf 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -1032,7 +1032,7 @@ static void taosSetServerLogCfg(SConfig *pCfg) { sndDebugFlag = cfgGetItem(pCfg, "sndDebugFlag")->i32; } -static int32_t taosSetSlowLogScope(char *pScope) { +static int32_t taosSetSlowLogScope(const char *pScope) { if (NULL == pScope || 0 == strlen(pScope)) { tsSlowLogScope = SLOW_LOG_TYPE_ALL; return 0; @@ -1505,7 +1505,7 @@ static int32_t taosCfgSetOption(OptionNameAndVar *pOptions, int32_t optionSize, return terrno == TSDB_CODE_SUCCESS ? 0 : -1; } -static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, char *name) { +static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) { terrno = TSDB_CODE_SUCCESS; if (strcasecmp(name, "resetlog") == 0) { @@ -1583,11 +1583,12 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, char *name) { return terrno == TSDB_CODE_SUCCESS ? 0 : -1; } -static int32_t taosCfgDynamicOptionsForClient(SConfig *pCfg, char *name) { +// todo fix race condition caused by update of config, pItem->str may be removed +static int32_t taosCfgDynamicOptionsForClient(SConfig *pCfg, const char *name) { terrno = TSDB_CODE_SUCCESS; SConfigItem *pItem = cfgGetItem(pCfg, name); - if (!pItem || (pItem->dynScope & CFG_DYN_CLIENT) == 0) { + if ((pItem == NULL) || (pItem->dynScope & CFG_DYN_CLIENT) == 0) { uError("failed to config:%s, not support", name); terrno = TSDB_CODE_INVALID_CFG; return -1; @@ -1598,6 +1599,7 @@ static int32_t taosCfgDynamicOptionsForClient(SConfig *pCfg, char *name) { int32_t len = strlen(name); char lowcaseName[CFG_NAME_MAX_LEN + 1] = {0}; strntolower(lowcaseName, name, TMIN(CFG_NAME_MAX_LEN, len)); + switch (lowcaseName[0]) { case 'd': { if (strcasecmp("debugFlag", name) == 0) { @@ -1803,9 +1805,12 @@ _out: return terrno == TSDB_CODE_SUCCESS ? 
0 : -1; } -int32_t taosCfgDynamicOptions(SConfig *pCfg, char *name, bool forServer) { - if (forServer) return taosCfgDynamicOptionsForServer(pCfg, name); - return taosCfgDynamicOptionsForClient(pCfg, name); +int32_t taosCfgDynamicOptions(SConfig *pCfg, const char *name, bool forServer) { + if (forServer) { + return taosCfgDynamicOptionsForServer(pCfg, name); + } else { + return taosCfgDynamicOptionsForClient(pCfg, name); + } } void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal) { diff --git a/source/common/src/tmisce.c b/source/common/src/tmisce.c index 77dd8344b1..0a9e8f434b 100644 --- a/source/common/src/tmisce.c +++ b/source/common/src/tmisce.c @@ -17,6 +17,8 @@ #include "tmisce.h" #include "tglobal.h" #include "tjson.h" +#include "tdatablock.h" + int32_t taosGetFqdnPortFromEp(const char* ep, SEp* pEp) { pEp->port = 0; memset(pEp->fqdn, 0, TSDB_FQDN_LEN); @@ -97,10 +99,10 @@ void epsetSort(SEpSet* pDst) { SEp* s = &pDst->eps[j + 1]; int cmp = strncmp(f->fqdn, s->fqdn, sizeof(f->fqdn)); if (cmp > 0 || (cmp == 0 && f->port > s->port)) { - SEp ep = {0}; - epAssign(&ep, f); + SEp ep1 = {0}; + epAssign(&ep1, f); epAssign(f, s); - epAssign(s, &ep); + epAssign(s, &ep1); } } } @@ -216,3 +218,43 @@ int32_t taosGenCrashJsonMsg(int signum, char** pMsg, int64_t clusterId, int64_t return TSDB_CODE_SUCCESS; } + +int32_t dumpConfToDataBlock(SSDataBlock* pBlock, int32_t startCol) { + SConfig* pConf = taosGetCfg(); + int32_t numOfRows = 0; + int32_t col = startCol; + SConfigItem* pItem = NULL; + + blockDataEnsureCapacity(pBlock, cfgGetSize(pConf)); + SConfigIter* pIter = cfgCreateIter(pConf); + + while ((pItem = cfgNextIter(pIter)) != NULL) { + col = startCol; + + // GRANT_CFG_SKIP; + char name[TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE] = {0}; + STR_WITH_MAXSIZE_TO_VARSTR(name, pItem->name, TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE); + SColumnInfoData* pColInfo = taosArrayGet(pBlock->pDataBlock, col++); + colDataSetVal(pColInfo, numOfRows, name, false); + + char value[TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE] = {0}; + int32_t valueLen = 0; + cfgDumpItemValue(pItem, &value[VARSTR_HEADER_SIZE], TSDB_CONFIG_VALUE_LEN, &valueLen); + varDataSetLen(value, valueLen); + pColInfo = taosArrayGet(pBlock->pDataBlock, col++); + colDataSetVal(pColInfo, numOfRows, value, false); + + char scope[TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE] = {0}; + cfgDumpItemScope(pItem, &scope[VARSTR_HEADER_SIZE], TSDB_CONFIG_SCOPE_LEN, &valueLen); + varDataSetLen(scope, valueLen); + pColInfo = taosArrayGet(pBlock->pDataBlock, col++); + colDataSetVal(pColInfo, numOfRows, scope, false); + + numOfRows++; + } + + pBlock->info.rows = numOfRows; + + cfgDestroyIter(pIter); + return TSDB_CODE_SUCCESS; +} diff --git a/source/common/src/tmsg.c b/source/common/src/tmsg.c index 6c2358a1d7..45b0b6ac2b 100644 --- a/source/common/src/tmsg.c +++ b/source/common/src/tmsg.c @@ -69,7 +69,7 @@ static int32_t tDecodeSVAlterTbReqCommon(SDecoder *pDecoder, SVAlterTbReq *pReq); static int32_t tDecodeSBatchDeleteReqCommon(SDecoder *pDecoder, SBatchDeleteReq *pReq); static int32_t tEncodeTableTSMAInfoRsp(SEncoder *pEncoder, const STableTSMAInfoRsp *pRsp); -static int32_t tDecodeTableTSMAInfoRsp(SDecoder* pDecoder, STableTSMAInfoRsp* pRsp); +static int32_t tDecodeTableTSMAInfoRsp(SDecoder *pDecoder, STableTSMAInfoRsp *pRsp); int32_t tInitSubmitMsgIter(const SSubmitReq *pMsg, SSubmitMsgIter *pIter) { if (pMsg == NULL) { @@ -895,8 +895,8 @@ int32_t tSerializeSMCreateSmaReq(void *buf, int32_t bufLen, SMCreateSmaReq *pReq if 
(tEncodeI64(&encoder, pReq->normSourceTbUid) < 0) return -1; if (tEncodeI32(&encoder, taosArrayGetSize(pReq->pVgroupVerList)) < 0) return -1; - for(int32_t i = 0; i < taosArrayGetSize(pReq->pVgroupVerList); ++i) { - SVgroupVer* p = taosArrayGet(pReq->pVgroupVerList, i); + for (int32_t i = 0; i < taosArrayGetSize(pReq->pVgroupVerList); ++i) { + SVgroupVer *p = taosArrayGet(pReq->pVgroupVerList, i); if (tEncodeI32(&encoder, p->vgId) < 0) return -1; if (tEncodeI64(&encoder, p->ver) < 0) return -1; } @@ -8000,7 +8000,7 @@ int32_t tDeserializeSCMCreateStreamReq(void *buf, int32_t bufLen, SCMCreateStrea } } if (!tDecodeIsEnd(&decoder)) { - if (tDecodeI64(&decoder, &pReq->smaId)< 0) return -1; + if (tDecodeI64(&decoder, &pReq->smaId) < 0) return -1; } tEndDecode(&decoder); @@ -8445,8 +8445,8 @@ static int32_t tDecodeSVDropTbRsp(SDecoder *pCoder, SVDropTbRsp *pReq) { } int32_t tEncodeSVDropTbBatchReq(SEncoder *pCoder, const SVDropTbBatchReq *pReq) { - int32_t nReqs = taosArrayGetSize(pReq->pArray); - SVDropTbReq *pDropTbReq; + int32_t nReqs = taosArrayGetSize(pReq->pArray); + SVDropTbReq *pDropTbReq; if (tStartEncode(pCoder) < 0) return -1; @@ -8709,6 +8709,7 @@ int32_t tEncodeSVAlterTbReq(SEncoder *pEncoder, const SVAlterTbReq *pReq) { } break; case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS: + if (tEncodeCStr(pEncoder, pReq->colName) < 0) return -1; if (tEncodeU32(pEncoder, pReq->compress) < 0) return -1; break; default: @@ -8763,6 +8764,7 @@ static int32_t tDecodeSVAlterTbReqCommon(SDecoder *pDecoder, SVAlterTbReq *pReq) } break; case TSDB_ALTER_TABLE_UPDATE_COLUMN_COMPRESS: + if (tDecodeCStr(pDecoder, &pReq->colName) < 0) return -1; if (tDecodeU32(pDecoder, &pReq->compress) < 0) return -1; break; default: @@ -9200,7 +9202,7 @@ int32_t tEncodeMqDataRspCommon(SEncoder *pEncoder, const SMqDataRspCommon *pRsp) int32_t tEncodeMqDataRsp(SEncoder *pEncoder, const void *pRsp) { if (tEncodeMqDataRspCommon(pEncoder, pRsp) < 0) return -1; - if (tEncodeI64(pEncoder, ((SMqDataRsp*)pRsp)->sleepTime) < 0) return -1; + if (tEncodeI64(pEncoder, ((SMqDataRsp *)pRsp)->sleepTime) < 0) return -1; return 0; } @@ -9253,7 +9255,7 @@ int32_t tDecodeMqDataRspCommon(SDecoder *pDecoder, SMqDataRspCommon *pRsp) { int32_t tDecodeMqDataRsp(SDecoder *pDecoder, void *pRsp) { if (tDecodeMqDataRspCommon(pDecoder, pRsp) < 0) return -1; if (!tDecodeIsEnd(pDecoder)) { - if (tDecodeI64(pDecoder, &((SMqDataRsp*)pRsp)->sleepTime) < 0) return -1; + if (tDecodeI64(pDecoder, &((SMqDataRsp *)pRsp)->sleepTime) < 0) return -1; } return 0; @@ -9272,9 +9274,7 @@ static void tDeleteMqDataRspCommon(void *rsp) { tOffsetDestroy(&pRsp->rspOffset); } -void tDeleteMqDataRsp(void *rsp) { - tDeleteMqDataRspCommon(rsp); -} +void tDeleteMqDataRsp(void *rsp) { tDeleteMqDataRspCommon(rsp); } int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const void *rsp) { if (tEncodeMqDataRspCommon(pEncoder, rsp) < 0) return -1; @@ -9300,7 +9300,7 @@ int32_t tDecodeSTaosxRsp(SDecoder *pDecoder, void *rsp) { pRsp->createTableLen = taosArrayInit(pRsp->createTableNum, sizeof(int32_t)); pRsp->createTableReq = taosArrayInit(pRsp->createTableNum, sizeof(void *)); for (int32_t i = 0; i < pRsp->createTableNum; i++) { - void * pCreate = NULL; + void *pCreate = NULL; uint64_t len = 0; if (tDecodeBinaryAlloc(pDecoder, &pCreate, &len) < 0) return -1; int32_t l = (int32_t)len; @@ -10114,7 +10114,7 @@ void setFieldWithOptions(SFieldWithOptions *fieldWithOptions, SField *field) { fieldWithOptions->type = field->type; strncpy(fieldWithOptions->name, field->name, TSDB_COL_NAME_LEN); } 
-int32_t tSerializeTableTSMAInfoReq(void* buf, int32_t bufLen, const STableTSMAInfoReq* pReq) { +int32_t tSerializeTableTSMAInfoReq(void *buf, int32_t bufLen, const STableTSMAInfoReq *pReq) { SEncoder encoder = {0}; tEncoderInit(&encoder, buf, bufLen); @@ -10129,13 +10129,13 @@ int32_t tSerializeTableTSMAInfoReq(void* buf, int32_t bufLen, const STableTSMAIn return tlen; } -int32_t tDeserializeTableTSMAInfoReq(void* buf, int32_t bufLen, STableTSMAInfoReq* pReq) { +int32_t tDeserializeTableTSMAInfoReq(void *buf, int32_t bufLen, STableTSMAInfoReq *pReq) { SDecoder decoder = {0}; tDecoderInit(&decoder, buf, bufLen); if (tStartDecode(&decoder) < 0) return -1; if (tDecodeCStrTo(&decoder, pReq->name) < 0) return -1; - if (tDecodeI8(&decoder, (uint8_t*)&pReq->fetchingWithTsmaName) < 0) return -1; + if (tDecodeI8(&decoder, (uint8_t *)&pReq->fetchingWithTsmaName) < 0) return -1; tEndDecode(&decoder); @@ -10143,7 +10143,7 @@ int32_t tDeserializeTableTSMAInfoReq(void* buf, int32_t bufLen, STableTSMAInfoRe return 0; } -static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pTsmaInfo) { +static int32_t tEncodeTableTSMAInfo(SEncoder *pEncoder, const STableTSMAInfo *pTsmaInfo) { if (tEncodeCStr(pEncoder, pTsmaInfo->name) < 0) return -1; if (tEncodeU64(pEncoder, pTsmaInfo->tsmaId) < 0) return -1; if (tEncodeCStr(pEncoder, pTsmaInfo->tb) < 0) return -1; @@ -10160,7 +10160,7 @@ static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pT int32_t size = pTsmaInfo->pFuncs ? pTsmaInfo->pFuncs->size : 0; if (tEncodeI32(pEncoder, size) < 0) return -1; for (int32_t i = 0; i < size; ++i) { - STableTSMAFuncInfo* pFuncInfo = taosArrayGet(pTsmaInfo->pFuncs, i); + STableTSMAFuncInfo *pFuncInfo = taosArrayGet(pTsmaInfo->pFuncs, i); if (tEncodeI32(pEncoder, pFuncInfo->funcId) < 0) return -1; if (tEncodeI16(pEncoder, pFuncInfo->colId) < 0) return -1; } @@ -10168,13 +10168,13 @@ static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pT size = pTsmaInfo->pTags ? pTsmaInfo->pTags->size : 0; if (tEncodeI32(pEncoder, size) < 0) return -1; for (int32_t i = 0; i < size; ++i) { - const SSchema* pSchema = taosArrayGet(pTsmaInfo->pTags, i); + const SSchema *pSchema = taosArrayGet(pTsmaInfo->pTags, i); if (tEncodeSSchema(pEncoder, pSchema) < 0) return -1; } size = pTsmaInfo->pUsedCols ? 
pTsmaInfo->pUsedCols->size : 0; if (tEncodeI32(pEncoder, size) < 0) return -1; for (int32_t i = 0; i < size; ++i) { - const SSchema* pSchema = taosArrayGet(pTsmaInfo->pUsedCols, i); + const SSchema *pSchema = taosArrayGet(pTsmaInfo->pUsedCols, i); if (tEncodeSSchema(pEncoder, pSchema) < 0) return -1; } @@ -10187,7 +10187,7 @@ static int32_t tEncodeTableTSMAInfo(SEncoder* pEncoder, const STableTSMAInfo* pT return 0; } -static int32_t tDecodeTableTSMAInfo(SDecoder* pDecoder, STableTSMAInfo* pTsmaInfo) { +static int32_t tDecodeTableTSMAInfo(SDecoder *pDecoder, STableTSMAInfo *pTsmaInfo) { if (tDecodeCStrTo(pDecoder, pTsmaInfo->name) < 0) return -1; if (tDecodeU64(pDecoder, &pTsmaInfo->tsmaId) < 0) return -1; if (tDecodeCStrTo(pDecoder, pTsmaInfo->tb) < 0) return -1; @@ -10219,7 +10219,7 @@ static int32_t tDecodeTableTSMAInfo(SDecoder* pDecoder, STableTSMAInfo* pTsmaInf if (!pTsmaInfo->pTags) return -1; for (int32_t i = 0; i < size; ++i) { SSchema schema = {0}; - if(tDecodeSSchema(pDecoder, &schema) < 0) return -1; + if (tDecodeSSchema(pDecoder, &schema) < 0) return -1; taosArrayPush(pTsmaInfo->pTags, &schema); } } @@ -10239,7 +10239,7 @@ static int32_t tDecodeTableTSMAInfo(SDecoder* pDecoder, STableTSMAInfo* pTsmaInf if (tDecodeI64(pDecoder, &pTsmaInfo->reqTs) < 0) return -1; if (tDecodeI64(pDecoder, &pTsmaInfo->rspTs) < 0) return -1; if (tDecodeI64(pDecoder, &pTsmaInfo->delayDuration) < 0) return -1; - if (tDecodeI8(pDecoder, (int8_t*)&pTsmaInfo->fillHistoryFinished) < 0) return -1; + if (tDecodeI8(pDecoder, (int8_t *)&pTsmaInfo->fillHistoryFinished) < 0) return -1; return 0; } @@ -10247,13 +10247,13 @@ static int32_t tEncodeTableTSMAInfoRsp(SEncoder *pEncoder, const STableTSMAInfoR int32_t size = pRsp->pTsmas ? pRsp->pTsmas->size : 0; if (tEncodeI32(pEncoder, size) < 0) return -1; for (int32_t i = 0; i < size; ++i) { - STableTSMAInfo* pInfo = taosArrayGetP(pRsp->pTsmas, i); + STableTSMAInfo *pInfo = taosArrayGetP(pRsp->pTsmas, i); if (tEncodeTableTSMAInfo(pEncoder, pInfo) < 0) return -1; } return 0; } -static int32_t tDecodeTableTSMAInfoRsp(SDecoder* pDecoder, STableTSMAInfoRsp* pRsp) { +static int32_t tDecodeTableTSMAInfoRsp(SDecoder *pDecoder, STableTSMAInfoRsp *pRsp) { int32_t size = 0; if (tDecodeI32(pDecoder, &size) < 0) return -1; if (size <= 0) return 0; @@ -10268,7 +10268,7 @@ static int32_t tDecodeTableTSMAInfoRsp(SDecoder* pDecoder, STableTSMAInfoRsp* pR return 0; } -int32_t tSerializeTableTSMAInfoRsp(void* buf, int32_t bufLen, const STableTSMAInfoRsp* pRsp) { +int32_t tSerializeTableTSMAInfoRsp(void *buf, int32_t bufLen, const STableTSMAInfoRsp *pRsp) { SEncoder encoder = {0}; tEncoderInit(&encoder, buf, bufLen); @@ -10282,7 +10282,7 @@ int32_t tSerializeTableTSMAInfoRsp(void* buf, int32_t bufLen, const STableTSMAIn return tlen; } -int32_t tDeserializeTableTSMAInfoRsp(void* buf, int32_t bufLen, STableTSMAInfoRsp* pRsp) { +int32_t tDeserializeTableTSMAInfoRsp(void *buf, int32_t bufLen, STableTSMAInfoRsp *pRsp) { SDecoder decoder = {0}; tDecoderInit(&decoder, buf, bufLen); @@ -10295,7 +10295,7 @@ int32_t tDeserializeTableTSMAInfoRsp(void* buf, int32_t bufLen, STableTSMAInfoRs return 0; } -void tFreeTableTSMAInfo(void* p) { +void tFreeTableTSMAInfo(void *p) { STableTSMAInfo *pTsmaInfo = p; if (pTsmaInfo) { taosArrayDestroy(pTsmaInfo->pFuncs); @@ -10305,20 +10305,20 @@ void tFreeTableTSMAInfo(void* p) { } } -void tFreeAndClearTableTSMAInfo(void* p) { - STableTSMAInfo* pTsmaInfo = (STableTSMAInfo*)p; +void tFreeAndClearTableTSMAInfo(void *p) { + STableTSMAInfo *pTsmaInfo = 
(STableTSMAInfo *)p; if (pTsmaInfo) { tFreeTableTSMAInfo(pTsmaInfo); taosMemoryFree(pTsmaInfo); } } -int32_t tCloneTbTSMAInfo(STableTSMAInfo* pInfo, STableTSMAInfo** pRes) { +int32_t tCloneTbTSMAInfo(STableTSMAInfo *pInfo, STableTSMAInfo **pRes) { int32_t code = TSDB_CODE_SUCCESS; if (NULL == pInfo) { return TSDB_CODE_SUCCESS; } - STableTSMAInfo* pRet = taosMemoryCalloc(1, sizeof(STableTSMAInfo)); + STableTSMAInfo *pRet = taosMemoryCalloc(1, sizeof(STableTSMAInfo)); if (!pRet) return TSDB_CODE_OUT_OF_MEMORY; *pRet = *pInfo; @@ -10357,7 +10357,7 @@ static int32_t tEncodeStreamProgressReq(SEncoder *pEncoder, const SStreamProgres return 0; } -int32_t tSerializeStreamProgressReq(void* buf, int32_t bufLen, const SStreamProgressReq* pReq) { +int32_t tSerializeStreamProgressReq(void *buf, int32_t bufLen, const SStreamProgressReq *pReq) { SEncoder encoder = {0}; tEncoderInit(&encoder, buf, bufLen); @@ -10371,7 +10371,7 @@ int32_t tSerializeStreamProgressReq(void* buf, int32_t bufLen, const SStreamProg return tlen; } -static int32_t tDecodeStreamProgressReq(SDecoder* pDecoder, SStreamProgressReq* pReq) { +static int32_t tDecodeStreamProgressReq(SDecoder *pDecoder, SStreamProgressReq *pReq) { if (tDecodeI64(pDecoder, &pReq->streamId) < 0) return -1; if (tDecodeI32(pDecoder, &pReq->vgId) < 0) return -1; if (tDecodeI32(pDecoder, &pReq->fetchIdx) < 0) return -1; @@ -10379,7 +10379,7 @@ static int32_t tDecodeStreamProgressReq(SDecoder* pDecoder, SStreamProgressReq* return 0; } -int32_t tDeserializeStreamProgressReq(void* buf, int32_t bufLen, SStreamProgressReq* pReq) { +int32_t tDeserializeStreamProgressReq(void *buf, int32_t bufLen, SStreamProgressReq *pReq) { SDecoder decoder = {0}; tDecoderInit(&decoder, (char *)buf, bufLen); @@ -10392,7 +10392,7 @@ int32_t tDeserializeStreamProgressReq(void* buf, int32_t bufLen, SStreamProgress return 0; } -static int32_t tEncodeStreamProgressRsp(SEncoder* pEncoder, const SStreamProgressRsp* pRsp) { +static int32_t tEncodeStreamProgressRsp(SEncoder *pEncoder, const SStreamProgressRsp *pRsp) { if (tEncodeI64(pEncoder, pRsp->streamId) < 0) return -1; if (tEncodeI32(pEncoder, pRsp->vgId) < 0) return -1; if (tEncodeI8(pEncoder, pRsp->fillHisFinished) < 0) return -1; @@ -10402,7 +10402,7 @@ static int32_t tEncodeStreamProgressRsp(SEncoder* pEncoder, const SStreamProgres return 0; } -int32_t tSerializeStreamProgressRsp(void* buf, int32_t bufLen, const SStreamProgressRsp* pRsp) { +int32_t tSerializeStreamProgressRsp(void *buf, int32_t bufLen, const SStreamProgressRsp *pRsp) { SEncoder encoder = {0}; tEncoderInit(&encoder, buf, bufLen); @@ -10416,17 +10416,17 @@ int32_t tSerializeStreamProgressRsp(void* buf, int32_t bufLen, const SStreamProg return tlen; } -static int32_t tDecodeStreamProgressRsp(SDecoder* pDecoder, SStreamProgressRsp* pRsp) { +static int32_t tDecodeStreamProgressRsp(SDecoder *pDecoder, SStreamProgressRsp *pRsp) { if (tDecodeI64(pDecoder, &pRsp->streamId) < 0) return -1; if (tDecodeI32(pDecoder, &pRsp->vgId) < 0) return -1; - if (tDecodeI8(pDecoder, (int8_t*)&pRsp->fillHisFinished) < 0) return -1; + if (tDecodeI8(pDecoder, (int8_t *)&pRsp->fillHisFinished) < 0) return -1; if (tDecodeI64(pDecoder, &pRsp->progressDelay) < 0) return -1; if (tDecodeI32(pDecoder, &pRsp->fetchIdx) < 0) return -1; if (tDecodeI32(pDecoder, &pRsp->subFetchIdx) < 0) return -1; return 0; } -int32_t tDeserializeSStreamProgressRsp(void* buf, int32_t bufLen, SStreamProgressRsp* pRsp) { +int32_t tDeserializeSStreamProgressRsp(void *buf, int32_t bufLen, SStreamProgressRsp *pRsp) { SDecoder 
decoder = {0}; tDecoderInit(&decoder, buf, bufLen); @@ -10440,22 +10440,22 @@ int32_t tDeserializeSStreamProgressRsp(void* buf, int32_t bufLen, SStreamProgres } int32_t tEncodeSMDropTbReqOnSingleVg(SEncoder *pEncoder, const SMDropTbReqsOnSingleVg *pReq) { - const SVgroupInfo* pVgInfo = &pReq->vgInfo; + const SVgroupInfo *pVgInfo = &pReq->vgInfo; if (tEncodeI32(pEncoder, pVgInfo->vgId) < 0) return -1; if (tEncodeU32(pEncoder, pVgInfo->hashBegin) < 0) return -1; if (tEncodeU32(pEncoder, pVgInfo->hashEnd) < 0) return -1; if (tEncodeSEpSet(pEncoder, &pVgInfo->epSet) < 0) return -1; if (tEncodeI32(pEncoder, pVgInfo->numOfTable) < 0) return -1; - int32_t size = pReq->pTbs ? pReq->pTbs->size: 0; + int32_t size = pReq->pTbs ? pReq->pTbs->size : 0; if (tEncodeI32(pEncoder, size) < 0) return -1; for (int32_t i = 0; i < size; ++i) { - const SVDropTbReq* pInfo = taosArrayGet(pReq->pTbs, i); + const SVDropTbReq *pInfo = taosArrayGet(pReq->pTbs, i); if (tEncodeSVDropTbReq(pEncoder, pInfo) < 0) return -1; } return 0; } -int32_t tDecodeSMDropTbReqOnSingleVg(SDecoder* pDecoder, SMDropTbReqsOnSingleVg* pReq) { +int32_t tDecodeSMDropTbReqOnSingleVg(SDecoder *pDecoder, SMDropTbReqsOnSingleVg *pReq) { if (tDecodeI32(pDecoder, &pReq->vgInfo.vgId) < 0) return -1; if (tDecodeU32(pDecoder, &pReq->vgInfo.hashBegin) < 0) return -1; if (tDecodeU32(pDecoder, &pReq->vgInfo.hashEnd) < 0) return -1; @@ -10477,18 +10477,18 @@ int32_t tDecodeSMDropTbReqOnSingleVg(SDecoder* pDecoder, SMDropTbReqsOnSingleVg* } void tFreeSMDropTbReqOnSingleVg(void *p) { - SMDropTbReqsOnSingleVg* pReq = p; + SMDropTbReqsOnSingleVg *pReq = p; taosArrayDestroy(pReq->pTbs); } -int32_t tSerializeSMDropTbsReq(void* buf, int32_t bufLen, const SMDropTbsReq* pReq){ +int32_t tSerializeSMDropTbsReq(void *buf, int32_t bufLen, const SMDropTbsReq *pReq) { SEncoder encoder = {0}; tEncoderInit(&encoder, buf, bufLen); tStartEncode(&encoder); int32_t size = pReq->pVgReqs ? pReq->pVgReqs->size : 0; if (tEncodeI32(&encoder, size) < 0) return -1; for (int32_t i = 0; i < size; ++i) { - SMDropTbReqsOnSingleVg* pVgReq = taosArrayGet(pReq->pVgReqs, i); + SMDropTbReqsOnSingleVg *pVgReq = taosArrayGet(pReq->pVgReqs, i); if (tEncodeSMDropTbReqOnSingleVg(&encoder, pVgReq) < 0) return -1; } tEndEncode(&encoder); @@ -10497,7 +10497,7 @@ int32_t tSerializeSMDropTbsReq(void* buf, int32_t bufLen, const SMDropTbsReq* pR return tlen; } -int32_t tDeserializeSMDropTbsReq(void* buf, int32_t bufLen, SMDropTbsReq* pReq) { +int32_t tDeserializeSMDropTbsReq(void *buf, int32_t bufLen, SMDropTbsReq *pReq) { SDecoder decoder = {0}; tDecoderInit(&decoder, buf, bufLen); tStartDecode(&decoder); @@ -10518,12 +10518,12 @@ int32_t tDeserializeSMDropTbsReq(void* buf, int32_t bufLen, SMDropTbsReq* pReq) return 0; } -void tFreeSMDropTbsReq(void* p) { - SMDropTbsReq* pReq = p; +void tFreeSMDropTbsReq(void *p) { + SMDropTbsReq *pReq = p; taosArrayDestroyEx(pReq->pVgReqs, tFreeSMDropTbReqOnSingleVg); } -int32_t tEncodeVFetchTtlExpiredTbsRsp(SEncoder* pCoder, const SVFetchTtlExpiredTbsRsp* pRsp) { +int32_t tEncodeVFetchTtlExpiredTbsRsp(SEncoder *pCoder, const SVFetchTtlExpiredTbsRsp *pRsp) { if (tEncodeI32(pCoder, pRsp->vgId) < 0) return -1; int32_t size = pRsp->pExpiredTbs ? 
pRsp->pExpiredTbs->size : 0; if (tEncodeI32(pCoder, size) < 0) return -1; @@ -10533,7 +10533,7 @@ int32_t tEncodeVFetchTtlExpiredTbsRsp(SEncoder* pCoder, const SVFetchTtlExpiredT return 0; } -int32_t tDecodeVFetchTtlExpiredTbsRsp(SDecoder* pCoder, SVFetchTtlExpiredTbsRsp* pRsp) { +int32_t tDecodeVFetchTtlExpiredTbsRsp(SDecoder *pCoder, SVFetchTtlExpiredTbsRsp *pRsp) { if (tDecodeI32(pCoder, &pRsp->vgId) < 0) return -1; int32_t size = 0; if (tDecodeI32(pCoder, &size) < 0) return -1; @@ -10549,7 +10549,7 @@ int32_t tDecodeVFetchTtlExpiredTbsRsp(SDecoder* pCoder, SVFetchTtlExpiredTbsRsp* return 0; } -void tFreeFetchTtlExpiredTbsRsp(void* p) { - SVFetchTtlExpiredTbsRsp* pRsp = p; +void tFreeFetchTtlExpiredTbsRsp(void *p) { + SVFetchTtlExpiredTbsRsp *pRsp = p; taosArrayDestroy(pRsp->pExpiredTbs); } diff --git a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c index 12e414b30d..56fdb463c4 100644 --- a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c +++ b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c @@ -333,39 +333,10 @@ SSDataBlock *dmBuildVariablesBlock(void) { } int32_t dmAppendVariablesToBlock(SSDataBlock *pBlock, int32_t dnodeId) { - int32_t numOfCfg = taosArrayGetSize(tsCfg->array); - int32_t numOfRows = 0; - blockDataEnsureCapacity(pBlock, numOfCfg); + /*int32_t code = */dumpConfToDataBlock(pBlock, 1); - for (int32_t i = 0, c = 0; i < numOfCfg; ++i, c = 0) { - SConfigItem *pItem = taosArrayGet(tsCfg->array, i); - // GRANT_CFG_SKIP; - - SColumnInfoData *pColInfo = taosArrayGet(pBlock->pDataBlock, c++); - colDataSetVal(pColInfo, i, (const char *)&dnodeId, false); - - char name[TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE] = {0}; - STR_WITH_MAXSIZE_TO_VARSTR(name, pItem->name, TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE); - pColInfo = taosArrayGet(pBlock->pDataBlock, c++); - colDataSetVal(pColInfo, i, name, false); - - char value[TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE] = {0}; - int32_t valueLen = 0; - cfgDumpItemValue(pItem, &value[VARSTR_HEADER_SIZE], TSDB_CONFIG_VALUE_LEN, &valueLen); - varDataSetLen(value, valueLen); - pColInfo = taosArrayGet(pBlock->pDataBlock, c++); - colDataSetVal(pColInfo, i, value, false); - - char scope[TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE] = {0}; - cfgDumpItemScope(pItem, &scope[VARSTR_HEADER_SIZE], TSDB_CONFIG_SCOPE_LEN, &valueLen); - varDataSetLen(scope, valueLen); - pColInfo = taosArrayGet(pBlock->pDataBlock, c++); - colDataSetVal(pColInfo, i, scope, false); - - numOfRows++; - } - - pBlock->info.rows = numOfRows; + SColumnInfoData *pColInfo = taosArrayGet(pBlock->pDataBlock, 0); + colDataSetNItems(pColInfo, 0, (const char *)&dnodeId, pBlock->info.rows, false); return TSDB_CODE_SUCCESS; } diff --git a/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c b/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c index 75d5669824..885086e37a 100644 --- a/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c +++ b/source/dnode/mgmt/mgmt_mnode/src/mmWorker.c @@ -16,6 +16,8 @@ #define _DEFAULT_SOURCE #include "mmInt.h" +#define PROCESS_THRESHOLD (2000 * 1000) + static inline int32_t mmAcquire(SMnodeMgmt *pMgmt) { int32_t code = 0; taosThreadRwlockRdlock(&pMgmt->lock); @@ -53,6 +55,14 @@ static void mmProcessRpcMsg(SQueueInfo *pInfo, SRpcMsg *pMsg) { int32_t code = mndProcessRpcMsg(pMsg); + if (pInfo->timestamp != 0) { + int64_t cost = taosGetTimestampUs() - pInfo->timestamp; + if (cost > PROCESS_THRESHOLD) { + dGWarn("worker:%d,message has been processed for too long, type:%s, cost: %" PRId64 "s", pInfo->threadNum, + TMSG_INFO(pMsg->msgType), cost / (1000 
* 1000)); + } + } + if (IsReq(pMsg) && pMsg->info.handle != NULL && code != TSDB_CODE_ACTION_IN_PROGRESS) { if (code != 0 && terrno != 0) code = terrno; mmSendRsp(pMsg, code); diff --git a/source/dnode/mnode/impl/src/mndArbGroup.c b/source/dnode/mnode/impl/src/mndArbGroup.c index 92ab5274e4..d0a86bdde7 100644 --- a/source/dnode/mnode/impl/src/mndArbGroup.c +++ b/source/dnode/mnode/impl/src/mndArbGroup.c @@ -27,6 +27,8 @@ #define ARBGROUP_VER_NUMBER 1 #define ARBGROUP_RESERVE_SIZE 64 +static SHashObj *arbUpdateHash = NULL; + static int32_t mndArbGroupActionInsert(SSdb *pSdb, SArbGroup *pGroup); static int32_t mndArbGroupActionUpdate(SSdb *pSdb, SArbGroup *pOld, SArbGroup *pNew); static int32_t mndArbGroupActionDelete(SSdb *pSdb, SArbGroup *pGroup); @@ -74,10 +76,14 @@ int32_t mndInitArbGroup(SMnode *pMnode) { mndAddShowRetrieveHandle(pMnode, TSDB_MGMT_TABLE_ARBGROUP, mndRetrieveArbGroups); mndAddShowFreeIterHandle(pMnode, TSDB_MGMT_TABLE_ARBGROUP, mndCancelGetNextArbGroup); + arbUpdateHash = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), false, HASH_ENTRY_LOCK); + return sdbSetTable(pMnode->pSdb, table); } -void mndCleanupArbGroup(SMnode *pMnode) {} +void mndCleanupArbGroup(SMnode *pMnode) { + taosHashCleanup(arbUpdateHash); +} SArbGroup *mndAcquireArbGroup(SMnode *pMnode, int32_t vgId) { SArbGroup *pGroup = sdbAcquire(pMnode->pSdb, SDB_ARBGROUP, &vgId); @@ -221,8 +227,7 @@ static int32_t mndArbGroupActionUpdate(SSdb *pSdb, SArbGroup *pOld, SArbGroup *p mInfo("arbgroup:%d, skip to perform update action, old row:%p new row:%p, old version:%" PRId64 " new version:%" PRId64, pOld->vgId, pOld, pNew, pOld->version, pNew->version); - taosThreadMutexUnlock(&pOld->mutex); - return 0; + goto _OVER; } for (int i = 0; i < TSDB_ARB_GROUP_MEMBER_NUM; i++) { @@ -232,7 +237,11 @@ static int32_t mndArbGroupActionUpdate(SSdb *pSdb, SArbGroup *pOld, SArbGroup *p pOld->assignedLeader.dnodeId = pNew->assignedLeader.dnodeId; memcpy(pOld->assignedLeader.token, pNew->assignedLeader.token, TSDB_ARB_TOKEN_SIZE); pOld->version++; + +_OVER: taosThreadMutexUnlock(&pOld->mutex); + + taosHashRemove(arbUpdateHash, &pOld->vgId, sizeof(int32_t)); return 0; } @@ -645,6 +654,11 @@ static void *mndBuildArbUpdateGroupReq(int32_t *pContLen, SArbGroup *pNewGroup) } static int32_t mndPullupArbUpdateGroup(SMnode *pMnode, SArbGroup *pNewGroup) { + if (taosHashGet(arbUpdateHash, &pNewGroup->vgId, sizeof(pNewGroup->vgId)) != NULL) { + mInfo("vgId:%d, arb skip to pullup arb-update-group request, since it is in process", pNewGroup->vgId); + return 0; + } + int32_t contLen = 0; void *pHead = mndBuildArbUpdateGroupReq(&contLen, pNewGroup); if (!pHead) { @@ -653,7 +667,11 @@ static int32_t mndPullupArbUpdateGroup(SMnode *pMnode, SArbGroup *pNewGroup) { } SRpcMsg rpcMsg = {.msgType = TDMT_MND_ARB_UPDATE_GROUP, .pCont = pHead, .contLen = contLen, .info.noResp = true}; - return tmsgPutToQueue(&pMnode->msgCb, WRITE_QUEUE, &rpcMsg); + int32_t ret = tmsgPutToQueue(&pMnode->msgCb, WRITE_QUEUE, &rpcMsg); + if (ret == 0) { + taosHashPut(arbUpdateHash, &pNewGroup->vgId, sizeof(pNewGroup->vgId), NULL, 0); + } + return ret; } static int32_t mndProcessArbUpdateGroupReq(SRpcMsg *pReq) { @@ -930,8 +948,12 @@ static int32_t mndProcessArbCheckSyncRsp(SRpcMsg *pRsp) { SVArbCheckSyncRsp syncRsp = {0}; if (tDeserializeSVArbCheckSyncRsp(pRsp->pCont, pRsp->contLen, &syncRsp) != 0) { - terrno = TSDB_CODE_INVALID_MSG; mInfo("arb sync check failed, since:%s", tstrerror(pRsp->code)); + if (pRsp->code == TSDB_CODE_MND_ARB_TOKEN_MISMATCH) { + 
terrno = TSDB_CODE_SUCCESS; + return 0; + } + terrno = TSDB_CODE_INVALID_MSG; return -1; } diff --git a/source/dnode/mnode/impl/src/mndSma.c b/source/dnode/mnode/impl/src/mndSma.c index aaa0c42262..02c932289f 100644 --- a/source/dnode/mnode/impl/src/mndSma.c +++ b/source/dnode/mnode/impl/src/mndSma.c @@ -1457,8 +1457,8 @@ static void mndCreateTSMABuildCreateStreamReq(SCreateTSMACxt *pCxt) { pCxt->pCreateStreamReq->igUpdate = 0; pCxt->pCreateStreamReq->lastTs = pCxt->pCreateSmaReq->lastTs; pCxt->pCreateStreamReq->smaId = pCxt->pSma->uid; - pCxt->pCreateStreamReq->ast = strdup(pCxt->pCreateSmaReq->ast); - pCxt->pCreateStreamReq->sql = strdup(pCxt->pCreateSmaReq->sql); + pCxt->pCreateStreamReq->ast = taosStrdup(pCxt->pCreateSmaReq->ast); + pCxt->pCreateStreamReq->sql = taosStrdup(pCxt->pCreateSmaReq->sql); // construct tags pCxt->pCreateStreamReq->pTags = taosArrayInit(pCxt->pCreateStreamReq->numOfTags, sizeof(SField)); @@ -1494,7 +1494,7 @@ static void mndCreateTSMABuildCreateStreamReq(SCreateTSMACxt *pCxt) { static void mndCreateTSMABuildDropStreamReq(SCreateTSMACxt* pCxt) { tstrncpy(pCxt->pDropStreamReq->name, pCxt->streamName, TSDB_STREAM_FNAME_LEN); pCxt->pDropStreamReq->igNotExists = false; - pCxt->pDropStreamReq->sql = strdup(pCxt->pDropSmaReq->name); + pCxt->pDropStreamReq->sql = taosStrdup(pCxt->pDropSmaReq->name); pCxt->pDropStreamReq->sqlLen = strlen(pCxt->pDropStreamReq->sql); } diff --git a/source/dnode/mnode/impl/src/mndStream.c b/source/dnode/mnode/impl/src/mndStream.c index d225c2e272..3dc26b4118 100644 --- a/source/dnode/mnode/impl/src/mndStream.c +++ b/source/dnode/mnode/impl/src/mndStream.c @@ -847,7 +847,7 @@ int64_t mndStreamGenChkptId(SMnode *pMnode, bool lock) { if (pIter == NULL) break; maxChkptId = TMAX(maxChkptId, pStream->checkpointId); - mDebug("stream:%p, %s id:%" PRIx64 "checkpoint %" PRId64 "", pStream, pStream->name, pStream->uid, + mDebug("stream:%p, %s id:0x%" PRIx64 " checkpoint %" PRId64 "", pStream, pStream->name, pStream->uid, pStream->checkpointId); sdbRelease(pSdb, pStream); } diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c index 3d5f5bd64c..0e6b85bd2b 100644 --- a/source/dnode/vnode/src/tq/tq.c +++ b/source/dnode/vnode/src/tq/tq.c @@ -1156,14 +1156,24 @@ int32_t tqProcessTaskCheckPointSourceReq(STQ* pTq, SRpcMsg* pMsg, SRpcMsg* pRsp) // check if the checkpoint msg already sent or not. 
if (status == TASK_STATUS__CK) { - tqWarn("s-task:%s recv checkpoint-source msg again checkpointId:%" PRId64 - " transId:%d already received, ignore this msg and continue process checkpoint", + tqWarn("s-task:%s repeatedly recv checkpoint-source msg checkpointId:%" PRId64 + " transId:%d already handled, ignore msg and continue processing checkpoint", pTask->id.idStr, pTask->chkInfo.checkpointingId, req.transId); taosThreadMutexUnlock(&pTask->lock); streamMetaReleaseTask(pMeta, pTask); return TSDB_CODE_SUCCESS; + } else { // checkpoint already finished, and not in checkpoint status + if (req.checkpointId <= pTask->chkInfo.checkpointId) { + tqWarn("s-task:%s repeatedly recv checkpoint-source msg checkpointId:%" PRId64 + " transId:%d already handled, ignore and discard", pTask->id.idStr, req.checkpointId, req.transId); + + taosThreadMutexUnlock(&pTask->lock); + streamMetaReleaseTask(pMeta, pTask); + + return TSDB_CODE_SUCCESS; + } } streamProcessCheckpointSourceReq(pTask, &req); diff --git a/source/dnode/vnode/src/tq/tqSink.c b/source/dnode/vnode/src/tq/tqSink.c index f690b9b277..25c2f32307 100644 --- a/source/dnode/vnode/src/tq/tqSink.c +++ b/source/dnode/vnode/src/tq/tqSink.c @@ -485,6 +485,7 @@ SVCreateTbReq* buildAutoCreateTableReq(const char* stbFullName, int64_t suid, in int32_t doPutIntoCache(SSHashObj* pSinkTableMap, STableSinkInfo* pTableSinkInfo, uint64_t groupId, const char* id) { if (tSimpleHashGetSize(pSinkTableMap) > MAX_CACHE_TABLE_INFO_NUM) { + taosMemoryFreeClear(pTableSinkInfo); // too many items, failed to cache it return TSDB_CODE_FAILED; } diff --git a/source/dnode/vnode/src/tqCommon/tqCommon.c b/source/dnode/vnode/src/tqCommon/tqCommon.c index 8b2e9693eb..04c0c0d204 100644 --- a/source/dnode/vnode/src/tqCommon/tqCommon.c +++ b/source/dnode/vnode/src/tqCommon/tqCommon.c @@ -195,13 +195,22 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM const char* idstr = pTask->id.idStr; if (pMeta->updateInfo.transId != req.transId) { - ASSERT(req.transId > pMeta->updateInfo.transId); - tqInfo("s-task:%s vgId:%d receive new trans to update nodeEp msg from mnode, transId:%d, prev transId:%d", idstr, - vgId, req.transId, pMeta->updateInfo.transId); + if (req.transId < pMeta->updateInfo.transId) { + tqError("s-task:%s vgId:%d disorder update nodeEp msg recv, discarded, newest transId:%d, recv:%d", idstr, vgId, + pMeta->updateInfo.transId, req.transId); + rsp.code = TSDB_CODE_SUCCESS; + streamMetaWUnLock(pMeta); - // info needs to be kept till the new trans to update the nodeEp arrived. - taosHashClear(pMeta->updateInfo.pTasks); - pMeta->updateInfo.transId = req.transId; + taosArrayDestroy(req.pNodeList); + return rsp.code; + } else { + tqInfo("s-task:%s vgId:%d receive new trans to update nodeEp msg from mnode, transId:%d, prev transId:%d", idstr, + vgId, req.transId, pMeta->updateInfo.transId); + + // info needs to be kept till the new trans to update the nodeEp arrived.
+ taosHashClear(pMeta->updateInfo.pTasks); + pMeta->updateInfo.transId = req.transId; + } } else { tqDebug("s-task:%s vgId:%d recv trans to update nodeEp from mnode, transId:%d", idstr, vgId, req.transId); } @@ -211,7 +220,7 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM void* pReqTask = taosHashGet(pMeta->updateInfo.pTasks, &entry, sizeof(STaskUpdateEntry)); if (pReqTask != NULL) { - tqDebug("s-task:%s (vgId:%d) already update in trans:%d, discard the nodeEp update msg", idstr, vgId, req.transId); + tqDebug("s-task:%s (vgId:%d) already update in transId:%d, discard the nodeEp update msg", idstr, vgId, req.transId); rsp.code = TSDB_CODE_SUCCESS; streamMetaWUnLock(pMeta); taosArrayDestroy(req.pNodeList); @@ -235,7 +244,7 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM } else { tqDebug("s-task:%s fill-history task update nodeEp along with stream task", (*ppHTask)->id.idStr); bool updateEpSet = streamTaskUpdateEpsetInfo(*ppHTask, req.pNodeList); - if (!updated) { + if (updateEpSet) { updated = updateEpSet; } @@ -245,14 +254,15 @@ int32_t tqStreamTaskProcessUpdateReq(SStreamMeta* pMeta, SMsgCb* cb, SRpcMsg* pM } if (updated) { - tqDebug("s-task:%s vgId:%d save task after update epset", idstr, vgId); + tqDebug("s-task:%s vgId:%d save task after update epset, and stop task", idstr, vgId); streamMetaSaveTask(pMeta, pTask); if (ppHTask != NULL) { streamMetaSaveTask(pMeta, *ppHTask); } + } else { + tqDebug("s-task:%s vgId:%d not save task since not update epset actually, stop task", idstr, vgId); } - tqDebug("s-task:%s vgId:%d start to stop task after save task", idstr, vgId); streamTaskStop(pTask); // keep the already updated info diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 42b8365130..52aabd4061 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -64,13 +64,15 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p col_id_t colId = -1; SArray* funcTypeBlockArray = taosArrayInit(pReader->numOfCols, sizeof(int32_t)); + for (int32_t i = 0; i < pReader->numOfCols; ++i) { SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, dstSlotIds[i]); int32_t funcType = FUNCTION_TYPE_CACHE_LAST; + if (pReader->pFuncTypeList != NULL && taosArrayGetSize(pReader->pFuncTypeList) > i) { funcType = *(int32_t*)taosArrayGet(pReader->pFuncTypeList, i); + taosArrayInsert(funcTypeBlockArray, dstSlotIds[i], taosArrayGet(pReader->pFuncTypeList, i)); } - taosArrayInsert(funcTypeBlockArray, dstSlotIds[i], taosArrayGet(pReader->pFuncTypeList, i)); if (slotIds[i] == -1) { if (FUNCTION_TYPE_CACHE_LAST_ROW == funcType) { @@ -367,11 +369,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 goto _end; } - int32_t pkBufLen = 0; - if (pr->rowKey.numOfPKs > 0) { - pkBufLen = pr->pkColumn.bytes; - } - + int32_t pkBufLen = (pr->rowKey.numOfPKs > 0)? pr->pkColumn.bytes:0; for (int32_t j = 0; j < pr->numOfCols; ++j) { int32_t bytes = (slotIds[j] == -1) ? 
1 : pr->pSchema->columns[slotIds[j]].bytes; diff --git a/source/dnode/vnode/src/tsdb/tsdbFS2.c b/source/dnode/vnode/src/tsdb/tsdbFS2.c index 6195c37ab2..7741a1b096 100644 --- a/source/dnode/vnode/src/tsdb/tsdbFS2.c +++ b/source/dnode/vnode/src/tsdb/tsdbFS2.c @@ -897,6 +897,7 @@ int32_t tsdbFSEditCommit(STFileSystem *fs) { // commit code = commit_edit(fs); + ASSERT(code == 0); TSDB_CHECK_CODE(code, lino, _exit); // schedule merge @@ -973,11 +974,11 @@ int32_t tsdbFSEditCommit(STFileSystem *fs) { _exit: if (code) { - TSDB_ERROR_LOG(TD_VID(fs->tsdb->pVnode), lino, code); + tsdbError("vgId:%d %s failed at line %d since %s", TD_VID(fs->tsdb->pVnode), __func__, lino, tstrerror(code)); } else { - tsdbDebug("vgId:%d %s done, etype:%d", TD_VID(fs->tsdb->pVnode), __func__, fs->etype); - tsem_post(&fs->canEdit); + tsdbInfo("vgId:%d %s done, etype:%d", TD_VID(fs->tsdb->pVnode), __func__, fs->etype); } + tsem_post(&fs->canEdit); return code; } diff --git a/source/dnode/vnode/src/tsdb/tsdbMerge.c b/source/dnode/vnode/src/tsdb/tsdbMerge.c index 87f48acaac..971020e7d6 100644 --- a/source/dnode/vnode/src/tsdb/tsdbMerge.c +++ b/source/dnode/vnode/src/tsdb/tsdbMerge.c @@ -580,9 +580,9 @@ int32_t tsdbMerge(void *arg) { } */ // do merge - tsdbDebug("vgId:%d merge begin, fid:%d", TD_VID(tsdb->pVnode), merger->fid); + tsdbInfo("vgId:%d merge begin, fid:%d", TD_VID(tsdb->pVnode), merger->fid); code = tsdbDoMerge(merger); - tsdbDebug("vgId:%d merge done, fid:%d", TD_VID(tsdb->pVnode), mergeArg->fid); + tsdbInfo("vgId:%d merge done, fid:%d", TD_VID(tsdb->pVnode), mergeArg->fid); TSDB_CHECK_CODE(code, lino, _exit); _exit: diff --git a/source/dnode/vnode/src/tsdb/tsdbRead2.c b/source/dnode/vnode/src/tsdb/tsdbRead2.c index 93099a9aa7..cf95dd3f6f 100644 --- a/source/dnode/vnode/src/tsdb/tsdbRead2.c +++ b/source/dnode/vnode/src/tsdb/tsdbRead2.c @@ -1054,14 +1054,30 @@ static void copyNumericCols(const SColData* pData, SFileBlockDumpInfo* pDumpInfo } } -static void blockInfoToRecord(SBrinRecord* record, SFileDataBlockInfo* pBlockInfo) { +static void blockInfoToRecord(SBrinRecord* record, SFileDataBlockInfo* pBlockInfo, SBlockLoadSuppInfo* pSupp) { record->uid = pBlockInfo->uid; - record->firstKey = (STsdbRowKey){ - .key = {.ts = pBlockInfo->firstKey, .numOfPKs = 0}, - }; - record->lastKey = (STsdbRowKey){ - .key = {.ts = pBlockInfo->lastKey, .numOfPKs = 0}, - }; + record->firstKey = (STsdbRowKey){.key = {.ts = pBlockInfo->firstKey, .numOfPKs = pSupp->numOfPks}}; + record->lastKey = (STsdbRowKey){.key = {.ts = pBlockInfo->lastKey, .numOfPKs = pSupp->numOfPks}}; + + if (pSupp->numOfPks > 0) { + SValue* pFirst = &record->firstKey.key.pks[0]; + SValue* pLast = &record->lastKey.key.pks[0]; + + pFirst->type = pSupp->pk.type; + pLast->type = pSupp->pk.type; + + if (IS_VAR_DATA_TYPE(pFirst->type)) { + pFirst->pData = (uint8_t*) varDataVal(pBlockInfo->firstPk.pData); + pFirst->nData = varDataLen(pBlockInfo->firstPk.pData); + + pLast->pData = (uint8_t*) varDataVal(pBlockInfo->lastPk.pData); + pLast->nData = varDataLen(pBlockInfo->lastPk.pData); + } else { + pFirst->val = pBlockInfo->firstPk.val; + pLast->val = pBlockInfo->lastPk.val; + } + } + record->minVer = pBlockInfo->minVer; record->maxVer = pBlockInfo->maxVer; record->blockOffset = pBlockInfo->blockOffset; @@ -1091,7 +1107,7 @@ static int32_t copyBlockDataToSDataBlock(STsdbReader* pReader, SRowKey* pLastPro int32_t step = asc ? 
1 : -1; SBrinRecord tmp; - blockInfoToRecord(&tmp, pBlockInfo); + blockInfoToRecord(&tmp, pBlockInfo, pSupInfo); SBrinRecord* pRecord = &tmp; // no data exists, return directly. @@ -1290,7 +1306,7 @@ static int32_t doLoadFileBlockData(STsdbReader* pReader, SDataBlockIter* pBlockI SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo; SBrinRecord tmp; - blockInfoToRecord(&tmp, pBlockInfo); + blockInfoToRecord(&tmp, pBlockInfo, pSup); SBrinRecord* pRecord = &tmp; code = tsdbDataFileReadBlockDataByColumn(pReader->pFileReader, pRecord, pBlockData, pSchema, &pSup->colId[1], pSup->numOfCols - 1); @@ -1325,9 +1341,9 @@ static int32_t dataBlockPartiallyRequired(STimeWindow* pWindow, SVersionRange* p (pVerRange->maxVer < pBlock->maxVer && pVerRange->maxVer >= pBlock->minVer); } -static bool getNeighborBlockOfSameTable(SDataBlockIter* pBlockIter, SFileDataBlockInfo* pBlockInfo, - STableBlockScanInfo* pScanInfo, int32_t* nextIndex, int32_t order, - SBrinRecord* pRecord) { +static bool getNeighborBlockOfTable(SDataBlockIter* pBlockIter, SFileDataBlockInfo* pBlockInfo, + STableBlockScanInfo* pScanInfo, int32_t* nextIndex, int32_t order, + SBrinRecord* pRecord, SBlockLoadSuppInfo* pSupInfo) { bool asc = ASCENDING_TRAVERSE(order); int32_t step = asc ? 1 : -1; @@ -1341,7 +1357,7 @@ static bool getNeighborBlockOfSameTable(SDataBlockIter* pBlockIter, SFileDataBlo STableDataBlockIdx* pTableDataBlockIdx = taosArrayGet(pScanInfo->pBlockIdxList, pBlockInfo->tbBlockIdx + step); SFileDataBlockInfo* p = taosArrayGet(pBlockIter->blockList, pTableDataBlockIdx->globalIndex); - blockInfoToRecord(pRecord, p); + blockInfoToRecord(pRecord, p, pSupInfo); *nextIndex = pBlockInfo->tbBlockIdx + step; return true; @@ -1391,12 +1407,40 @@ static int32_t setFileBlockActiveInBlockIter(STsdbReader* pReader, SDataBlockIte } // todo: this attribute could be acquired during extractin the global ordered block list. -static bool overlapWithNeighborBlock2(SFileDataBlockInfo* pBlock, SBrinRecord* pRec, int32_t order) { +static bool overlapWithNeighborBlock2(SFileDataBlockInfo* pBlock, SBrinRecord* pRec, int32_t order, int32_t pkType, int32_t numOfPk) { // it is the last block in current file, no chance to overlap with neighbor blocks. 
if (ASCENDING_TRAVERSE(order)) { - return pBlock->lastKey == pRec->firstKey.key.ts; + if (pBlock->lastKey == pRec->firstKey.key.ts) { + if (numOfPk > 0) { + SValue v1 = {.type = pkType}; + if (IS_VAR_DATA_TYPE(pkType)) { + v1.pData = (uint8_t*)varDataVal(pBlock->lastPk.pData), v1.nData = varDataLen(pBlock->lastPk.pData); + } else { + v1.val = pBlock->lastPk.val; + } + return (tValueCompare(&v1, &pRec->firstKey.key.pks[0]) == 0); + } else { // no pk + return true; + } + } else { + return false; + } } else { - return pBlock->firstKey == pRec->lastKey.key.ts; + if (pBlock->firstKey == pRec->lastKey.key.ts) { + if (numOfPk > 0) { + SValue v1 = {.type = pkType}; + if (IS_VAR_DATA_TYPE(pkType)) { + v1.pData = (uint8_t*)varDataVal(pBlock->firstPk.pData), v1.nData = varDataLen(pBlock->firstPk.pData); + } else { + v1.val = pBlock->firstPk.val; + } + return (tValueCompare(&v1, &pRec->lastKey.key.pks[0]) == 0); + } else { // no pk + return true; + } + } else { + return false; + } } } @@ -1430,23 +1474,26 @@ static bool keyOverlapFileBlock(TSDBKEY key, SFileDataBlockInfo* pBlock, SVersio static void getBlockToLoadInfo(SDataBlockToLoadInfo* pInfo, SFileDataBlockInfo* pBlockInfo, STableBlockScanInfo* pScanInfo, TSDBKEY keyInBuf, STsdbReader* pReader) { - SBrinRecord rec = {0}; - int32_t neighborIndex = 0; + SBrinRecord rec = {0}; + int32_t neighborIndex = 0; + int32_t order = pReader->info.order; + SBlockLoadSuppInfo* pSupInfo = &pReader->suppInfo; - bool hasNeighbor = getNeighborBlockOfSameTable(&pReader->status.blockIter, pBlockInfo, pScanInfo, &neighborIndex, - pReader->info.order, &rec); + bool hasNeighbor = + getNeighborBlockOfTable(&pReader->status.blockIter, pBlockInfo, pScanInfo, &neighborIndex, order, &rec, pSupInfo); // overlap with neighbor if (hasNeighbor) { - pInfo->overlapWithNeighborBlock = overlapWithNeighborBlock2(pBlockInfo, &rec, pReader->info.order); + pInfo->overlapWithNeighborBlock = + overlapWithNeighborBlock2(pBlockInfo, &rec, order, pSupInfo->pk.type, pSupInfo->numOfPks); } SBrinRecord pRecord; - blockInfoToRecord(&pRecord, pBlockInfo); + blockInfoToRecord(&pRecord, pBlockInfo, pSupInfo); // has duplicated ts of different version in this block pInfo->hasDupTs = (pBlockInfo->numRow > pBlockInfo->count) || (pBlockInfo->count <= 0); - pInfo->overlapWithDelInfo = overlapWithDelSkyline(pScanInfo, &pRecord, pReader->info.order); + pInfo->overlapWithDelInfo = overlapWithDelSkyline(pScanInfo, &pRecord, order); // todo handle the primary key overlap case ASSERT(pScanInfo->sttKeyInfo.status != STT_FILE_READER_UNINIT); @@ -2380,28 +2427,30 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI static int32_t loadNeighborIfOverlap(SFileDataBlockInfo* pBlockInfo, STableBlockScanInfo* pBlockScanInfo, STsdbReader* pReader, bool* loadNeighbor) { - int32_t code = TSDB_CODE_SUCCESS; - int32_t step = ASCENDING_TRAVERSE(pReader->info.order) ? 1 : -1; - int32_t nextIndex = -1; + int32_t code = TSDB_CODE_SUCCESS; + int32_t order = pReader->info.order; + SDataBlockIter* pIter = &pReader->status.blockIter; + SBlockLoadSuppInfo* pSupInfo = &pReader->suppInfo; + int32_t step = ASCENDING_TRAVERSE(order) ? 
1 : -1; + int32_t nextIndex = -1; + SBrinRecord rec = {0}; *loadNeighbor = false; - - SBrinRecord rec = {0}; - bool hasNeighbor = getNeighborBlockOfSameTable(&pReader->status.blockIter, pBlockInfo, pBlockScanInfo, &nextIndex, - pReader->info.order, &rec); + bool hasNeighbor = getNeighborBlockOfTable(pIter, pBlockInfo, pBlockScanInfo, &nextIndex, order, &rec, pSupInfo); if (!hasNeighbor) { // do nothing return code; } - if (overlapWithNeighborBlock2(pBlockInfo, &rec, pReader->info.order)) { // load next block + // load next block + if (overlapWithNeighborBlock2(pBlockInfo, &rec, order, pReader->suppInfo.pk.type, pReader->suppInfo.numOfPks)) { SReaderStatus* pStatus = &pReader->status; SDataBlockIter* pBlockIter = &pStatus->blockIter; // 1. find the next neighbor block in the scan block list STableDataBlockIdx* tableDataBlockIdx = taosArrayGet(pBlockScanInfo->pBlockIdxList, nextIndex); - int32_t neighborIndex = tableDataBlockIdx->globalIndex; // 2. remove it from the scan block list + int32_t neighborIndex = tableDataBlockIdx->globalIndex; setFileBlockActiveInBlockIter(pReader, pBlockIter, neighborIndex, step); // 3. load the neighbor block, and set it to be the currently accessed file data block @@ -4836,11 +4885,11 @@ int32_t tsdbRetrieveDatablockSMA2(STsdbReader* pReader, SSDataBlock* pDataBlock, return TSDB_CODE_SUCCESS; } - SFileDataBlockInfo* pFBlock = getCurrentBlockInfo(&pReader->status.blockIter); + SFileDataBlockInfo* pBlockInfo = getCurrentBlockInfo(&pReader->status.blockIter); SBlockLoadSuppInfo* pSup = &pReader->suppInfo; SSDataBlock* pResBlock = pReader->resBlockInfo.pResBlock; - if (pResBlock->info.id.uid != pFBlock->uid) { + if (pResBlock->info.id.uid != pBlockInfo->uid) { return TSDB_CODE_SUCCESS; } @@ -4848,10 +4897,10 @@ int32_t tsdbRetrieveDatablockSMA2(STsdbReader* pReader, SSDataBlock* pDataBlock, TARRAY2_CLEAR(&pSup->colAggArray, 0); SBrinRecord pRecord; - blockInfoToRecord(&pRecord, pFBlock); + blockInfoToRecord(&pRecord, pBlockInfo, pSup); code = tsdbDataFileReadBlockSma(pReader->pFileReader, &pRecord, &pSup->colAggArray); if (code != TSDB_CODE_SUCCESS) { - tsdbDebug("vgId:%d, failed to load block SMA for uid %" PRIu64 ", code:%s, %s", 0, pFBlock->uid, tstrerror(code), + tsdbDebug("vgId:%d, failed to load block SMA for uid %" PRIu64 ", code:%s, %s", 0, pBlockInfo->uid, tstrerror(code), pReader->idStr); return code; } @@ -4880,7 +4929,7 @@ int32_t tsdbRetrieveDatablockSMA2(STsdbReader* pReader, SSDataBlock* pDataBlock, } // do fill all null column value SMA info - doFillNullColSMA(pSup, pFBlock->numRow, numOfCols, pTsAgg); + doFillNullColSMA(pSup, pBlockInfo->numRow, numOfCols, pTsAgg); size_t size = pSup->colAggArray.size; @@ -4906,7 +4955,7 @@ int32_t tsdbRetrieveDatablockSMA2(STsdbReader* pReader, SSDataBlock* pDataBlock, // double elapsedTime = (taosGetTimestampUs() - st) / 1000.0; pReader->cost.smaLoadTime += 0; // elapsedTime; - tsdbDebug("vgId:%d, succeed to load block SMA for uid %" PRIu64 ", %s", 0, pFBlock->uid, pReader->idStr); + tsdbDebug("vgId:%d, succeed to load block SMA for uid %" PRIu64 ", %s", 0, pBlockInfo->uid, pReader->idStr); return code; } diff --git a/source/dnode/vnode/src/tsdb/tsdbReaderWriter.c b/source/dnode/vnode/src/tsdb/tsdbReaderWriter.c index 136bf3bfbe..932bf2d92c 100644 --- a/source/dnode/vnode/src/tsdb/tsdbReaderWriter.c +++ b/source/dnode/vnode/src/tsdb/tsdbReaderWriter.c @@ -14,8 +14,8 @@ */ #include "cos.h" -#include "tsdb.h" #include "crypt.h" +#include "tsdb.h" #include "vnd.h" static int32_t tsdbOpenFileImpl(STsdbFD *pFD) { @@ 
-61,6 +61,7 @@ static int32_t tsdbOpenFileImpl(STsdbFD *pFD) { // taosMemoryFree(pFD); goto _exit; } + pFD->s3File = 1; /* const char *object_name = taosDirEntryBaseName((char *)path); long s3_size = 0; @@ -86,7 +87,6 @@ static int32_t tsdbOpenFileImpl(STsdbFD *pFD) { goto _exit; } #else - pFD->s3File = 1; pFD->pFD = (TdFilePtr)&pFD->s3File; int32_t vid = 0; sscanf(object_name, "v%df%dver%" PRId64 ".data", &vid, &pFD->fid, &pFD->cid); @@ -170,7 +170,7 @@ void tsdbCloseFile(STsdbFD **ppFD) { } } -static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* encryptKey) { +static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char *encryptKey) { int32_t code = 0; if (!pFD->pFD) { @@ -182,7 +182,7 @@ static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* e if (pFD->pgno > 0) { int64_t offset = PAGE_OFFSET(pFD->pgno, pFD->szPage); - if (pFD->lcn > 1) { + if (pFD->s3File && pFD->lcn > 1) { SVnodeCfg *pCfg = &pFD->pTsdb->pVnode->config; int64_t chunksize = (int64_t)pCfg->tsdbPageSize * pCfg->s3ChunkSize; int64_t chunkoffset = chunksize * (pFD->lcn - 1); @@ -198,27 +198,27 @@ static int32_t tsdbWriteFilePage(STsdbFD *pFD, int32_t encryptAlgorithm, char* e } taosCalcChecksumAppend(0, pFD->pBuf, pFD->szPage); - - if(encryptAlgorithm == DND_CA_SM4){ - //if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){ - unsigned char PacketData[128]; - int NewLen; - int32_t count = 0; + + if (encryptAlgorithm == DND_CA_SM4) { + // if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){ + unsigned char PacketData[128]; + int NewLen; + int32_t count = 0; while (count < pFD->szPage) { SCryptOpts opts = {0}; opts.len = 128; opts.source = pFD->pBuf + count; opts.result = PacketData; opts.unitLen = 128; - //strncpy(opts.key, tsEncryptKey, 16); + // strncpy(opts.key, tsEncryptKey, 16); strncpy(opts.key, encryptKey, ENCRYPT_KEY_LEN); NewLen = CBC_Encrypt(&opts); memcpy(pFD->pBuf + count, PacketData, NewLen); - count += NewLen; + count += NewLen; } - //tsdbDebug("CBC_Encrypt count:%d %s", count, __FUNCTION__); + // tsdbDebug("CBC_Encrypt count:%d %s", count, __FUNCTION__); } n = taosWriteFile(pFD->pFD, pFD->pBuf, pFD->szPage); @@ -237,7 +237,7 @@ _exit: return code; } -static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgorithm, char* encryptKey) { +static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgorithm, char *encryptKey) { int32_t code = 0; // ASSERT(pgno <= pFD->szFile); @@ -297,20 +297,19 @@ static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgor } //} - if(encryptAlgorithm == DND_CA_SM4){ - //if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){ - unsigned char PacketData[128]; - int NewLen; + if (encryptAlgorithm == DND_CA_SM4) { + // if(tsiEncryptAlgorithm == DND_CA_SM4 && (tsiEncryptScope & DND_CS_TSDB) == DND_CS_TSDB){ + unsigned char PacketData[128]; + int NewLen; int32_t count = 0; - while(count < pFD->szPage) - { + while (count < pFD->szPage) { SCryptOpts opts = {0}; opts.len = 128; opts.source = pFD->pBuf + count; opts.result = PacketData; opts.unitLen = 128; - //strncpy(opts.key, tsEncryptKey, 16); + // strncpy(opts.key, tsEncryptKey, 16); strncpy(opts.key, encryptKey, ENCRYPT_KEY_LEN); NewLen = CBC_Decrypt(&opts); @@ -318,7 +317,7 @@ static int32_t tsdbReadFilePage(STsdbFD *pFD, int64_t pgno, int32_t encryptAlgor memcpy(pFD->pBuf + count, PacketData, NewLen); count += 
NewLen; } - //tsdbDebug("CBC_Decrypt count:%d %s", count, __FUNCTION__); + // tsdbDebug("CBC_Decrypt count:%d %s", count, __FUNCTION__); } // check @@ -333,8 +332,8 @@ _exit: return code; } -int32_t tsdbWriteFile(STsdbFD *pFD, int64_t offset, const uint8_t *pBuf, int64_t size, int32_t encryptAlgorithm, - char* encryptKey) { +int32_t tsdbWriteFile(STsdbFD *pFD, int64_t offset, const uint8_t *pBuf, int64_t size, int32_t encryptAlgorithm, + char *encryptKey) { int32_t code = 0; int64_t fOffset = LOGIC_TO_FILE_OFFSET(offset, pFD->szPage); int64_t pgno = OFFSET_PGNO(fOffset, pFD->szPage); @@ -366,8 +365,8 @@ _exit: return code; } -static int32_t tsdbReadFileImp(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size, int32_t encryptAlgorithm, - char* encryptKey) { +static int32_t tsdbReadFileImp(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size, int32_t encryptAlgorithm, + char *encryptKey) { int32_t code = 0; int64_t n = 0; int64_t fOffset = LOGIC_TO_FILE_OFFSET(offset, pFD->szPage); @@ -572,8 +571,8 @@ _exit: return code; } -int32_t tsdbReadFile(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size, int64_t szHint, - int32_t encryptAlgorithm, char* encryptKey) { +int32_t tsdbReadFile(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size, int64_t szHint, + int32_t encryptAlgorithm, char *encryptKey) { int32_t code = 0; if (!pFD->pFD) { code = tsdbOpenFileImpl(pFD); @@ -582,7 +581,7 @@ int32_t tsdbReadFile(STsdbFD *pFD, int64_t offset, uint8_t *pBuf, int64_t size, } } - if (pFD->lcn > 1 /*pFD->s3File && tsS3BlockSize < 0*/) { + if (pFD->s3File && pFD->lcn > 1 /* && tsS3BlockSize < 0*/) { return tsdbReadFileS3(pFD, offset, pBuf, size, szHint); } else { return tsdbReadFileImp(pFD, offset, pBuf, size, encryptAlgorithm, encryptKey); @@ -593,20 +592,19 @@ _exit: } int32_t tsdbReadFileToBuffer(STsdbFD *pFD, int64_t offset, int64_t size, SBuffer *buffer, int64_t szHint, - int32_t encryptAlgorithm, char* encryptKey) { + int32_t encryptAlgorithm, char *encryptKey) { int32_t code; code = tBufferEnsureCapacity(buffer, buffer->size + size); if (code) return code; - code = tsdbReadFile(pFD, offset, (uint8_t *)tBufferGetDataEnd(buffer), size, szHint, - encryptAlgorithm, encryptKey); + code = tsdbReadFile(pFD, offset, (uint8_t *)tBufferGetDataEnd(buffer), size, szHint, encryptAlgorithm, encryptKey); if (code) return code; buffer->size += size; return code; } -int32_t tsdbFsyncFile(STsdbFD *pFD, int32_t encryptAlgorithm, char* encryptKey) { +int32_t tsdbFsyncFile(STsdbFD *pFD, int32_t encryptAlgorithm, char *encryptKey) { int32_t code = 0; /* if (pFD->s3File) { @@ -726,7 +724,7 @@ int32_t tsdbReadBlockIdx(SDataFReader *pReader, SArray *aBlockIdx) { // read int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm; - char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; + char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; code = tsdbReadFile(pReader->pHeadFD, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey); if (code) goto _err; @@ -765,7 +763,7 @@ int32_t tsdbReadSttBlk(SDataFReader *pReader, int32_t iStt, SArray *aSttBlk) { // read int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm; - char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; + char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; code = tsdbReadFile(pReader->aSttFD[iStt], offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey); if (code) goto _err; @@ -800,7 +798,7 @@ int32_t 
tsdbReadDataBlk(SDataFReader *pReader, SBlockIdx *pBlockIdx, SMapData *m // read int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm; - char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; + char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; code = tsdbReadFile(pReader->pHeadFD, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey); if (code) goto _err; @@ -895,7 +893,7 @@ int32_t tsdbReadDelDatav1(SDelFReader *pReader, SDelIdx *pDelIdx, SArray *aDelDa // read int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm; - char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; + char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; code = tsdbReadFile(pReader->pReadH, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey); if (code) goto _err; @@ -937,7 +935,7 @@ int32_t tsdbReadDelIdx(SDelFReader *pReader, SArray *aDelIdx) { // read int32_t encryptAlgorithm = pReader->pTsdb->pVnode->config.tsdbCfg.encryptAlgorithm; - char* encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; + char *encryptKey = pReader->pTsdb->pVnode->config.tsdbCfg.encryptKey; code = tsdbReadFile(pReader->pReadH, offset, pReader->aBuf[0], size, 0, encryptAlgorithm, encryptKey); if (code) goto _err; diff --git a/source/dnode/vnode/src/tsdb/tsdbRetention.c b/source/dnode/vnode/src/tsdb/tsdbRetention.c index 45c63cb4f1..5a138d2bed 100644 --- a/source/dnode/vnode/src/tsdb/tsdbRetention.c +++ b/source/dnode/vnode/src/tsdb/tsdbRetention.c @@ -528,7 +528,7 @@ static int32_t tsdbMigrateDataFileLCS3(SRTNer *rtner, const STFileObj *fobj, int if (fdFrom == NULL) code = terrno; TSDB_CHECK_CODE(code, lino, _exit); - tsdbInfo("vgId: %d, open lcfile: %s size: %" PRId64, TD_VID(rtner->tsdb->pVnode), fname, lc_size); + tsdbInfo("vgId:%d, open lcfile: %s size: %" PRId64, TD_VID(rtner->tsdb->pVnode), fname, lc_size); snprintf(dot2 + 1, TSDB_FQDN_LEN - (dot2 + 1 - object_name), "%d.data", lcn); fdTo = taosOpenFile(fname, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC); @@ -671,7 +671,7 @@ static int32_t tsdbDoS3MigrateOnFileSet(SRTNer *rtner, STFileSet *fset) { int64_t chunksize = (int64_t)pCfg->tsdbPageSize * pCfg->s3ChunkSize; int32_t lcn = fobj->f->lcn; - if (lcn < 1 && taosCheckExistFile(fobj->fname)) { + if (/*lcn < 1 && */ taosCheckExistFile(fobj->fname)) { int32_t mtime = 0; int64_t size = 0; taosStatFile(fobj->fname, &size, &mtime, NULL); diff --git a/source/libs/command/src/command.c b/source/libs/command/src/command.c index 2902a19f06..a83b493589 100644 --- a/source/libs/command/src/command.c +++ b/source/libs/command/src/command.c @@ -967,46 +967,11 @@ static int32_t buildLocalVariablesResultDataBlock(SSDataBlock** pOutput) { return TSDB_CODE_SUCCESS; } -int32_t setLocalVariablesResultIntoDataBlock(SSDataBlock* pBlock) { - int32_t numOfCfg = taosArrayGetSize(tsCfg->array); - int32_t numOfRows = 0; - blockDataEnsureCapacity(pBlock, numOfCfg); - - for (int32_t i = 0, c = 0; i < numOfCfg; ++i, c = 0) { - SConfigItem* pItem = taosArrayGet(tsCfg->array, i); - // GRANT_CFG_SKIP; - - char name[TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE] = {0}; - STR_WITH_MAXSIZE_TO_VARSTR(name, pItem->name, TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE); - SColumnInfoData* pColInfo = taosArrayGet(pBlock->pDataBlock, c++); - colDataSetVal(pColInfo, i, name, false); - - char value[TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE] = {0}; - int32_t valueLen = 0; - cfgDumpItemValue(pItem, 
&value[VARSTR_HEADER_SIZE], TSDB_CONFIG_VALUE_LEN, &valueLen); - varDataSetLen(value, valueLen); - pColInfo = taosArrayGet(pBlock->pDataBlock, c++); - colDataSetVal(pColInfo, i, value, false); - - char scope[TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE] = {0}; - cfgDumpItemScope(pItem, &scope[VARSTR_HEADER_SIZE], TSDB_CONFIG_SCOPE_LEN, &valueLen); - varDataSetLen(scope, valueLen); - pColInfo = taosArrayGet(pBlock->pDataBlock, c++); - colDataSetVal(pColInfo, i, scope, false); - - numOfRows++; - } - - pBlock->info.rows = numOfRows; - - return TSDB_CODE_SUCCESS; -} - static int32_t execShowLocalVariables(SRetrieveTableRsp** pRsp) { SSDataBlock* pBlock = NULL; int32_t code = buildLocalVariablesResultDataBlock(&pBlock); if (TSDB_CODE_SUCCESS == code) { - code = setLocalVariablesResultIntoDataBlock(pBlock); + code = dumpConfToDataBlock(pBlock, 0); } if (TSDB_CODE_SUCCESS == code) { code = buildRetrieveTableRsp(pBlock, SHOW_LOCAL_VARIABLES_RESULT_COLS, pRsp); diff --git a/source/libs/executor/src/aggregateoperator.c b/source/libs/executor/src/aggregateoperator.c index 2429fcff79..d0e1449188 100644 --- a/source/libs/executor/src/aggregateoperator.c +++ b/source/libs/executor/src/aggregateoperator.c @@ -278,7 +278,7 @@ int32_t doAggregateImpl(SOperatorInfo* pOperator, SqlFunctionCtx* pCtx) { int32_t code = TSDB_CODE_SUCCESS; for (int32_t k = 0; k < pOperator->exprSupp.numOfExprs; ++k) { if (functionNeedToExecute(&pCtx[k])) { - // todo add a dummy funtion to avoid process check + // todo add a dummy function to avoid process check if (pCtx[k].fpSet.process == NULL) { continue; } diff --git a/source/libs/executor/src/cachescanoperator.c b/source/libs/executor/src/cachescanoperator.c index 985cdb9433..0d0870911e 100644 --- a/source/libs/executor/src/cachescanoperator.c +++ b/source/libs/executor/src/cachescanoperator.c @@ -221,6 +221,7 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; STableListInfo* pTableList = pInfo->pTableList; SStoreCacheReader* pReaderFn = &pInfo->readHandle.api.cacheFn; + SSDataBlock* pBufRes = pInfo->pBufferedRes; uint64_t suid = tableListGetSuid(pTableList); int32_t size = tableListGetSize(pTableList); @@ -237,18 +238,18 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { T_LONG_JMP(pTaskInfo->env, pTaskInfo->code); } - if (pInfo->indexOfBufferedRes >= pInfo->pBufferedRes->info.rows) { - blockDataCleanup(pInfo->pBufferedRes); + if (pInfo->indexOfBufferedRes >= pBufRes->info.rows) { + blockDataCleanup(pBufRes); taosArrayClear(pInfo->pUidList); - int32_t code = pReaderFn->retrieveRows(pInfo->pLastrowReader, pInfo->pBufferedRes, pInfo->pSlotIds, - pInfo->pDstSlotIds, pInfo->pUidList); + int32_t code = + pReaderFn->retrieveRows(pInfo->pLastrowReader, pBufRes, pInfo->pSlotIds, pInfo->pDstSlotIds, pInfo->pUidList); if (code != TSDB_CODE_SUCCESS) { T_LONG_JMP(pTaskInfo->env, code); } // check for tag values - int32_t resultRows = pInfo->pBufferedRes->info.rows; + int32_t resultRows = pBufRes->info.rows; // the results may be null, if last values are all null ASSERT(resultRows == 0 || resultRows == taosArrayGetSize(pInfo->pUidList)); @@ -257,12 +258,12 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { SSDataBlock* pRes = pInfo->pRes; - if (pInfo->indexOfBufferedRes < pInfo->pBufferedRes->info.rows) { - for (int32_t i = 0; i < taosArrayGetSize(pInfo->pBufferedRes->pDataBlock); ++i) { + if (pInfo->indexOfBufferedRes < pBufRes->info.rows) { + for (int32_t i = 0; i < taosArrayGetSize(pBufRes->pDataBlock); ++i) { 
SColumnInfoData* pCol = taosArrayGet(pRes->pDataBlock, i); int32_t slotId = pCol->info.slotId; - SColumnInfoData* pSrc = taosArrayGet(pInfo->pBufferedRes->pDataBlock, slotId); + SColumnInfoData* pSrc = taosArrayGet(pBufRes->pDataBlock, slotId); SColumnInfoData* pDst = taosArrayGet(pRes->pDataBlock, slotId); if (colDataIsNull_s(pSrc, pInfo->indexOfBufferedRes)) { diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index 56f2ccd630..3fb298e1ea 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -2837,7 +2837,7 @@ static int32_t firstLastTransferInfoImpl(SFirstLastRes* pInput, SFirstLastRes* p memcpy(pOutput->buf, pInput->buf, pOutput->bytes); if (pInput->pkData) { pOutput->pkBytes = pInput->pkBytes; - memcpy(pOutput->buf+pOutput->bytes, pInput->pkData, pOutput->pkBytes); + memcpy(pOutput->buf + pOutput->bytes, pInput->pkData, pOutput->pkBytes); pOutput->pkData = pOutput->buf + pOutput->bytes; } return TSDB_CODE_SUCCESS; @@ -2885,7 +2885,8 @@ static int32_t firstLastFunctionMergeImpl(SqlFunctionCtx* pCtx, bool isFirstQuer } else { pInputInfo->pkData = NULL; } - int32_t code = firstLastTransferInfo(pCtx, pInputInfo, pInfo, isFirstQuery, i); + + int32_t code = firstLastTransferInfo(pCtx, pInputInfo, pInfo, isFirstQuery, i); if (code != TSDB_CODE_SUCCESS) { return code; } diff --git a/source/libs/nodes/src/nodesUtilFuncs.c b/source/libs/nodes/src/nodesUtilFuncs.c index 76bf7b04fd..adb011e3ec 100644 --- a/source/libs/nodes/src/nodesUtilFuncs.c +++ b/source/libs/nodes/src/nodesUtilFuncs.c @@ -2343,7 +2343,7 @@ static EDealRes collectFuncs(SNode* pNode, void* pContext) { return DEAL_RES_CONTINUE; } } - SExprNode* pExpr = (SExprNode*)pNode; + bool bFound = false; SNode* pn = NULL; FOREACH(pn, pCxt->pFuncs) { diff --git a/source/libs/planner/src/planLogicCreater.c b/source/libs/planner/src/planLogicCreater.c index 2e3e8f189b..60bce622be 100644 --- a/source/libs/planner/src/planLogicCreater.c +++ b/source/libs/planner/src/planLogicCreater.c @@ -19,6 +19,9 @@ #include "tglobal.h" #include "parser.h" +// primary key column always the second column if exists +#define PRIMARY_COLUMN_SLOT 1 + typedef struct SLogicPlanContext { SPlanContext* pPlanCxt; SLogicNode* pCurrRoot; @@ -304,7 +307,7 @@ static SNode* createFirstCol(SRealTableNode* pTable, const SSchema* pSchema) { return (SNode*)pCol; } -static int32_t addPrimaryKeyCol(SRealTableNode* pTable, SNodeList** pCols) { +static int32_t addPrimaryTsCol(SRealTableNode* pTable, SNodeList** pCols) { bool found = false; SNode* pCol = NULL; FOREACH(pCol, *pCols) { @@ -327,10 +330,10 @@ static int32_t addSystableFirstCol(SRealTableNode* pTable, SNodeList** pCols) { return nodesListMakeStrictAppend(pCols, createFirstCol(pTable, pTable->pMeta->schema)); } -static int32_t addPkCol(SRealTableNode* pTable, SNodeList** pCols) { +static int32_t addPrimaryKeyCol(SRealTableNode* pTable, SNodeList** pCols) { bool found = false; SNode* pCol = NULL; - SSchema* pSchema = &pTable->pMeta->schema[1]; + SSchema* pSchema = &pTable->pMeta->schema[PRIMARY_COLUMN_SLOT]; FOREACH(pCol, *pCols) { if (pSchema->colId == ((SColumnNode*)pCol)->colId) { found = true; @@ -348,9 +351,9 @@ static int32_t addDefaultScanCol(SRealTableNode* pTable, SNodeList** pCols) { if (TSDB_SYSTEM_TABLE == pTable->pMeta->tableType) { return addSystableFirstCol(pTable, pCols); } - int32_t code = addPrimaryKeyCol(pTable, pCols); + int32_t code = addPrimaryTsCol(pTable, pCols); if (code == TSDB_CODE_SUCCESS && 
hasPkInTable(pTable->pMeta)) { - code = addPkCol(pTable, pCols); + code = addPrimaryKeyCol(pTable, pCols); } return code; } @@ -1802,7 +1805,7 @@ static int32_t createDeleteScanLogicNode(SLogicPlanContext* pCxt, SDeleteStmt* p STableMeta* pMeta = ((SRealTableNode*)pDelete->pFromTable)->pMeta; if (TSDB_CODE_SUCCESS == code && hasPkInTable(pMeta)) { - code = addPkCol((SRealTableNode*)pDelete->pFromTable, &pScan->pScanCols); + code = addPrimaryKeyCol((SRealTableNode*)pDelete->pFromTable, &pScan->pScanCols); } if (TSDB_CODE_SUCCESS == code && NULL != pDelete->pTagCond) { diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index da39228a62..eee0766589 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -3966,21 +3966,25 @@ static int32_t lastRowScanOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLogic if (NULL != cxt.pLastCols) { cxt.doAgg = false; cxt.funcType = FUNCTION_TYPE_CACHE_LAST; + lastRowScanOptSetLastTargets(pScan->pScanCols, cxt.pLastCols, pLastRowCols, true, cxt.pkBytes); nodesWalkExprs(pScan->pScanPseudoCols, lastRowScanOptSetColDataType, &cxt); + lastRowScanOptSetLastTargets(pScan->node.pTargets, cxt.pLastCols, pLastRowCols, false, cxt.pkBytes); lastRowScanOptRemoveUslessTargets(pScan->node.pTargets, cxt.pLastCols, cxt.pOtherCols, pLastRowCols); - if (pPKTsCol && pScan->node.pTargets->length == 1) { + if (pPKTsCol && ((pScan->node.pTargets->length == 1) || (pScan->node.pTargets->length == 2 && cxt.pkBytes > 0))) { // when select last(ts),ts from ..., we add another ts to targets sprintf(pPKTsCol->colName, "#sel_val.%p", pPKTsCol); nodesListAppend(pScan->node.pTargets, nodesCloneNode((SNode*)pPKTsCol)); } + if (pNonPKCol && cxt.pLastCols->length == 1 && nodesEqualNode((SNode*)pNonPKCol, nodesListGetNode(cxt.pLastCols, 0))) { // when select last(c1), c1 from ..., we add c1 to targets sprintf(pNonPKCol->colName, "#sel_val.%p", pNonPKCol); nodesListAppend(pScan->node.pTargets, nodesCloneNode((SNode*)pNonPKCol)); } + nodesClearList(cxt.pLastCols); } nodesClearList(cxt.pOtherCols); diff --git a/source/libs/stream/inc/streamInt.h b/source/libs/stream/inc/streamInt.h index b3ed86cff8..44fb0706b8 100644 --- a/source/libs/stream/inc/streamInt.h +++ b/source/libs/stream/inc/streamInt.h @@ -69,6 +69,7 @@ typedef struct { int64_t chkpId; char* dbPrefixPath; } SStreamTaskSnap; + struct STokenBucket { int32_t numCapacity; // total capacity, available token per second int32_t numOfToken; // total available tokens @@ -148,18 +149,19 @@ int32_t streamQueueGetItemSize(const SStreamQueue* pQueue); void streamMetaRemoveDB(void* arg, char* key); -typedef enum UPLOAD_TYPE { - UPLOAD_DISABLE = -1, - UPLOAD_S3 = 0, - UPLOAD_RSYNC = 1, -} UPLOAD_TYPE; +typedef enum ECHECKPOINT_BACKUP_TYPE { + DATA_UPLOAD_DISABLE = -1, + DATA_UPLOAD_S3 = 0, + DATA_UPLOAD_RSYNC = 1, +} ECHECKPOINT_BACKUP_TYPE; -UPLOAD_TYPE getUploadType(); -int uploadCheckpoint(char* id, char* path); -int downloadCheckpoint(char* id, char* path); -int deleteCheckpoint(char* id); -int deleteCheckpointFile(char* id, char* name); -int downloadCheckpointByName(char* id, char* fname, char* dstName); +ECHECKPOINT_BACKUP_TYPE streamGetCheckpointBackupType(); + +int32_t streamTaskBackupCheckpoint(char* id, char* path); +int32_t downloadCheckpoint(char* id, char* path); +int32_t deleteCheckpoint(char* id); +int32_t deleteCheckpointFile(char* id, char* name); +int32_t downloadCheckpointByName(char* id, char* fname, char* dstName); int32_t 
streamTaskOnNormalTaskReady(SStreamTask* pTask); int32_t streamTaskOnScanhistoryTaskReady(SStreamTask* pTask); diff --git a/source/libs/stream/src/streamBackendRocksdb.c b/source/libs/stream/src/streamBackendRocksdb.c index 06093cbaf8..662d02a48f 100644 --- a/source/libs/stream/src/streamBackendRocksdb.c +++ b/source/libs/stream/src/streamBackendRocksdb.c @@ -376,10 +376,10 @@ int32_t rebuildFromRemoteChkp_s3(char* key, char* chkpPath, int64_t chkpId, char return code; } int32_t rebuildFromRemoteChkp(char* key, char* chkpPath, int64_t chkpId, char* defaultPath) { - UPLOAD_TYPE type = getUploadType(); - if (type == UPLOAD_S3) { + ECHECKPOINT_BACKUP_TYPE type = streamGetCheckpointBackupType(); + if (type == DATA_UPLOAD_S3) { return rebuildFromRemoteChkp_s3(key, chkpPath, chkpId, defaultPath); - } else if (type == UPLOAD_RSYNC) { + } else if (type == DATA_UPLOAD_RSYNC) { return rebuildFromRemoteChkp_rsync(key, chkpPath, chkpId, defaultPath); } return -1; @@ -2111,11 +2111,11 @@ int32_t taskDbGenChkpUploadData__s3(STaskDbWrapper* pDb, void* bkdChkpMgt, int64 } int32_t taskDbGenChkpUploadData(void* arg, void* mgt, int64_t chkpId, int8_t type, char** path, SArray* list) { STaskDbWrapper* pDb = arg; - UPLOAD_TYPE utype = type; + ECHECKPOINT_BACKUP_TYPE utype = type; - if (utype == UPLOAD_RSYNC) { + if (utype == DATA_UPLOAD_RSYNC) { return taskDbGenChkpUploadData__rsync(pDb, chkpId, path); - } else if (utype == UPLOAD_S3) { + } else if (utype == DATA_UPLOAD_S3) { return taskDbGenChkpUploadData__s3(pDb, mgt, chkpId, path, list); } return -1; @@ -2179,6 +2179,7 @@ int32_t copyDataAt(RocksdbCfInst* pSrc, STaskDbWrapper* pDst, int8_t i) { } _EXIT: + rocksdb_writebatch_destroy(wb); rocksdb_iter_destroy(pIter); rocksdb_readoptions_destroy(pRdOpt); taosMemoryFree(err); diff --git a/source/libs/stream/src/streamCheckStatus.c b/source/libs/stream/src/streamCheckStatus.c new file mode 100644 index 0000000000..d164d01934 --- /dev/null +++ b/source/libs/stream/src/streamCheckStatus.c @@ -0,0 +1,697 @@ +/* + * Copyright (c) 2019 TAOS Data, Inc. + * + * This program is free software: you can use, redistribute, and/or modify + * it under the terms of the GNU Affero General Public License, version 3 + * or later ("AGPL"), as published by the Free Software Foundation. + * + * This program is distributed in the hope that it will be useful, but WITHOUT + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or + * FITNESS FOR A PARTICULAR PURPOSE. + * + * You should have received a copy of the GNU Affero General Public License + * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */ + +#include "cos.h" +#include "rsync.h" +#include "streamBackendRocksdb.h" +#include "streamInt.h" + +#define CHECK_NOT_RSP_DURATION 10 * 1000 // 10 sec + +static void processDownstreamReadyRsp(SStreamTask* pTask); +static void addIntoNodeUpdateList(SStreamTask* pTask, int32_t nodeId); +static void rspMonitorFn(void* param, void* tmrId); +static int32_t streamTaskInitTaskCheckInfo(STaskCheckInfo* pInfo, STaskOutputInfo* pOutputInfo, int64_t startTs); +static int32_t streamTaskStartCheckDownstream(STaskCheckInfo* pInfo, const char* id); +static int32_t streamTaskCompleteCheckRsp(STaskCheckInfo* pInfo, bool lock, const char* id); +static int32_t streamTaskAddReqInfo(STaskCheckInfo* pInfo, int64_t reqId, int32_t taskId, int32_t vgId, const char* id); +static void doSendCheckMsg(SStreamTask* pTask, SDownstreamStatusInfo* p); +static void handleTimeoutDownstreamTasks(SStreamTask* pTask, SArray* pTimeoutList); +static void handleNotReadyDownstreamTask(SStreamTask* pTask, SArray* pNotReadyList); +static int32_t streamTaskUpdateCheckInfo(STaskCheckInfo* pInfo, int32_t taskId, int32_t status, int64_t rspTs, + int64_t reqId, int32_t* pNotReady, const char* id); +static void setCheckDownstreamReqInfo(SStreamTaskCheckReq* pReq, int64_t reqId, int32_t dstTaskId, int32_t dstNodeId); +static void getCheckRspStatus(STaskCheckInfo* pInfo, int64_t el, int32_t* numOfReady, int32_t* numOfFault, + int32_t* numOfNotRsp, SArray* pTimeoutList, SArray* pNotReadyList, const char* id); +static SDownstreamStatusInfo* findCheckRspStatus(STaskCheckInfo* pInfo, int32_t taskId); + +// check status +void streamTaskCheckDownstream(SStreamTask* pTask) { + SDataRange* pRange = &pTask->dataRange; + STimeWindow* pWindow = &pRange->window; + const char* idstr = pTask->id.idStr; + + SStreamTaskCheckReq req = { + .streamId = pTask->id.streamId, + .upstreamTaskId = pTask->id.taskId, + .upstreamNodeId = pTask->info.nodeId, + .childId = pTask->info.selfChildId, + .stage = pTask->pMeta->stage, + }; + + ASSERT(pTask->status.downstreamReady == 0); + + // serialize streamProcessScanHistoryFinishRsp + if (pTask->outputInfo.type == TASK_OUTPUT__FIXED_DISPATCH) { + streamTaskStartMonitorCheckRsp(pTask); + + STaskDispatcherFixed* pDispatch = &pTask->outputInfo.fixedDispatcher; + + setCheckDownstreamReqInfo(&req, tGenIdPI64(), pDispatch->taskId, pDispatch->nodeId); + streamTaskAddReqInfo(&pTask->taskCheckInfo, req.reqId, pDispatch->taskId, pDispatch->nodeId, idstr); + + stDebug("s-task:%s (vgId:%d) stage:%" PRId64 " check single downstream task:0x%x(vgId:%d) ver:%" PRId64 "-%" PRId64 + " window:%" PRId64 "-%" PRId64 " reqId:0x%" PRIx64, + idstr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, pRange->range.minVer, + pRange->range.maxVer, pWindow->skey, pWindow->ekey, req.reqId); + + streamSendCheckMsg(pTask, &req, pTask->outputInfo.fixedDispatcher.nodeId, &pTask->outputInfo.fixedDispatcher.epSet); + + } else if (pTask->outputInfo.type == TASK_OUTPUT__SHUFFLE_DISPATCH) { + streamTaskStartMonitorCheckRsp(pTask); + + SArray* vgInfo = pTask->outputInfo.shuffleDispatcher.dbInfo.pVgroupInfos; + + int32_t numOfVgs = taosArrayGetSize(vgInfo); + stDebug("s-task:%s check %d downstream tasks, ver:%" PRId64 "-%" PRId64 " window:%" PRId64 "-%" PRId64, idstr, + numOfVgs, pRange->range.minVer, pRange->range.maxVer, pWindow->skey, pWindow->ekey); + + for (int32_t i = 0; i < numOfVgs; i++) { + SVgroupInfo* pVgInfo = taosArrayGet(vgInfo, i); + + setCheckDownstreamReqInfo(&req, tGenIdPI64(), pVgInfo->taskId, pVgInfo->vgId); + 
streamTaskAddReqInfo(&pTask->taskCheckInfo, req.reqId, pVgInfo->taskId, pVgInfo->vgId, idstr); + + stDebug("s-task:%s (vgId:%d) stage:%" PRId64 + " check downstream task:0x%x (vgId:%d) (shuffle), idx:%d, reqId:0x%" PRIx64, + idstr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, i, req.reqId); + streamSendCheckMsg(pTask, &req, pVgInfo->vgId, &pVgInfo->epSet); + } + } else { // for sink task, set it ready directly. + stDebug("s-task:%s (vgId:%d) set downstream ready, since no downstream", idstr, pTask->info.nodeId); + streamTaskStopMonitorCheckRsp(&pTask->taskCheckInfo, idstr); + processDownstreamReadyRsp(pTask); + } +} + +int32_t streamProcessCheckRsp(SStreamTask* pTask, const SStreamTaskCheckRsp* pRsp) { + ASSERT(pTask->id.taskId == pRsp->upstreamTaskId); + + int64_t now = taosGetTimestampMs(); + const char* id = pTask->id.idStr; + STaskCheckInfo* pInfo = &pTask->taskCheckInfo; + int32_t total = streamTaskGetNumOfDownstream(pTask); + int32_t left = -1; + + if (streamTaskShouldStop(pTask)) { + stDebug("s-task:%s should stop, do not do check downstream again", id); + return TSDB_CODE_SUCCESS; + } + + if (pRsp->status == TASK_DOWNSTREAM_READY) { + int32_t code = streamTaskUpdateCheckInfo(pInfo, pRsp->downstreamTaskId, pRsp->status, now, pRsp->reqId, &left, id); + if (code != TSDB_CODE_SUCCESS) { + return TSDB_CODE_SUCCESS; + } + + if (left == 0) { + processDownstreamReadyRsp(pTask); // all downstream tasks are ready, set the complete check downstream flag + streamTaskStopMonitorCheckRsp(pInfo, id); + } else { + stDebug("s-task:%s (vgId:%d) recv check rsp from task:0x%x (vgId:%d) status:%d, total:%d not ready:%d", id, + pRsp->upstreamNodeId, pRsp->downstreamTaskId, pRsp->downstreamNodeId, pRsp->status, total, left); + } + } else { // not ready, wait for 100ms and retry + int32_t code = streamTaskUpdateCheckInfo(pInfo, pRsp->downstreamTaskId, pRsp->status, now, pRsp->reqId, &left, id); + if (code != TSDB_CODE_SUCCESS) { + return TSDB_CODE_SUCCESS; // return success in any cases. + } + + if (pRsp->status == TASK_UPSTREAM_NEW_STAGE || pRsp->status == TASK_DOWNSTREAM_NOT_LEADER) { + if (pRsp->status == TASK_UPSTREAM_NEW_STAGE) { + stError("s-task:%s vgId:%d self vnode-transfer/leader-change/restart detected, old stage:%" PRId64 + ", current stage:%" PRId64 ", not check wait for downstream task nodeUpdate, and all tasks restart", + id, pRsp->upstreamNodeId, pRsp->oldStage, pTask->pMeta->stage); + addIntoNodeUpdateList(pTask, pRsp->upstreamNodeId); + } else { + stError( + "s-task:%s downstream taskId:0x%x (vgId:%d) not leader, self dispatch epset needs to be updated, not check " + "downstream again, nodeUpdate needed", + id, pRsp->downstreamTaskId, pRsp->downstreamNodeId); + addIntoNodeUpdateList(pTask, pRsp->downstreamNodeId); + } + + int32_t startTs = pTask->execInfo.checkTs; + streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, startTs, now, false); + + // automatically set the related fill-history task to be failed. 
+ if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { + STaskId* pId = &pTask->hTaskInfo.id; + streamMetaAddTaskLaunchResult(pTask->pMeta, pId->streamId, pId->taskId, startTs, now, false); + } + } else { // TASK_DOWNSTREAM_NOT_READY, let's retry in 100ms + ASSERT(left > 0); + stDebug("s-task:%s (vgId:%d) recv check rsp from task:0x%x (vgId:%d) status:%d, total:%d not ready:%d", id, + pRsp->upstreamNodeId, pRsp->downstreamTaskId, pRsp->downstreamNodeId, pRsp->status, total, left); + } + } + + return 0; +} + +int32_t streamTaskStartMonitorCheckRsp(SStreamTask* pTask) { + STaskCheckInfo* pInfo = &pTask->taskCheckInfo; + + taosThreadMutexLock(&pInfo->checkInfoLock); + int32_t code = streamTaskStartCheckDownstream(pInfo, pTask->id.idStr); + if (code != TSDB_CODE_SUCCESS) { + taosThreadMutexUnlock(&pInfo->checkInfoLock); + return TSDB_CODE_FAILED; + } + + streamTaskInitTaskCheckInfo(pInfo, &pTask->outputInfo, taosGetTimestampMs()); + + int32_t ref = atomic_add_fetch_32(&pTask->status.timerActive, 1); + stDebug("s-task:%s start check rsp monit, ref:%d ", pTask->id.idStr, ref); + + if (pInfo->checkRspTmr == NULL) { + pInfo->checkRspTmr = taosTmrStart(rspMonitorFn, CHECK_RSP_INTERVAL, pTask, streamTimer); + } else { + taosTmrReset(rspMonitorFn, CHECK_RSP_INTERVAL, pTask, streamTimer, &pInfo->checkRspTmr); + } + + taosThreadMutexUnlock(&pInfo->checkInfoLock); + return 0; +} + +int32_t streamTaskStopMonitorCheckRsp(STaskCheckInfo* pInfo, const char* id) { + taosThreadMutexLock(&pInfo->checkInfoLock); + streamTaskCompleteCheckRsp(pInfo, false, id); + + pInfo->stopCheckProcess = 1; + taosThreadMutexUnlock(&pInfo->checkInfoLock); + + stDebug("s-task:%s set stop check rsp mon", id); + return TSDB_CODE_SUCCESS; +} + +void streamTaskCleanupCheckInfo(STaskCheckInfo* pInfo) { + ASSERT(pInfo->inCheckProcess == 0); + + pInfo->pList = taosArrayDestroy(pInfo->pList); + if (pInfo->checkRspTmr != NULL) { + /*bool ret = */ taosTmrStop(pInfo->checkRspTmr); + pInfo->checkRspTmr = NULL; + } + + taosThreadMutexDestroy(&pInfo->checkInfoLock); +} + +/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// +void processDownstreamReadyRsp(SStreamTask* pTask) { + EStreamTaskEvent event = (pTask->info.fillHistory == 0) ? TASK_EVENT_INIT : TASK_EVENT_INIT_SCANHIST; + streamTaskOnHandleEventSuccess(pTask->status.pSM, event, NULL, NULL); + + int64_t checkTs = pTask->execInfo.checkTs; + int64_t readyTs = pTask->execInfo.readyTs; + streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, checkTs, readyTs, true); + + if (pTask->status.taskStatus == TASK_STATUS__HALT) { + ASSERT(HAS_RELATED_FILLHISTORY_TASK(pTask) && (pTask->info.fillHistory == 0)); + + // halt it self for count window stream task until the related fill history task completed. + stDebug("s-task:%s level:%d initial status is %s from mnode, set it to be halt", pTask->id.idStr, + pTask->info.taskLevel, streamTaskGetStatusStr(pTask->status.taskStatus)); + streamTaskHandleEvent(pTask->status.pSM, TASK_EVENT_HALT); + } + + // start the related fill-history task, when current task is ready + // not invoke in success callback due to the deadlock. 
+ if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { + stDebug("s-task:%s try to launch related fill-history task", pTask->id.idStr); + streamLaunchFillHistoryTask(pTask); + } +} + +void addIntoNodeUpdateList(SStreamTask* pTask, int32_t nodeId) { + int32_t vgId = pTask->pMeta->vgId; + + taosThreadMutexLock(&pTask->lock); + int32_t num = taosArrayGetSize(pTask->outputInfo.pNodeEpsetUpdateList); + bool existed = false; + for (int i = 0; i < num; ++i) { + SDownstreamTaskEpset* p = taosArrayGet(pTask->outputInfo.pNodeEpsetUpdateList, i); + if (p->nodeId == nodeId) { + existed = true; + break; + } + } + + if (!existed) { + SDownstreamTaskEpset t = {.nodeId = nodeId}; + taosArrayPush(pTask->outputInfo.pNodeEpsetUpdateList, &t); + + stInfo("s-task:%s vgId:%d downstream nodeId:%d needs to be updated, total needs updated:%d", pTask->id.idStr, vgId, + t.nodeId, (num + 1)); + } + + taosThreadMutexUnlock(&pTask->lock); +} + +int32_t streamTaskInitTaskCheckInfo(STaskCheckInfo* pInfo, STaskOutputInfo* pOutputInfo, int64_t startTs) { + taosArrayClear(pInfo->pList); + + if (pOutputInfo->type == TASK_OUTPUT__FIXED_DISPATCH) { + pInfo->notReadyTasks = 1; + } else if (pOutputInfo->type == TASK_OUTPUT__SHUFFLE_DISPATCH) { + pInfo->notReadyTasks = taosArrayGetSize(pOutputInfo->shuffleDispatcher.dbInfo.pVgroupInfos); + ASSERT(pInfo->notReadyTasks == pOutputInfo->shuffleDispatcher.dbInfo.vgNum); + } + + pInfo->startTs = startTs; + return TSDB_CODE_SUCCESS; +} + +SDownstreamStatusInfo* findCheckRspStatus(STaskCheckInfo* pInfo, int32_t taskId) { + for (int32_t j = 0; j < taosArrayGetSize(pInfo->pList); ++j) { + SDownstreamStatusInfo* p = taosArrayGet(pInfo->pList, j); + if (p->taskId == taskId) { + return p; + } + } + + return NULL; +} + +int32_t streamTaskUpdateCheckInfo(STaskCheckInfo* pInfo, int32_t taskId, int32_t status, int64_t rspTs, int64_t reqId, + int32_t* pNotReady, const char* id) { + taosThreadMutexLock(&pInfo->checkInfoLock); + + SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId); + if (p != NULL) { + if (reqId != p->reqId) { + stError("s-task:%s reqId:%" PRIx64 " expected:%" PRIx64 " expired check-rsp recv from downstream task:0x%x, discarded", + id, reqId, p->reqId, taskId); + taosThreadMutexUnlock(&pInfo->checkInfoLock); + return TSDB_CODE_FAILED; + } + + // subtract one not-ready-task, since it is ready now + if ((p->status != TASK_DOWNSTREAM_READY) && (status == TASK_DOWNSTREAM_READY)) { + *pNotReady = atomic_sub_fetch_32(&pInfo->notReadyTasks, 1); + } else { + *pNotReady = pInfo->notReadyTasks; + } + + p->status = status; + p->rspTs = rspTs; + + taosThreadMutexUnlock(&pInfo->checkInfoLock); + return TSDB_CODE_SUCCESS; + } + + taosThreadMutexUnlock(&pInfo->checkInfoLock); + stError("s-task:%s unexpected check rsp msg, invalid downstream task:0x%x, reqId:%" PRIx64 " discarded", id, taskId, + reqId); + return TSDB_CODE_FAILED; +} + +int32_t streamTaskStartCheckDownstream(STaskCheckInfo* pInfo, const char* id) { + if (pInfo->inCheckProcess == 0) { + pInfo->inCheckProcess = 1; + } else { + ASSERT(pInfo->startTs > 0); + stError("s-task:%s already in check procedure, checkTs:%" PRId64 ", start monitor check rsp failed", id, + pInfo->startTs); + return TSDB_CODE_FAILED; + } + + stDebug("s-task:%s set the in-check-procedure flag", id); + return 0; +} + +int32_t streamTaskCompleteCheckRsp(STaskCheckInfo* pInfo, bool lock, const char* id) { + if (lock) { + taosThreadMutexLock(&pInfo->checkInfoLock); + } + + if (!pInfo->inCheckProcess) { +// stWarn("s-task:%s already not in-check-procedure", id); + } + + 
int64_t el = (pInfo->startTs != 0) ? (taosGetTimestampMs() - pInfo->startTs) : 0;
+  stDebug("s-task:%s clear the in-check-procedure flag, not in-check-procedure elapsed time:%" PRId64 " ms", id, el);
+
+  pInfo->startTs = 0;
+  pInfo->notReadyTasks = 0;
+  pInfo->inCheckProcess = 0;
+  pInfo->stopCheckProcess = 0;
+
+  pInfo->notReadyRetryCount = 0;
+  pInfo->timeoutRetryCount = 0;
+
+  taosArrayClear(pInfo->pList);
+
+  if (lock) {
+    taosThreadMutexUnlock(&pInfo->checkInfoLock);
+  }
+
+  return 0;
+}
+
+int32_t streamTaskAddReqInfo(STaskCheckInfo* pInfo, int64_t reqId, int32_t taskId, int32_t vgId, const char* id) {
+  SDownstreamStatusInfo info = {.taskId = taskId, .status = -1, .vgId = vgId, .reqId = reqId, .rspTs = 0};
+
+  taosThreadMutexLock(&pInfo->checkInfoLock);
+
+  SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId);
+  if (p != NULL) {
+    stDebug("s-task:%s check info to task:0x%x already sent", id, taskId);
+    taosThreadMutexUnlock(&pInfo->checkInfoLock);
+    return TSDB_CODE_SUCCESS;
+  }
+
+  taosArrayPush(pInfo->pList, &info);
+
+  taosThreadMutexUnlock(&pInfo->checkInfoLock);
+  return TSDB_CODE_SUCCESS;
+}
+
+void doSendCheckMsg(SStreamTask* pTask, SDownstreamStatusInfo* p) {
+  SStreamTaskCheckReq req = {
+      .streamId = pTask->id.streamId,
+      .upstreamTaskId = pTask->id.taskId,
+      .upstreamNodeId = pTask->info.nodeId,
+      .childId = pTask->info.selfChildId,
+      .stage = pTask->pMeta->stage,
+  };
+
+  STaskOutputInfo* pOutputInfo = &pTask->outputInfo;
+  if (pOutputInfo->type == TASK_OUTPUT__FIXED_DISPATCH) {
+    STaskDispatcherFixed* pDispatch = &pOutputInfo->fixedDispatcher;
+    setCheckDownstreamReqInfo(&req, p->reqId, pDispatch->taskId, pDispatch->nodeId);
+
+    stDebug("s-task:%s (vgId:%d) stage:%" PRId64 " re-send check downstream task:0x%x(vgId:%d) reqId:0x%" PRIx64,
+            pTask->id.idStr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, req.reqId);
+
+    streamSendCheckMsg(pTask, &req, pOutputInfo->fixedDispatcher.nodeId, &pOutputInfo->fixedDispatcher.epSet);
+  } else if (pOutputInfo->type == TASK_OUTPUT__SHUFFLE_DISPATCH) {
+    SArray* vgInfo = pOutputInfo->shuffleDispatcher.dbInfo.pVgroupInfos;
+    int32_t numOfVgs = taosArrayGetSize(vgInfo);
+
+    for (int32_t i = 0; i < numOfVgs; i++) {
+      SVgroupInfo* pVgInfo = taosArrayGet(vgInfo, i);
+
+      if (p->taskId == pVgInfo->taskId) {
+        setCheckDownstreamReqInfo(&req, p->reqId, pVgInfo->taskId, pVgInfo->vgId);
+
+        stDebug("s-task:%s (vgId:%d) stage:%" PRId64
+                " re-send check downstream task:0x%x(vgId:%d) (shuffle), idx:%d reqId:0x%" PRIx64,
+                pTask->id.idStr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, i,
+                p->reqId);
+        streamSendCheckMsg(pTask, &req, pVgInfo->vgId, &pVgInfo->epSet);
+        break;
+      }
+    }
+  } else {
+    ASSERT(0);
+  }
+}
+
+void getCheckRspStatus(STaskCheckInfo* pInfo, int64_t el, int32_t* numOfReady, int32_t* numOfFault,
+                       int32_t* numOfNotRsp, SArray* pTimeoutList, SArray* pNotReadyList, const char* id) {
+  for (int32_t i = 0; i < taosArrayGetSize(pInfo->pList); ++i) {
+    SDownstreamStatusInfo* p = taosArrayGet(pInfo->pList, i);
+    if (p->status == TASK_DOWNSTREAM_READY) {
+      (*numOfReady) += 1;
+    } else if (p->status == TASK_UPSTREAM_NEW_STAGE || p->status == TASK_DOWNSTREAM_NOT_LEADER) {
+      stDebug("s-task:%s recv status:NEW_STAGE/NOT_LEADER from downstream, task:0x%x, quit from check downstream", id,
+              p->taskId);
+      (*numOfFault) += 1;
+    } else { // TASK_DOWNSTREAM_NOT_READY
+      if (p->rspTs == 0) { // no response yet
+        ASSERT(p->status == -1);
+        if (el >= CHECK_NOT_RSP_DURATION) { // no 
response received for 10 sec.
+          taosArrayPush(pTimeoutList, &p->taskId);
+        } else { // el < CHECK_NOT_RSP_DURATION
+          (*numOfNotRsp) += 1; // do nothing and continue waiting for their rsp
+        }
+      } else {
+        taosArrayPush(pNotReadyList, &p->taskId);
+      }
+    }
+  }
+}
+
+void setCheckDownstreamReqInfo(SStreamTaskCheckReq* pReq, int64_t reqId, int32_t dstTaskId, int32_t dstNodeId) {
+  pReq->reqId = reqId;
+  pReq->downstreamTaskId = dstTaskId;
+  pReq->downstreamNodeId = dstNodeId;
+}
+
+void handleTimeoutDownstreamTasks(SStreamTask* pTask, SArray* pTimeoutList) {
+  STaskCheckInfo* pInfo = &pTask->taskCheckInfo;
+  const char* id = pTask->id.idStr;
+  int32_t vgId = pTask->pMeta->vgId;
+  int32_t numOfTimeout = taosArrayGetSize(pTimeoutList);
+
+  ASSERT(pTask->status.downstreamReady == 0);
+
+  for (int32_t i = 0; i < numOfTimeout; ++i) {
+    int32_t taskId = *(int32_t*)taosArrayGet(pTimeoutList, i);
+
+    SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId);
+    if (p != NULL) {
+      ASSERT(p->status == -1 && p->rspTs == 0);
+      doSendCheckMsg(pTask, p);
+    }
+  }
+
+  pInfo->timeoutRetryCount += 1;
+
+  // timed out for more than 100 sec, add the downstream node into the node update list
+  if (pInfo->timeoutRetryCount > 10) {
+    pInfo->timeoutRetryCount = 0;
+
+    for (int32_t i = 0; i < numOfTimeout; ++i) {
+      int32_t taskId = *(int32_t*)taosArrayGet(pTimeoutList, i);
+      SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId);
+      if (p != NULL) {
+        addIntoNodeUpdateList(pTask, p->vgId);
+        stDebug("s-task:%s vgId:%d downstream task:0x%x (vgId:%d) timeout more than 100sec, add into nodeUpdate list",
+                id, vgId, p->taskId, p->vgId);
+      }
+    }
+
+    stDebug("s-task:%s vgId:%d %d downstream task(s) all added into nodeUpdate list", id, vgId, numOfTimeout);
+  } else {
+    stDebug("s-task:%s vgId:%d %d downstream task(s) timeout, send check msg again, retry:%d start time:%" PRId64, id,
+            vgId, numOfTimeout, pInfo->timeoutRetryCount, pInfo->startTs);
+  }
+}
+
+void handleNotReadyDownstreamTask(SStreamTask* pTask, SArray* pNotReadyList) {
+  STaskCheckInfo* pInfo = &pTask->taskCheckInfo;
+  const char* id = pTask->id.idStr;
+  int32_t vgId = pTask->pMeta->vgId;
+  int32_t numOfNotReady = taosArrayGetSize(pNotReadyList);
+
+  ASSERT(pTask->status.downstreamReady == 0);
+
+  // reset the info, and send the check msg to the failed downstream again
+  for (int32_t i = 0; i < numOfNotReady; ++i) {
+    int32_t taskId = *(int32_t*)taosArrayGet(pNotReadyList, i);
+
+    SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId);
+    if (p != NULL) {
+      p->rspTs = 0;
+      p->status = -1;
+      doSendCheckMsg(pTask, p);
+    }
+  }
+
+  pInfo->notReadyRetryCount += 1;
+  stDebug("s-task:%s vgId:%d %d downstream task(s) not ready, send check msg again, retry:%d start time:%" PRId64, id,
+          vgId, numOfNotReady, pInfo->notReadyRetryCount, pInfo->startTs);
+}
+
+void rspMonitorFn(void* param, void* tmrId) {
+  SStreamTask* pTask = param;
+  SStreamTaskState* pStat = streamTaskGetStatus(pTask);
+  STaskCheckInfo* pInfo = &pTask->taskCheckInfo;
+  int32_t vgId = pTask->pMeta->vgId;
+  int64_t now = taosGetTimestampMs();
+  int64_t el = now - pInfo->startTs;
+  ETaskStatus state = pStat->state;
+  const char* id = pTask->id.idStr;
+  int32_t numOfReady = 0;
+  int32_t numOfFault = 0;
+  int32_t numOfNotRsp = 0;
+  int32_t numOfNotReady = 0;
+  int32_t numOfTimeout = 0;
+  int32_t total = taosArrayGetSize(pInfo->pList);
+
+  stDebug("s-task:%s start to do check-downstream-rsp check in tmr", id);
+
+  if (state == TASK_STATUS__STOP) {
+    int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1);
+    stDebug("s-task:%s 
status:%s vgId:%d quit from monitor check-rsp tmr, ref:%d", id, pStat->name, vgId, ref); + + streamTaskCompleteCheckRsp(pInfo, true, id); + + streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, pInfo->startTs, now, false); + if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { + STaskId* pHId = &pTask->hTaskInfo.id; + streamMetaAddTaskLaunchResult(pTask->pMeta, pHId->streamId, pHId->taskId, pInfo->startTs, now, false); + } + return; + } + + if (state == TASK_STATUS__DROPPING || state == TASK_STATUS__READY || state == TASK_STATUS__PAUSE) { + int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); + stDebug("s-task:%s status:%s vgId:%d quit from monitor check-rsp tmr, ref:%d", id, pStat->name, vgId, ref); + + streamTaskCompleteCheckRsp(pInfo, true, id); + return; + } + + taosThreadMutexLock(&pInfo->checkInfoLock); + if (pInfo->notReadyTasks == 0) { + int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); + stDebug("s-task:%s status:%s vgId:%d all downstream ready, quit from monitor rsp tmr, ref:%d", id, pStat->name, + vgId, ref); + + streamTaskCompleteCheckRsp(pInfo, false, id); + taosThreadMutexUnlock(&pInfo->checkInfoLock); + return; + } + + SArray* pNotReadyList = taosArrayInit(4, sizeof(int64_t)); + SArray* pTimeoutList = taosArrayInit(4, sizeof(int64_t)); + + if (pStat->state == TASK_STATUS__UNINIT) { + getCheckRspStatus(pInfo, el, &numOfReady, &numOfFault, &numOfNotRsp, pTimeoutList, pNotReadyList, id); + } else { // unexpected status + stError("s-task:%s unexpected task status:%s during waiting for check rsp", id, pStat->name); + } + + numOfNotReady = (int32_t)taosArrayGetSize(pNotReadyList); + numOfTimeout = (int32_t)taosArrayGetSize(pTimeoutList); + + // fault tasks detected, not try anymore + ASSERT((numOfReady + numOfFault + numOfNotReady + numOfTimeout + numOfNotRsp) == total); + if (numOfFault > 0) { + int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); + stDebug( + "s-task:%s status:%s vgId:%d all rsp. quit from monitor rsp tmr, since vnode-transfer/leader-change/restart " + "detected, notRsp:%d, notReady:%d, fault:%d, timeout:%d, ready:%d ref:%d", + id, pStat->name, vgId, numOfNotRsp, numOfNotReady, numOfFault, numOfTimeout, numOfReady, ref); + + streamTaskCompleteCheckRsp(pInfo, false, id); + taosThreadMutexUnlock(&pInfo->checkInfoLock); + + taosArrayDestroy(pNotReadyList); + taosArrayDestroy(pTimeoutList); + return; + } + + // checking of downstream tasks has been stopped by other threads + if (pInfo->stopCheckProcess == 1) { + int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); + stDebug( + "s-task:%s status:%s vgId:%d stopped by other threads to check downstream process, notRsp:%d, notReady:%d, " + "fault:%d, timeout:%d, ready:%d ref:%d", + id, pStat->name, vgId, numOfNotRsp, numOfNotReady, numOfFault, numOfTimeout, numOfReady, ref); + + streamTaskCompleteCheckRsp(pInfo, false, id); + taosThreadMutexUnlock(&pInfo->checkInfoLock); + + // add the not-ready tasks into the final task status result buf, along with related fill-history task if exists. 
+ streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, pInfo->startTs, now, false); + if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { + STaskId* pHId = &pTask->hTaskInfo.id; + streamMetaAddTaskLaunchResult(pTask->pMeta, pHId->streamId, pHId->taskId, pInfo->startTs, now, false); + } + + taosArrayDestroy(pNotReadyList); + taosArrayDestroy(pTimeoutList); + return; + } + + if (numOfNotReady > 0) { // check to make sure not in recheck timer + handleNotReadyDownstreamTask(pTask, pNotReadyList); + } + + if (numOfTimeout > 0) { + handleTimeoutDownstreamTasks(pTask, pTimeoutList); + } + + taosTmrReset(rspMonitorFn, CHECK_RSP_INTERVAL, pTask, streamTimer, &pInfo->checkRspTmr); + taosThreadMutexUnlock(&pInfo->checkInfoLock); + + stDebug("s-task:%s continue checking rsp in 300ms, total:%d, notRsp:%d, notReady:%d, fault:%d, timeout:%d, ready:%d", + id, total, numOfNotRsp, numOfNotReady, numOfFault, numOfTimeout, numOfReady); + + taosArrayDestroy(pNotReadyList); + taosArrayDestroy(pTimeoutList); +} + +int32_t tEncodeStreamTaskCheckReq(SEncoder* pEncoder, const SStreamTaskCheckReq* pReq) { + if (tStartEncode(pEncoder) < 0) return -1; + if (tEncodeI64(pEncoder, pReq->reqId) < 0) return -1; + if (tEncodeI64(pEncoder, pReq->streamId) < 0) return -1; + if (tEncodeI32(pEncoder, pReq->upstreamNodeId) < 0) return -1; + if (tEncodeI32(pEncoder, pReq->upstreamTaskId) < 0) return -1; + if (tEncodeI32(pEncoder, pReq->downstreamNodeId) < 0) return -1; + if (tEncodeI32(pEncoder, pReq->downstreamTaskId) < 0) return -1; + if (tEncodeI32(pEncoder, pReq->childId) < 0) return -1; + if (tEncodeI64(pEncoder, pReq->stage) < 0) return -1; + tEndEncode(pEncoder); + return pEncoder->pos; +} + +int32_t tDecodeStreamTaskCheckReq(SDecoder* pDecoder, SStreamTaskCheckReq* pReq) { + if (tStartDecode(pDecoder) < 0) return -1; + if (tDecodeI64(pDecoder, &pReq->reqId) < 0) return -1; + if (tDecodeI64(pDecoder, &pReq->streamId) < 0) return -1; + if (tDecodeI32(pDecoder, &pReq->upstreamNodeId) < 0) return -1; + if (tDecodeI32(pDecoder, &pReq->upstreamTaskId) < 0) return -1; + if (tDecodeI32(pDecoder, &pReq->downstreamNodeId) < 0) return -1; + if (tDecodeI32(pDecoder, &pReq->downstreamTaskId) < 0) return -1; + if (tDecodeI32(pDecoder, &pReq->childId) < 0) return -1; + if (tDecodeI64(pDecoder, &pReq->stage) < 0) return -1; + tEndDecode(pDecoder); + return 0; +} + +int32_t tEncodeStreamTaskCheckRsp(SEncoder* pEncoder, const SStreamTaskCheckRsp* pRsp) { + if (tStartEncode(pEncoder) < 0) return -1; + if (tEncodeI64(pEncoder, pRsp->reqId) < 0) return -1; + if (tEncodeI64(pEncoder, pRsp->streamId) < 0) return -1; + if (tEncodeI32(pEncoder, pRsp->upstreamNodeId) < 0) return -1; + if (tEncodeI32(pEncoder, pRsp->upstreamTaskId) < 0) return -1; + if (tEncodeI32(pEncoder, pRsp->downstreamNodeId) < 0) return -1; + if (tEncodeI32(pEncoder, pRsp->downstreamTaskId) < 0) return -1; + if (tEncodeI32(pEncoder, pRsp->childId) < 0) return -1; + if (tEncodeI64(pEncoder, pRsp->oldStage) < 0) return -1; + if (tEncodeI8(pEncoder, pRsp->status) < 0) return -1; + tEndEncode(pEncoder); + return pEncoder->pos; +} + +int32_t tDecodeStreamTaskCheckRsp(SDecoder* pDecoder, SStreamTaskCheckRsp* pRsp) { + if (tStartDecode(pDecoder) < 0) return -1; + if (tDecodeI64(pDecoder, &pRsp->reqId) < 0) return -1; + if (tDecodeI64(pDecoder, &pRsp->streamId) < 0) return -1; + if (tDecodeI32(pDecoder, &pRsp->upstreamNodeId) < 0) return -1; + if (tDecodeI32(pDecoder, &pRsp->upstreamTaskId) < 0) return -1; + if (tDecodeI32(pDecoder, &pRsp->downstreamNodeId) < 0) 
return -1; + if (tDecodeI32(pDecoder, &pRsp->downstreamTaskId) < 0) return -1; + if (tDecodeI32(pDecoder, &pRsp->childId) < 0) return -1; + if (tDecodeI64(pDecoder, &pRsp->oldStage) < 0) return -1; + if (tDecodeI8(pDecoder, &pRsp->status) < 0) return -1; + tEndDecode(pDecoder); + return 0; +} diff --git a/source/libs/stream/src/streamCheckpoint.c b/source/libs/stream/src/streamCheckpoint.c index 36886329ac..e6d7c2fde8 100644 --- a/source/libs/stream/src/streamCheckpoint.c +++ b/source/libs/stream/src/streamCheckpoint.c @@ -19,7 +19,7 @@ #include "streamInt.h" typedef struct { - UPLOAD_TYPE type; + ECHECKPOINT_BACKUP_TYPE type; char* taskId; int64_t chkpId; @@ -94,6 +94,24 @@ int32_t tDecodeStreamCheckpointReadyMsg(SDecoder* pDecoder, SStreamCheckpointRea return 0; } +int32_t tEncodeStreamTaskCheckpointReq(SEncoder* pEncoder, const SStreamTaskCheckpointReq* pReq) { + if (tStartEncode(pEncoder) < 0) return -1; + if (tEncodeI64(pEncoder, pReq->streamId) < 0) return -1; + if (tEncodeI32(pEncoder, pReq->taskId) < 0) return -1; + if (tEncodeI32(pEncoder, pReq->nodeId) < 0) return -1; + tEndEncode(pEncoder); + return 0; +} + +int32_t tDecodeStreamTaskCheckpointReq(SDecoder* pDecoder, SStreamTaskCheckpointReq* pReq) { + if (tStartDecode(pDecoder) < 0) return -1; + if (tDecodeI64(pDecoder, &pReq->streamId) < 0) return -1; + if (tDecodeI32(pDecoder, &pReq->taskId) < 0) return -1; + if (tDecodeI32(pDecoder, &pReq->nodeId) < 0) return -1; + tEndDecode(pDecoder); + return 0; +} + static int32_t streamAlignCheckpoint(SStreamTask* pTask) { int32_t num = taosArrayGetSize(pTask->upstreamInfo.pList); int64_t old = atomic_val_compare_exchange_32(&pTask->chkInfo.downstreamAlignNum, 0, num); @@ -398,7 +416,7 @@ int32_t getChkpMeta(char* id, char* path, SArray* list) { return code; } -int32_t doUploadChkp(void* param) { +int32_t uploadCheckpointData(void* param) { SAsyncUploadArg* arg = param; char* path = NULL; int32_t code = 0; @@ -408,13 +426,13 @@ int32_t doUploadChkp(void* param) { (int8_t)(arg->type), &path, toDelFiles)) != 0) { stError("s-task:%s failed to gen upload checkpoint:%" PRId64 "", arg->pTask->id.idStr, arg->chkpId); } - if (arg->type == UPLOAD_S3) { + if (arg->type == DATA_UPLOAD_S3) { if (code == 0 && (code = getChkpMeta(arg->taskId, path, toDelFiles)) != 0) { stError("s-task:%s failed to get checkpoint:%" PRId64 " meta", arg->pTask->id.idStr, arg->chkpId); } } - if (code == 0 && (code = uploadCheckpoint(arg->taskId, path)) != 0) { + if (code == 0 && (code = streamTaskBackupCheckpoint(arg->taskId, path)) != 0) { stError("s-task:%s failed to upload checkpoint:%" PRId64, arg->pTask->id.idStr, arg->chkpId); } @@ -441,8 +459,8 @@ int32_t doUploadChkp(void* param) { int32_t streamTaskUploadChkp(SStreamTask* pTask, int64_t chkpId, char* taskId) { // async upload - UPLOAD_TYPE type = getUploadType(); - if (type == UPLOAD_DISABLE) { + ECHECKPOINT_BACKUP_TYPE type = streamGetCheckpointBackupType(); + if (type == DATA_UPLOAD_DISABLE) { return 0; } @@ -456,7 +474,7 @@ int32_t streamTaskUploadChkp(SStreamTask* pTask, int64_t chkpId, char* taskId) { arg->chkpId = chkpId; arg->pTask = pTask; - return streamMetaAsyncExec(pTask->pMeta, doUploadChkp, arg, NULL); + return streamMetaAsyncExec(pTask->pMeta, uploadCheckpointData, arg, NULL); } int32_t streamTaskBuildCheckpoint(SStreamTask* pTask) { @@ -540,7 +558,7 @@ int32_t streamTaskBuildCheckpoint(SStreamTask* pTask) { return code; } -static int uploadCheckpointToS3(char* id, char* path) { +static int32_t uploadCheckpointToS3(char* id, char* path) { TdDirPtr 
pDir = taosOpenDir(path); if (pDir == NULL) return -1; @@ -572,8 +590,8 @@ static int uploadCheckpointToS3(char* id, char* path) { return 0; } -static int downloadCheckpointByNameS3(char* id, char* fname, char* dstName) { - int code = 0; +static int32_t downloadCheckpointByNameS3(char* id, char* fname, char* dstName) { + int32_t code = 0; char* buf = taosMemoryCalloc(1, strlen(id) + strlen(dstName) + 4); sprintf(buf, "%s/%s", id, fname); if (s3GetObjectToFile(buf, dstName) != 0) { @@ -583,19 +601,19 @@ static int downloadCheckpointByNameS3(char* id, char* fname, char* dstName) { return code; } -UPLOAD_TYPE getUploadType() { +ECHECKPOINT_BACKUP_TYPE streamGetCheckpointBackupType() { if (strlen(tsSnodeAddress) != 0) { - return UPLOAD_RSYNC; + return DATA_UPLOAD_RSYNC; } else if (tsS3StreamEnabled) { - return UPLOAD_S3; + return DATA_UPLOAD_S3; } else { - return UPLOAD_DISABLE; + return DATA_UPLOAD_DISABLE; } } -int uploadCheckpoint(char* id, char* path) { +int32_t streamTaskBackupCheckpoint(char* id, char* path) { if (id == NULL || path == NULL || strlen(id) == 0 || strlen(path) == 0 || strlen(path) >= PATH_MAX) { - stError("uploadCheckpoint parameters invalid"); + stError("streamTaskBackupCheckpoint parameters invalid"); return -1; } if (strlen(tsSnodeAddress) != 0) { @@ -607,7 +625,7 @@ int uploadCheckpoint(char* id, char* path) { } // fileName: CURRENT -int downloadCheckpointByName(char* id, char* fname, char* dstName) { +int32_t downloadCheckpointByName(char* id, char* fname, char* dstName) { if (id == NULL || fname == NULL || strlen(id) == 0 || strlen(fname) == 0 || strlen(fname) >= PATH_MAX) { stError("uploadCheckpointByName parameters invalid"); return -1; @@ -620,7 +638,7 @@ int downloadCheckpointByName(char* id, char* fname, char* dstName) { return 0; } -int downloadCheckpoint(char* id, char* path) { +int32_t downloadCheckpoint(char* id, char* path) { if (id == NULL || path == NULL || strlen(id) == 0 || strlen(path) == 0 || strlen(path) >= PATH_MAX) { stError("downloadCheckpoint parameters invalid"); return -1; @@ -633,7 +651,7 @@ int downloadCheckpoint(char* id, char* path) { return 0; } -int deleteCheckpoint(char* id) { +int32_t deleteCheckpoint(char* id) { if (id == NULL || strlen(id) == 0) { stError("deleteCheckpoint parameters invalid"); return -1; @@ -646,7 +664,7 @@ int deleteCheckpoint(char* id) { return 0; } -int deleteCheckpointFile(char* id, char* name) { +int32_t deleteCheckpointFile(char* id, char* name) { char object[128] = {0}; snprintf(object, sizeof(object), "%s/%s", id, name); char* tmp = object; diff --git a/source/libs/stream/src/streamMeta.c b/source/libs/stream/src/streamMeta.c index f7f790fbe7..a464594233 100644 --- a/source/libs/stream/src/streamMeta.c +++ b/source/libs/stream/src/streamMeta.c @@ -1073,9 +1073,9 @@ static void addUpdateNodeIntoHbMsg(SStreamTask* pTask, SStreamHbMsg* pMsg) { taosThreadMutexLock(&pTask->lock); - int32_t num = taosArrayGetSize(pTask->outputInfo.pDownstreamUpdateList); + int32_t num = taosArrayGetSize(pTask->outputInfo.pNodeEpsetUpdateList); for (int j = 0; j < num; ++j) { - SDownstreamTaskEpset* pTaskEpset = taosArrayGet(pTask->outputInfo.pDownstreamUpdateList, j); + SDownstreamTaskEpset* pTaskEpset = taosArrayGet(pTask->outputInfo.pNodeEpsetUpdateList, j); bool exist = existInHbMsg(pMsg, pTaskEpset); if (!exist) { @@ -1085,7 +1085,7 @@ static void addUpdateNodeIntoHbMsg(SStreamTask* pTask, SStreamHbMsg* pMsg) { } } - taosArrayClear(pTask->outputInfo.pDownstreamUpdateList); + 
taosArrayClear(pTask->outputInfo.pNodeEpsetUpdateList); taosThreadMutexUnlock(&pTask->lock); } diff --git a/source/libs/stream/src/streamQueue.c b/source/libs/stream/src/streamQueue.c index 9f79501471..9e872a1aff 100644 --- a/source/libs/stream/src/streamQueue.c +++ b/source/libs/stream/src/streamQueue.c @@ -17,7 +17,6 @@ #define MAX_STREAM_EXEC_BATCH_NUM 32 #define MAX_SMOOTH_BURST_RATIO 5 // 5 sec -#define WAIT_FOR_DURATION 10 // todo refactor: // read data from input queue diff --git a/source/libs/stream/src/streamStart.c b/source/libs/stream/src/streamStartHistory.c similarity index 67% rename from source/libs/stream/src/streamStart.c rename to source/libs/stream/src/streamStartHistory.c index 0b91359f48..04a99feab0 100644 --- a/source/libs/stream/src/streamStart.c +++ b/source/libs/stream/src/streamStartHistory.c @@ -29,19 +29,15 @@ typedef struct SLaunchHTaskInfo { STaskId hTaskId; } SLaunchHTaskInfo; -typedef struct STaskRecheckInfo { - SStreamTask* pTask; - SStreamTaskCheckReq req; - void* checkTimer; -} STaskRecheckInfo; - static int32_t streamSetParamForScanHistory(SStreamTask* pTask); static void streamTaskSetRangeStreamCalc(SStreamTask* pTask); static int32_t initScanHistoryReq(SStreamTask* pTask, SStreamScanHistoryReq* pReq, int8_t igUntreated); -static SLaunchHTaskInfo* createHTaskLaunchInfo(SStreamMeta* pMeta, int64_t streamId, int32_t taskId, int64_t hStreamId, +static SLaunchHTaskInfo* createHTaskLaunchInfo(SStreamMeta* pMeta, STaskId* pTaskId, int64_t hStreamId, int32_t hTaskId); static void tryLaunchHistoryTask(void* param, void* tmrId); -static void doProcessDownstreamReadyRsp(SStreamTask* pTask); +static void doExecScanhistoryInFuture(void* param, void* tmrId); +static int32_t doStartScanHistoryTask(SStreamTask* pTask); +static int32_t streamTaskStartScanHistory(SStreamTask* pTask); int32_t streamTaskSetReady(SStreamTask* pTask) { int32_t numOfDowns = streamTaskGetNumOfDownstream(pTask); @@ -83,7 +79,7 @@ int32_t streamStartScanHistoryAsync(SStreamTask* pTask, int8_t igUntreated) { return 0; } -static void doExecScanhistoryInFuture(void* param, void* tmrId) { +void doExecScanhistoryInFuture(void* param, void* tmrId) { SStreamTask* pTask = param; pTask->schedHistoryInfo.numOfTicks -= 1; @@ -139,7 +135,7 @@ int32_t streamExecScanHistoryInFuture(SStreamTask* pTask, int32_t idleDuration) return TSDB_CODE_SUCCESS; } -static int32_t doStartScanHistoryTask(SStreamTask* pTask) { +int32_t doStartScanHistoryTask(SStreamTask* pTask) { SVersionRange* pRange = &pTask->dataRange.range; if (pTask->info.fillHistory) { streamSetParamForScanHistory(pTask); @@ -169,67 +165,6 @@ int32_t streamTaskStartScanHistory(SStreamTask* pTask) { return 0; } -// check status -void streamTaskCheckDownstream(SStreamTask* pTask) { - SDataRange* pRange = &pTask->dataRange; - STimeWindow* pWindow = &pRange->window; - - SStreamTaskCheckReq req = { - .streamId = pTask->id.streamId, - .upstreamTaskId = pTask->id.taskId, - .upstreamNodeId = pTask->info.nodeId, - .childId = pTask->info.selfChildId, - .stage = pTask->pMeta->stage, - }; - - ASSERT(pTask->status.downstreamReady == 0); - - // serialize streamProcessScanHistoryFinishRsp - if (pTask->outputInfo.type == TASK_OUTPUT__FIXED_DISPATCH) { - streamTaskStartMonitorCheckRsp(pTask); - - req.reqId = tGenIdPI64(); - req.downstreamNodeId = pTask->outputInfo.fixedDispatcher.nodeId; - req.downstreamTaskId = pTask->outputInfo.fixedDispatcher.taskId; - - streamTaskAddReqInfo(&pTask->taskCheckInfo, req.reqId, req.downstreamTaskId, pTask->id.idStr); - - 
stDebug("s-task:%s (vgId:%d) stage:%" PRId64 " check single downstream task:0x%x(vgId:%d) ver:%" PRId64 "-%" PRId64 - " window:%" PRId64 "-%" PRId64 " reqId:0x%" PRIx64, - pTask->id.idStr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, - pRange->range.minVer, pRange->range.maxVer, pWindow->skey, pWindow->ekey, req.reqId); - - streamSendCheckMsg(pTask, &req, pTask->outputInfo.fixedDispatcher.nodeId, &pTask->outputInfo.fixedDispatcher.epSet); - - } else if (pTask->outputInfo.type == TASK_OUTPUT__SHUFFLE_DISPATCH) { - streamTaskStartMonitorCheckRsp(pTask); - - SArray* vgInfo = pTask->outputInfo.shuffleDispatcher.dbInfo.pVgroupInfos; - - int32_t numOfVgs = taosArrayGetSize(vgInfo); - stDebug("s-task:%s check %d downstream tasks, ver:%" PRId64 "-%" PRId64 " window:%" PRId64 "-%" PRId64, - pTask->id.idStr, numOfVgs, pRange->range.minVer, pRange->range.maxVer, pWindow->skey, pWindow->ekey); - - for (int32_t i = 0; i < numOfVgs; i++) { - SVgroupInfo* pVgInfo = taosArrayGet(vgInfo, i); - req.reqId = tGenIdPI64(); - req.downstreamNodeId = pVgInfo->vgId; - req.downstreamTaskId = pVgInfo->taskId; - - streamTaskAddReqInfo(&pTask->taskCheckInfo, req.reqId, req.downstreamTaskId, pTask->id.idStr); - - stDebug("s-task:%s (vgId:%d) stage:%" PRId64 - " check downstream task:0x%x (vgId:%d) (shuffle), idx:%d, reqId:0x%" PRIx64, - pTask->id.idStr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, i, req.reqId); - streamSendCheckMsg(pTask, &req, pVgInfo->vgId, &pVgInfo->epSet); - } - } else { // for sink task, set it ready directly. - stDebug("s-task:%s (vgId:%d) set downstream ready, since no downstream", pTask->id.idStr, pTask->info.nodeId); - streamTaskStopMonitorCheckRsp(&pTask->taskCheckInfo, pTask->id.idStr); - doProcessDownstreamReadyRsp(pTask); - } -} - int32_t streamTaskCheckStatus(SStreamTask* pTask, int32_t upstreamTaskId, int32_t vgId, int64_t stage, int64_t* oldStage) { SStreamChildEpInfo* pInfo = streamTaskGetUpstreamTaskEpInfo(pTask, upstreamTaskId); @@ -331,122 +266,6 @@ int32_t streamTaskOnScanhistoryTaskReady(SStreamTask* pTask) { return TSDB_CODE_SUCCESS; } -void doProcessDownstreamReadyRsp(SStreamTask* pTask) { - EStreamTaskEvent event = (pTask->info.fillHistory == 0) ? TASK_EVENT_INIT : TASK_EVENT_INIT_SCANHIST; - streamTaskOnHandleEventSuccess(pTask->status.pSM, event, NULL, NULL); - - int64_t checkTs = pTask->execInfo.checkTs; - int64_t readyTs = pTask->execInfo.readyTs; - streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, checkTs, readyTs, true); - - if (pTask->status.taskStatus == TASK_STATUS__HALT) { - ASSERT(HAS_RELATED_FILLHISTORY_TASK(pTask) && (pTask->info.fillHistory == 0)); - - // halt it self for count window stream task until the related fill history task completed. - stDebug("s-task:%s level:%d initial status is %s from mnode, set it to be halt", pTask->id.idStr, - pTask->info.taskLevel, streamTaskGetStatusStr(pTask->status.taskStatus)); - streamTaskHandleEvent(pTask->status.pSM, TASK_EVENT_HALT); - } - - // start the related fill-history task, when current task is ready - // not invoke in success callback due to the deadlock. 
- if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { - stDebug("s-task:%s try to launch related fill-history task", pTask->id.idStr); - streamLaunchFillHistoryTask(pTask); - } -} - -static void addIntoNodeUpdateList(SStreamTask* pTask, int32_t nodeId) { - int32_t vgId = pTask->pMeta->vgId; - - taosThreadMutexLock(&pTask->lock); - int32_t num = taosArrayGetSize(pTask->outputInfo.pDownstreamUpdateList); - bool existed = false; - for (int i = 0; i < num; ++i) { - SDownstreamTaskEpset* p = taosArrayGet(pTask->outputInfo.pDownstreamUpdateList, i); - if (p->nodeId == nodeId) { - existed = true; - break; - } - } - - if (!existed) { - SDownstreamTaskEpset t = {.nodeId = nodeId}; - taosArrayPush(pTask->outputInfo.pDownstreamUpdateList, &t); - - stInfo("s-task:%s vgId:%d downstream nodeId:%d needs to be updated, total needs updated:%d", pTask->id.idStr, vgId, - t.nodeId, (int32_t)taosArrayGetSize(pTask->outputInfo.pDownstreamUpdateList)); - } - - taosThreadMutexUnlock(&pTask->lock); -} - -int32_t streamProcessCheckRsp(SStreamTask* pTask, const SStreamTaskCheckRsp* pRsp) { - ASSERT(pTask->id.taskId == pRsp->upstreamTaskId); - - int64_t now = taosGetTimestampMs(); - const char* id = pTask->id.idStr; - STaskCheckInfo* pInfo = &pTask->taskCheckInfo; - int32_t total = streamTaskGetNumOfDownstream(pTask); - int32_t left = -1; - - if (streamTaskShouldStop(pTask)) { - stDebug("s-task:%s should stop, do not do check downstream again", id); - return TSDB_CODE_SUCCESS; - } - - if (pRsp->status == TASK_DOWNSTREAM_READY) { - int32_t code = streamTaskUpdateCheckInfo(pInfo, pRsp->downstreamTaskId, pRsp->status, now, pRsp->reqId, &left, id); - if (code != TSDB_CODE_SUCCESS) { - return TSDB_CODE_SUCCESS; - } - - if (left == 0) { - doProcessDownstreamReadyRsp(pTask); // all downstream tasks are ready, set the complete check downstream flag - streamTaskStopMonitorCheckRsp(pInfo, id); - } else { - stDebug("s-task:%s (vgId:%d) recv check rsp from task:0x%x (vgId:%d) status:%d, total:%d not ready:%d", id, - pRsp->upstreamNodeId, pRsp->downstreamTaskId, pRsp->downstreamNodeId, pRsp->status, total, left); - } - } else { // not ready, wait for 100ms and retry - int32_t code = streamTaskUpdateCheckInfo(pInfo, pRsp->downstreamTaskId, pRsp->status, now, pRsp->reqId, &left, id); - if (code != TSDB_CODE_SUCCESS) { - return TSDB_CODE_SUCCESS; // return success in any cases. - } - - if (pRsp->status == TASK_UPSTREAM_NEW_STAGE || pRsp->status == TASK_DOWNSTREAM_NOT_LEADER) { - if (pRsp->status == TASK_UPSTREAM_NEW_STAGE) { - stError("s-task:%s vgId:%d self vnode-transfer/leader-change/restart detected, old stage:%" PRId64 - ", current stage:%" PRId64 ", not check wait for downstream task nodeUpdate, and all tasks restart", - id, pRsp->upstreamNodeId, pRsp->oldStage, pTask->pMeta->stage); - addIntoNodeUpdateList(pTask, pRsp->upstreamNodeId); - } else { - stError( - "s-task:%s downstream taskId:0x%x (vgId:%d) not leader, self dispatch epset needs to be updated, not check " - "downstream again, nodeUpdate needed", - id, pRsp->downstreamTaskId, pRsp->downstreamNodeId); - addIntoNodeUpdateList(pTask, pRsp->downstreamNodeId); - } - - int32_t startTs = pTask->execInfo.checkTs; - streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, startTs, now, false); - - // automatically set the related fill-history task to be failed. 
- if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { - STaskId* pId = &pTask->hTaskInfo.id; - streamMetaAddTaskLaunchResult(pTask->pMeta, pId->streamId, pId->taskId, startTs, now, false); - } - - } else { // TASK_DOWNSTREAM_NOT_READY, let's retry in 100ms - ASSERT(left > 0); - stDebug("s-task:%s (vgId:%d) recv check rsp from task:0x%x (vgId:%d) status:%d, total:%d not ready:%d", id, - pRsp->upstreamNodeId, pRsp->downstreamTaskId, pRsp->downstreamNodeId, pRsp->status, total, left); - } - } - - return 0; -} - int32_t streamSendCheckRsp(const SStreamMeta* pMeta, const SStreamTaskCheckReq* pReq, SStreamTaskCheckRsp* pRsp, SRpcHandleInfo* pRpcInfo, int32_t taskId) { SEncoder encoder; @@ -663,16 +482,15 @@ static void tryLaunchHistoryTask(void* param, void* tmrId) { taosMemoryFree(pInfo); } -SLaunchHTaskInfo* createHTaskLaunchInfo(SStreamMeta* pMeta, int64_t streamId, int32_t taskId, int64_t hStreamId, - int32_t hTaskId) { +SLaunchHTaskInfo* createHTaskLaunchInfo(SStreamMeta* pMeta, STaskId* pTaskId, int64_t hStreamId, int32_t hTaskId) { SLaunchHTaskInfo* pInfo = taosMemoryCalloc(1, sizeof(SLaunchHTaskInfo)); if (pInfo == NULL) { terrno = TSDB_CODE_OUT_OF_MEMORY; return NULL; } - pInfo->id.streamId = streamId; - pInfo->id.taskId = taskId; + pInfo->id.streamId = pTaskId->streamId; + pInfo->id.taskId = pTaskId->taskId; pInfo->hTaskId.streamId = hStreamId; pInfo->hTaskId.taskId = hTaskId; @@ -691,7 +509,8 @@ static int32_t launchNotBuiltFillHistoryTask(SStreamTask* pTask) { stWarn("s-task:%s vgId:%d failed to launch history task:0x%x, since not built yet", idStr, pMeta->vgId, hTaskId); - SLaunchHTaskInfo* pInfo = createHTaskLaunchInfo(pMeta, pTask->id.streamId, pTask->id.taskId, hStreamId, hTaskId); + STaskId id = streamTaskGetTaskId(pTask); + SLaunchHTaskInfo* pInfo = createHTaskLaunchInfo(pMeta, &id, hStreamId, hTaskId); if (pInfo == NULL) { stError("s-task:%s failed to launch related fill-history task, since Out Of Memory", idStr); streamMetaAddTaskLaunchResult(pMeta, hStreamId, hTaskId, pExecInfo->checkTs, pExecInfo->readyTs, false); @@ -802,82 +621,6 @@ bool streamHistoryTaskSetVerRangeStep2(SStreamTask* pTask, int64_t nextProcessVe } } -int32_t tEncodeStreamTaskCheckReq(SEncoder* pEncoder, const SStreamTaskCheckReq* pReq) { - if (tStartEncode(pEncoder) < 0) return -1; - if (tEncodeI64(pEncoder, pReq->reqId) < 0) return -1; - if (tEncodeI64(pEncoder, pReq->streamId) < 0) return -1; - if (tEncodeI32(pEncoder, pReq->upstreamNodeId) < 0) return -1; - if (tEncodeI32(pEncoder, pReq->upstreamTaskId) < 0) return -1; - if (tEncodeI32(pEncoder, pReq->downstreamNodeId) < 0) return -1; - if (tEncodeI32(pEncoder, pReq->downstreamTaskId) < 0) return -1; - if (tEncodeI32(pEncoder, pReq->childId) < 0) return -1; - if (tEncodeI64(pEncoder, pReq->stage) < 0) return -1; - tEndEncode(pEncoder); - return pEncoder->pos; -} - -int32_t tDecodeStreamTaskCheckReq(SDecoder* pDecoder, SStreamTaskCheckReq* pReq) { - if (tStartDecode(pDecoder) < 0) return -1; - if (tDecodeI64(pDecoder, &pReq->reqId) < 0) return -1; - if (tDecodeI64(pDecoder, &pReq->streamId) < 0) return -1; - if (tDecodeI32(pDecoder, &pReq->upstreamNodeId) < 0) return -1; - if (tDecodeI32(pDecoder, &pReq->upstreamTaskId) < 0) return -1; - if (tDecodeI32(pDecoder, &pReq->downstreamNodeId) < 0) return -1; - if (tDecodeI32(pDecoder, &pReq->downstreamTaskId) < 0) return -1; - if (tDecodeI32(pDecoder, &pReq->childId) < 0) return -1; - if (tDecodeI64(pDecoder, &pReq->stage) < 0) return -1; - tEndDecode(pDecoder); - return 0; -} - -int32_t 
tEncodeStreamTaskCheckRsp(SEncoder* pEncoder, const SStreamTaskCheckRsp* pRsp) { - if (tStartEncode(pEncoder) < 0) return -1; - if (tEncodeI64(pEncoder, pRsp->reqId) < 0) return -1; - if (tEncodeI64(pEncoder, pRsp->streamId) < 0) return -1; - if (tEncodeI32(pEncoder, pRsp->upstreamNodeId) < 0) return -1; - if (tEncodeI32(pEncoder, pRsp->upstreamTaskId) < 0) return -1; - if (tEncodeI32(pEncoder, pRsp->downstreamNodeId) < 0) return -1; - if (tEncodeI32(pEncoder, pRsp->downstreamTaskId) < 0) return -1; - if (tEncodeI32(pEncoder, pRsp->childId) < 0) return -1; - if (tEncodeI64(pEncoder, pRsp->oldStage) < 0) return -1; - if (tEncodeI8(pEncoder, pRsp->status) < 0) return -1; - tEndEncode(pEncoder); - return pEncoder->pos; -} - -int32_t tDecodeStreamTaskCheckRsp(SDecoder* pDecoder, SStreamTaskCheckRsp* pRsp) { - if (tStartDecode(pDecoder) < 0) return -1; - if (tDecodeI64(pDecoder, &pRsp->reqId) < 0) return -1; - if (tDecodeI64(pDecoder, &pRsp->streamId) < 0) return -1; - if (tDecodeI32(pDecoder, &pRsp->upstreamNodeId) < 0) return -1; - if (tDecodeI32(pDecoder, &pRsp->upstreamTaskId) < 0) return -1; - if (tDecodeI32(pDecoder, &pRsp->downstreamNodeId) < 0) return -1; - if (tDecodeI32(pDecoder, &pRsp->downstreamTaskId) < 0) return -1; - if (tDecodeI32(pDecoder, &pRsp->childId) < 0) return -1; - if (tDecodeI64(pDecoder, &pRsp->oldStage) < 0) return -1; - if (tDecodeI8(pDecoder, &pRsp->status) < 0) return -1; - tEndDecode(pDecoder); - return 0; -} - -int32_t tEncodeStreamTaskCheckpointReq(SEncoder* pEncoder, const SStreamTaskCheckpointReq* pReq) { - if (tStartEncode(pEncoder) < 0) return -1; - if (tEncodeI64(pEncoder, pReq->streamId) < 0) return -1; - if (tEncodeI32(pEncoder, pReq->taskId) < 0) return -1; - if (tEncodeI32(pEncoder, pReq->nodeId) < 0) return -1; - tEndEncode(pEncoder); - return 0; -} - -int32_t tDecodeStreamTaskCheckpointReq(SDecoder* pDecoder, SStreamTaskCheckpointReq* pReq) { - if (tStartDecode(pDecoder) < 0) return -1; - if (tDecodeI64(pDecoder, &pReq->streamId) < 0) return -1; - if (tDecodeI32(pDecoder, &pReq->taskId) < 0) return -1; - if (tDecodeI32(pDecoder, &pReq->nodeId) < 0) return -1; - tEndDecode(pDecoder); - return 0; -} - void streamTaskSetRangeStreamCalc(SStreamTask* pTask) { SDataRange* pRange = &pTask->dataRange; diff --git a/source/libs/stream/src/streamState.c b/source/libs/stream/src/streamState.c index 4c16e47163..52002b7ea8 100644 --- a/source/libs/stream/src/streamState.c +++ b/source/libs/stream/src/streamState.c @@ -1094,13 +1094,10 @@ _end: int32_t streamStatePutParName(SStreamState* pState, int64_t groupId, const char tbname[TSDB_TABLE_NAME_LEN]) { #ifdef USE_ROCKSDB - if (tSimpleHashGetSize(pState->parNameMap) > MAX_TABLE_NAME_NUM) { - if (tSimpleHashGet(pState->parNameMap, &groupId, sizeof(int64_t)) == NULL) { - streamStatePutParName_rocksdb(pState, groupId, tbname); - } - return TSDB_CODE_SUCCESS; + if (tSimpleHashGet(pState->parNameMap, &groupId, sizeof(int64_t)) == NULL) { + tSimpleHashPut(pState->parNameMap, &groupId, sizeof(int64_t), tbname, TSDB_TABLE_NAME_LEN); + streamStatePutParName_rocksdb(pState, groupId, tbname); } - tSimpleHashPut(pState->parNameMap, &groupId, sizeof(int64_t), tbname, TSDB_TABLE_NAME_LEN); return TSDB_CODE_SUCCESS; #else return tdbTbUpsert(pState->pTdbState->pParNameDb, &groupId, sizeof(int64_t), tbname, TSDB_TABLE_NAME_LEN, @@ -1112,10 +1109,11 @@ int32_t streamStateGetParName(SStreamState* pState, int64_t groupId, void** pVal #ifdef USE_ROCKSDB void* pStr = tSimpleHashGet(pState->parNameMap, &groupId, sizeof(int64_t)); if 
(!pStr) { - if (tSimpleHashGetSize(pState->parNameMap) > MAX_TABLE_NAME_NUM) { - return streamStateGetParName_rocksdb(pState, groupId, pVal); + int32_t code = streamStateGetParName_rocksdb(pState, groupId, pVal); + if (code == TSDB_CODE_SUCCESS) { + tSimpleHashPut(pState->parNameMap, &groupId, sizeof(int64_t), *pVal, TSDB_TABLE_NAME_LEN); } - return TSDB_CODE_FAILED; + return code; } *pVal = taosMemoryCalloc(1, TSDB_TABLE_NAME_LEN); memcpy(*pVal, pStr, TSDB_TABLE_NAME_LEN); diff --git a/source/libs/stream/src/streamTask.c b/source/libs/stream/src/streamTask.c index 2bc09c1c22..9f4e6aaea1 100644 --- a/source/libs/stream/src/streamTask.c +++ b/source/libs/stream/src/streamTask.c @@ -21,8 +21,6 @@ #include "ttimer.h" #include "wal.h" -#define CHECK_NOT_RSP_DURATION 10*1000 // 10 sec - static void streamTaskDestroyUpstreamInfo(SUpstreamInfo* pUpstreamInfo); static void streamTaskUpdateUpstreamInfo(SStreamTask* pTask, int32_t nodeId, const SEpSet* pEpSet, bool* pUpdated); static void streamTaskUpdateDownstreamInfo(SStreamTask* pTask, int32_t nodeId, const SEpSet* pEpSet, bool* pUpdate); @@ -36,15 +34,20 @@ static int32_t addToTaskset(SArray* pArray, SStreamTask* pTask) { static int32_t doUpdateTaskEpset(SStreamTask* pTask, int32_t nodeId, SEpSet* pEpSet, bool* pUpdated) { char buf[512] = {0}; - if (pTask->info.nodeId == nodeId) { // execution task should be moved away - if (!(*pUpdated)) { - *pUpdated = isEpsetEqual(&pTask->info.epSet, pEpSet); - } - - epsetAssign(&pTask->info.epSet, pEpSet); + bool isEqual = isEpsetEqual(&pTask->info.epSet, pEpSet); epsetToStr(pEpSet, buf, tListLen(buf)); - stDebug("s-task:0x%x (vgId:%d) self node epset is updated %s", pTask->id.taskId, nodeId, buf); + + if (!isEqual) { + (*pUpdated) = true; + char tmp[512] = {0}; + epsetToStr(&pTask->info.epSet, tmp, tListLen(tmp)); + + epsetAssign(&pTask->info.epSet, pEpSet); + stDebug("s-task:0x%x (vgId:%d) self node epset is updated %s, old:%s", pTask->id.taskId, nodeId, buf, tmp); + } else { + stDebug("s-task:0x%x (vgId:%d) not updated task epset, since epset identical, %s", pTask->id.taskId, nodeId, buf); + } } // check for the dispatch info and the upstream task info @@ -256,8 +259,8 @@ int32_t tDecodeStreamTask(SDecoder* pDecoder, SStreamTask* pTask) { if (tDecodeI32(pDecoder, &taskId)) return -1; pTask->streamTaskId.taskId = taskId; - if (tDecodeU64(pDecoder, &pTask->dataRange.range.minVer)) return -1; - if (tDecodeU64(pDecoder, &pTask->dataRange.range.maxVer)) return -1; + if (tDecodeU64(pDecoder, (uint64_t*)&pTask->dataRange.range.minVer)) return -1; + if (tDecodeU64(pDecoder, (uint64_t*)&pTask->dataRange.range.maxVer)) return -1; if (tDecodeI64(pDecoder, &pTask->dataRange.window.skey)) return -1; if (tDecodeI64(pDecoder, &pTask->dataRange.window.ekey)) return -1; @@ -437,7 +440,7 @@ void tFreeStreamTask(SStreamTask* pTask) { taosArrayDestroy(pTask->outputInfo.shuffleDispatcher.dbInfo.pVgroupInfos); } - streamTaskCleanCheckInfo(&pTask->taskCheckInfo); + streamTaskCleanupCheckInfo(&pTask->taskCheckInfo); if (pTask->pState) { stDebug("s-task:0x%x start to free task state", taskId); @@ -465,7 +468,7 @@ void tFreeStreamTask(SStreamTask* pTask) { taosMemoryFree(pTask->outputInfo.pTokenBucket); taosThreadMutexDestroy(&pTask->lock); - pTask->outputInfo.pDownstreamUpdateList = taosArrayDestroy(pTask->outputInfo.pDownstreamUpdateList); + pTask->outputInfo.pNodeEpsetUpdateList = taosArrayDestroy(pTask->outputInfo.pNodeEpsetUpdateList); taosMemoryFree(pTask); stDebug("s-task:0x%x free task completed", taskId); @@ -566,8 +569,8 
@@ int32_t streamTaskInit(SStreamTask* pTask, SStreamMeta* pMeta, SMsgCb* pMsgCb, i // 2MiB per second for sink task // 50 times sink operator per second streamTaskInitTokenBucket(pOutputInfo->pTokenBucket, 35, 35, tsSinkDataRate, pTask->id.idStr); - pOutputInfo->pDownstreamUpdateList = taosArrayInit(4, sizeof(SDownstreamTaskEpset)); - if (pOutputInfo->pDownstreamUpdateList == NULL) { + pOutputInfo->pNodeEpsetUpdateList = taosArrayInit(4, sizeof(SDownstreamTaskEpset)); + if (pOutputInfo->pNodeEpsetUpdateList == NULL) { stError("s-task:%s failed to prepare downstreamUpdateList, code:%s", pTask->id.idStr, tstrerror(TSDB_CODE_OUT_OF_MEMORY)); return TSDB_CODE_OUT_OF_MEMORY; } @@ -620,13 +623,21 @@ void streamTaskUpdateUpstreamInfo(SStreamTask* pTask, int32_t nodeId, const SEpS for (int32_t i = 0; i < numOfUpstream; ++i) { SStreamChildEpInfo* pInfo = taosArrayGetP(pTask->upstreamInfo.pList, i); if (pInfo->nodeId == nodeId) { - if (!(*pUpdated)) { - *pUpdated = isEpsetEqual(&pInfo->epSet, pEpSet); + bool equal = isEpsetEqual(&pInfo->epSet, pEpSet); + if (!equal) { + *pUpdated = true; + + char tmp[512] = {0}; + epsetToStr(&pInfo->epSet, tmp, tListLen(tmp)); + + epsetAssign(&pInfo->epSet, pEpSet); + stDebug("s-task:0x%x update the upstreamInfo taskId:0x%x(nodeId:%d) newEpset:%s old:%s", pTask->id.taskId, + pInfo->taskId, nodeId, buf, tmp); + } else { + stDebug("s-task:0x%x not update upstreamInfo, since identical, task:0x%x(nodeId:%d) epset:%s", pTask->id.taskId, + pInfo->taskId, nodeId, buf); } - epsetAssign(&pInfo->epSet, pEpSet); - stDebug("s-task:0x%x update the upstreamInfo taskId:0x%x(nodeId:%d) newEpset:%s", pTask->id.taskId, pInfo->taskId, - nodeId, buf); break; } } @@ -653,7 +664,6 @@ void streamTaskSetFixedDownstreamInfo(SStreamTask* pTask, const SStreamTask* pDo void streamTaskUpdateDownstreamInfo(SStreamTask* pTask, int32_t nodeId, const SEpSet* pEpSet, bool *pUpdated) { char buf[512] = {0}; epsetToStr(pEpSet, buf, tListLen(buf)); - *pUpdated = false; int32_t id = pTask->id.taskId; int8_t type = pTask->outputInfo.type; @@ -661,29 +671,43 @@ void streamTaskUpdateDownstreamInfo(SStreamTask* pTask, int32_t nodeId, const SE if (type == TASK_OUTPUT__SHUFFLE_DISPATCH) { SArray* pVgs = pTask->outputInfo.shuffleDispatcher.dbInfo.pVgroupInfos; - int32_t numOfVgroups = taosArrayGetSize(pVgs); - for (int32_t i = 0; i < numOfVgroups; i++) { + for (int32_t i = 0; i < taosArrayGetSize(pVgs); i++) { SVgroupInfo* pVgInfo = taosArrayGet(pVgs, i); if (pVgInfo->vgId == nodeId) { - if (!(*pUpdated)) { - (*pUpdated) = isEpsetEqual(&pVgInfo->epSet, pEpSet); - } + bool isEqual = isEpsetEqual(&pVgInfo->epSet, pEpSet); + if (!isEqual) { + *pUpdated = true; + char tmp[512] = {0}; + epsetToStr(&pVgInfo->epSet, tmp, tListLen(tmp)); - epsetAssign(&pVgInfo->epSet, pEpSet); - stDebug("s-task:0x%x update dispatch info, task:0x%x(nodeId:%d) newEpset:%s", id, pVgInfo->taskId, nodeId, buf); + epsetAssign(&pVgInfo->epSet, pEpSet); + stDebug("s-task:0x%x update dispatch info, task:0x%x(nodeId:%d) newEpset:%s old:%s", id, pVgInfo->taskId, + nodeId, buf, tmp); + } else { + stDebug("s-task:0x%x not update dispatch info, since identical, task:0x%x(nodeId:%d) epset:%s", id, + pVgInfo->taskId, nodeId, buf); + } break; } } } else if (type == TASK_OUTPUT__FIXED_DISPATCH) { STaskDispatcherFixed* pDispatcher = &pTask->outputInfo.fixedDispatcher; if (pDispatcher->nodeId == nodeId) { - if (!(*pUpdated)) { - *pUpdated = isEpsetEqual(&pDispatcher->epSet, pEpSet); - } + bool equal = isEpsetEqual(&pDispatcher->epSet, pEpSet); + if 
(!equal) { + *pUpdated = true; - epsetAssign(&pDispatcher->epSet, pEpSet); - stDebug("s-task:0x%x update dispatch info, task:0x%x(nodeId:%d) newEpset:%s", id, pDispatcher->taskId, nodeId, buf); + char tmp[512] = {0}; + epsetToStr(&pDispatcher->epSet, tmp, tListLen(tmp)); + + epsetAssign(&pDispatcher->epSet, pEpSet); + stDebug("s-task:0x%x update dispatch info, task:0x%x(nodeId:%d) newEpset:%s old:%s", id, pDispatcher->taskId, + nodeId, buf, tmp); + } else { + stDebug("s-task:0x%x not update dispatch info, since identical, task:0x%x(nodeId:%d) epset:%s", id, + pDispatcher->taskId, nodeId, buf); + } } } } @@ -968,379 +992,3 @@ int32_t streamTaskSendCheckpointReq(SStreamTask* pTask) { tmsgSendReq(&pTask->info.mnodeEpset, &msg); return 0; } - -static int32_t streamTaskInitTaskCheckInfo(STaskCheckInfo* pInfo, STaskOutputInfo* pOutputInfo, int64_t startTs) { - taosArrayClear(pInfo->pList); - - if (pOutputInfo->type == TASK_OUTPUT__FIXED_DISPATCH) { - pInfo->notReadyTasks = 1; - } else if (pOutputInfo->type == TASK_OUTPUT__SHUFFLE_DISPATCH) { - pInfo->notReadyTasks = taosArrayGetSize(pOutputInfo->shuffleDispatcher.dbInfo.pVgroupInfos); - ASSERT(pInfo->notReadyTasks == pOutputInfo->shuffleDispatcher.dbInfo.vgNum); - } - - pInfo->startTs = startTs; - return TSDB_CODE_SUCCESS; -} - -static SDownstreamStatusInfo* findCheckRspStatus(STaskCheckInfo* pInfo, int32_t taskId) { - for (int32_t j = 0; j < taosArrayGetSize(pInfo->pList); ++j) { - SDownstreamStatusInfo* p = taosArrayGet(pInfo->pList, j); - if (p->taskId == taskId) { - return p; - } - } - - return NULL; -} - -int32_t streamTaskAddReqInfo(STaskCheckInfo* pInfo, int64_t reqId, int32_t taskId, const char* id) { - SDownstreamStatusInfo info = {.taskId = taskId, .status = -1, .reqId = reqId, .rspTs = 0}; - - taosThreadMutexLock(&pInfo->checkInfoLock); - - SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId); - if (p != NULL) { - stDebug("s-task:%s check info to task:0x%x already sent", id, taskId); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return TSDB_CODE_SUCCESS; - } - - taosArrayPush(pInfo->pList, &info); - - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return TSDB_CODE_SUCCESS; -} - -int32_t streamTaskUpdateCheckInfo(STaskCheckInfo* pInfo, int32_t taskId, int32_t status, int64_t rspTs, int64_t reqId, - int32_t* pNotReady, const char* id) { - taosThreadMutexLock(&pInfo->checkInfoLock); - - SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId); - if (p != NULL) { - - if (reqId != p->reqId) { - stError("s-task:%s reqId:%" PRIx64 " expected:%" PRIx64 - " expired check-rsp recv from downstream task:0x%x, discarded", - id, reqId, p->reqId, taskId); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return TSDB_CODE_FAILED; - } - - // subtract one not-ready-task, since it is ready now - if ((p->status != TASK_DOWNSTREAM_READY) && (status == TASK_DOWNSTREAM_READY)) { - *pNotReady = atomic_sub_fetch_32(&pInfo->notReadyTasks, 1); - } else { - *pNotReady = pInfo->notReadyTasks; - } - - p->status = status; - p->rspTs = rspTs; - - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return TSDB_CODE_SUCCESS; - } - - taosThreadMutexUnlock(&pInfo->checkInfoLock); - stError("s-task:%s unexpected check rsp msg, invalid downstream task:0x%x, reqId:%" PRIx64 " discarded", id, taskId, - reqId); - return TSDB_CODE_FAILED; -} - -static int32_t streamTaskStartCheckDownstream(STaskCheckInfo* pInfo, const char* id) { - if (pInfo->inCheckProcess == 0) { - pInfo->inCheckProcess = 1; - } else { - ASSERT(pInfo->startTs > 0); - stError("s-task:%s already 
in check procedure, checkTs:%"PRId64", start monitor check rsp failed", id, pInfo->startTs); - return TSDB_CODE_FAILED; - } - - stDebug("s-task:%s set the in-check-procedure flag", id); - return 0; -} - -static int32_t streamTaskCompleteCheckRsp(STaskCheckInfo* pInfo, const char* id) { - if (!pInfo->inCheckProcess) { - stWarn("s-task:%s already not in-check-procedure", id); - } - - int64_t el = (pInfo->startTs != 0) ? (taosGetTimestampMs() - pInfo->startTs) : 0; - stDebug("s-task:%s clear the in-check-procedure flag, not in-check-procedure elapsed time:%" PRId64 " ms", id, el); - - pInfo->startTs = 0; - pInfo->notReadyTasks = 0; - pInfo->inCheckProcess = 0; - pInfo->stopCheckProcess = 0; - taosArrayClear(pInfo->pList); - - return 0; -} - -static void doSendCheckMsg(SStreamTask* pTask, SDownstreamStatusInfo* p) { - SStreamTaskCheckReq req = { - .streamId = pTask->id.streamId, - .upstreamTaskId = pTask->id.taskId, - .upstreamNodeId = pTask->info.nodeId, - .childId = pTask->info.selfChildId, - .stage = pTask->pMeta->stage, - }; - - STaskOutputInfo* pOutputInfo = &pTask->outputInfo; - if (pOutputInfo->type == TASK_OUTPUT__FIXED_DISPATCH) { - req.reqId = p->reqId; - req.downstreamNodeId = pOutputInfo->fixedDispatcher.nodeId; - req.downstreamTaskId = pOutputInfo->fixedDispatcher.taskId; - stDebug("s-task:%s (vgId:%d) stage:%" PRId64 " re-send check downstream task:0x%x(vgId:%d) reqId:0x%" PRIx64, - pTask->id.idStr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, req.reqId); - - streamSendCheckMsg(pTask, &req, pOutputInfo->fixedDispatcher.nodeId, &pOutputInfo->fixedDispatcher.epSet); - } else if (pOutputInfo->type == TASK_OUTPUT__SHUFFLE_DISPATCH) { - SArray* vgInfo = pOutputInfo->shuffleDispatcher.dbInfo.pVgroupInfos; - int32_t numOfVgs = taosArrayGetSize(vgInfo); - - for (int32_t i = 0; i < numOfVgs; i++) { - SVgroupInfo* pVgInfo = taosArrayGet(vgInfo, i); - - if (p->taskId == pVgInfo->taskId) { - req.reqId = p->reqId; - req.downstreamNodeId = pVgInfo->vgId; - req.downstreamTaskId = pVgInfo->taskId; - - stDebug("s-task:%s (vgId:%d) stage:%" PRId64 - " re-send check downstream task:0x%x(vgId:%d) (shuffle), idx:%d reqId:0x%" PRIx64, - pTask->id.idStr, pTask->info.nodeId, req.stage, req.downstreamTaskId, req.downstreamNodeId, i, - p->reqId); - streamSendCheckMsg(pTask, &req, pVgInfo->vgId, &pVgInfo->epSet); - break; - } - } - } else { - ASSERT(0); - } -} - -static void getCheckRspStatus(STaskCheckInfo* pInfo, int64_t el, int32_t* numOfReady, int32_t* numOfFault, - int32_t* numOfNotRsp, SArray* pTimeoutList, SArray* pNotReadyList, const char* id) { - for (int32_t i = 0; i < taosArrayGetSize(pInfo->pList); ++i) { - SDownstreamStatusInfo* p = taosArrayGet(pInfo->pList, i); - if (p->status == TASK_DOWNSTREAM_READY) { - (*numOfReady) += 1; - } else if (p->status == TASK_UPSTREAM_NEW_STAGE || p->status == TASK_DOWNSTREAM_NOT_LEADER) { - stDebug("s-task:%s recv status:NEW_STAGE/NOT_LEADER from downstream, task:0x%x, quit from check downstream", id, - p->taskId); - (*numOfFault) += 1; - } else { // TASK_DOWNSTREAM_NOT_READY - if (p->rspTs == 0) { // not response yet - ASSERT(p->status == -1); - if (el >= CHECK_NOT_RSP_DURATION) { // not receive info for 10 sec. 
- taosArrayPush(pTimeoutList, &p->taskId); - } else { // el < CHECK_NOT_RSP_DURATION - (*numOfNotRsp) += 1; // do nothing and continue waiting for their rsp - } - } else { - taosArrayPush(pNotReadyList, &p->taskId); - } - } - } -} - -static void rspMonitorFn(void* param, void* tmrId) { - SStreamTask* pTask = param; - SStreamTaskState* pStat = streamTaskGetStatus(pTask); - STaskCheckInfo* pInfo = &pTask->taskCheckInfo; - int32_t vgId = pTask->pMeta->vgId; - int64_t now = taosGetTimestampMs(); - int64_t el = now - pInfo->startTs; - ETaskStatus state = pStat->state; - const char* id = pTask->id.idStr; - int32_t numOfReady = 0; - int32_t numOfFault = 0; - int32_t numOfNotRsp = 0; - int32_t numOfNotReady = 0; - int32_t numOfTimeout = 0; - - stDebug("s-task:%s start to do check-downstream-rsp check in tmr", id); - - if (state == TASK_STATUS__STOP) { - int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); - stDebug("s-task:%s status:%s vgId:%d quit from monitor check-rsp tmr, ref:%d", id, pStat->name, vgId, ref); - - taosThreadMutexLock(&pInfo->checkInfoLock); - streamTaskCompleteCheckRsp(pInfo, id); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - - streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, pInfo->startTs, now, false); - if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { - STaskId* pHId = &pTask->hTaskInfo.id; - streamMetaAddTaskLaunchResult(pTask->pMeta, pHId->streamId, pHId->taskId, pInfo->startTs, now, false); - } - return; - } - - if (state == TASK_STATUS__DROPPING || state == TASK_STATUS__READY || state == TASK_STATUS__PAUSE) { - int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); - stDebug("s-task:%s status:%s vgId:%d quit from monitor check-rsp tmr, ref:%d", id, pStat->name, vgId, ref); - - taosThreadMutexLock(&pInfo->checkInfoLock); - streamTaskCompleteCheckRsp(pInfo, id); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return; - } - - taosThreadMutexLock(&pInfo->checkInfoLock); - if (pInfo->notReadyTasks == 0) { - int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); - stDebug("s-task:%s status:%s vgId:%d all downstream ready, quit from monitor rsp tmr, ref:%d", id, pStat->name, - vgId, ref); - - streamTaskCompleteCheckRsp(pInfo, id); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return; - } - - SArray* pNotReadyList = taosArrayInit(4, sizeof(int64_t)); - SArray* pTimeoutList = taosArrayInit(4, sizeof(int64_t)); - - if (pStat->state == TASK_STATUS__UNINIT) { - getCheckRspStatus(pInfo, el, &numOfReady, &numOfFault, &numOfNotRsp, pTimeoutList, pNotReadyList, id); - } else { // unexpected status - stError("s-task:%s unexpected task status:%s during waiting for check rsp", id, pStat->name); - } - - numOfNotReady = (int32_t)taosArrayGetSize(pNotReadyList); - numOfTimeout = (int32_t)taosArrayGetSize(pTimeoutList); - - // fault tasks detected, not try anymore - ASSERT((numOfReady + numOfFault + numOfNotReady + numOfTimeout + numOfNotRsp) == taosArrayGetSize(pInfo->pList)); - if (numOfFault > 0) { - int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); - stDebug( - "s-task:%s status:%s vgId:%d all rsp. 
quit from monitor rsp tmr, since vnode-transfer/leader-change/restart " - "detected, notRsp:%d, notReady:%d, fault:%d, timeout:%d, ready:%d ref:%d", - id, pStat->name, vgId, numOfNotRsp, numOfNotReady, numOfFault, numOfTimeout, numOfReady, ref); - - streamTaskCompleteCheckRsp(pInfo, id); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - - taosArrayDestroy(pNotReadyList); - taosArrayDestroy(pTimeoutList); - return; - } - - // checking of downstream tasks has been stopped by other threads - if (pInfo->stopCheckProcess == 1) { - int32_t ref = atomic_sub_fetch_32(&pTask->status.timerActive, 1); - stDebug( - "s-task:%s status:%s vgId:%d stopped by other threads to check downstream process, notRsp:%d, notReady:%d, " - "fault:%d, timeout:%d, ready:%d ref:%d", - id, pStat->name, vgId, numOfNotRsp, numOfNotReady, numOfFault, numOfTimeout, numOfReady, ref); - - streamTaskCompleteCheckRsp(pInfo, id); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - - // add the not-ready tasks into the final task status result buf, along with related fill-history task if exists. - streamMetaAddTaskLaunchResult(pTask->pMeta, pTask->id.streamId, pTask->id.taskId, pInfo->startTs, now, false); - if (HAS_RELATED_FILLHISTORY_TASK(pTask)) { - STaskId* pHId = &pTask->hTaskInfo.id; - streamMetaAddTaskLaunchResult(pTask->pMeta, pHId->streamId, pHId->taskId, pInfo->startTs, now, false); - } - - taosArrayDestroy(pNotReadyList); - taosArrayDestroy(pTimeoutList); - return; - } - - if (numOfNotReady > 0) { // check to make sure not in recheck timer - ASSERT(pTask->status.downstreamReady == 0); - - // reset the info, and send the check msg to failure downstream again - for (int32_t i = 0; i < numOfNotReady; ++i) { - int32_t taskId = *(int32_t*)taosArrayGet(pNotReadyList, i); - - SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId); - if (p != NULL) { - p->rspTs = 0; - p->status = -1; - doSendCheckMsg(pTask, p); - } - } - - stDebug("s-task:%s %d downstream task(s) not ready, send check msg again", id, numOfNotReady); - } - - if (numOfTimeout > 0) { - pInfo->startTs = now; - ASSERT(pTask->status.downstreamReady == 0); - - for (int32_t i = 0; i < numOfTimeout; ++i) { - int32_t taskId = *(int32_t*)taosArrayGet(pTimeoutList, i); - - SDownstreamStatusInfo* p = findCheckRspStatus(pInfo, taskId); - if (p != NULL) { - ASSERT(p->status == -1 && p->rspTs == 0); - doSendCheckMsg(pTask, p); - } - } - - stDebug("s-task:%s %d downstream tasks timeout, send check msg again, start ts:%" PRId64, id, numOfTimeout, now); - } - - taosTmrReset(rspMonitorFn, CHECK_RSP_INTERVAL, pTask, streamTimer, &pInfo->checkRspTmr); - taosThreadMutexUnlock(&pInfo->checkInfoLock); - - stDebug("s-task:%s continue checking rsp in 300ms, notRsp:%d, notReady:%d, fault:%d, timeout:%d, ready:%d", id, - numOfNotRsp, numOfNotReady, numOfFault, numOfTimeout, numOfReady); - - taosArrayDestroy(pNotReadyList); - taosArrayDestroy(pTimeoutList); -} - -int32_t streamTaskStartMonitorCheckRsp(SStreamTask* pTask) { - STaskCheckInfo* pInfo = &pTask->taskCheckInfo; - - taosThreadMutexLock(&pInfo->checkInfoLock); - int32_t code = streamTaskStartCheckDownstream(pInfo, pTask->id.idStr); - if (code != TSDB_CODE_SUCCESS) { - - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return TSDB_CODE_FAILED; - } - - streamTaskInitTaskCheckInfo(pInfo, &pTask->outputInfo, taosGetTimestampMs()); - - int32_t ref = atomic_add_fetch_32(&pTask->status.timerActive, 1); - stDebug("s-task:%s start check rsp monit, ref:%d ", pTask->id.idStr, ref); - - if (pInfo->checkRspTmr == NULL) { - pInfo->checkRspTmr 
= taosTmrStart(rspMonitorFn, CHECK_RSP_INTERVAL, pTask, streamTimer); - } else { - taosTmrReset(rspMonitorFn, CHECK_RSP_INTERVAL, pTask, streamTimer, pInfo->checkRspTmr); - } - - taosThreadMutexUnlock(&pInfo->checkInfoLock); - return 0; -} - -int32_t streamTaskStopMonitorCheckRsp(STaskCheckInfo* pInfo, const char* id) { - taosThreadMutexLock(&pInfo->checkInfoLock); - streamTaskCompleteCheckRsp(pInfo, id); - - pInfo->stopCheckProcess = 1; - taosThreadMutexUnlock(&pInfo->checkInfoLock); - - stDebug("s-task:%s set stop check rsp mon", id); - return TSDB_CODE_SUCCESS; -} - -void streamTaskCleanCheckInfo(STaskCheckInfo* pInfo) { - ASSERT(pInfo->inCheckProcess == 0); - - pInfo->pList = taosArrayDestroy(pInfo->pList); - if (pInfo->checkRspTmr != NULL) { - /*bool ret = */ taosTmrStop(pInfo->checkRspTmr); - pInfo->checkRspTmr = NULL; - } - - taosThreadMutexDestroy(&pInfo->checkInfoLock); -} \ No newline at end of file diff --git a/source/util/src/tcompression.c b/source/util/src/tcompression.c index da39a2423b..da67b68c1c 100644 --- a/source/util/src/tcompression.c +++ b/source/util/src/tcompression.c @@ -188,7 +188,7 @@ int32_t l2ComressInitImpl_zlib(char *lossyColumns, float fPrecision, double dPre int32_t l2CompressImpl_zlib(const char *const input, const int32_t inputSize, char *const output, int32_t outputSize, const char type, int8_t lvl) { uLongf dstLen = outputSize - 1; - int32_t ret = compress2((Bytef *)(output + 1), (uLongf *)&dstLen, (Bytef *)input, (uLong)inputSize, 9); + int32_t ret = compress2((Bytef *)(output + 1), (uLongf *)&dstLen, (Bytef *)input, (uLong)inputSize, lvl); if (ret == Z_OK) { output[0] = 1; return dstLen + 1; diff --git a/source/util/src/tconfig.c b/source/util/src/tconfig.c index aec1eba684..b1d9403a9d 100644 --- a/source/util/src/tconfig.c +++ b/source/util/src/tconfig.c @@ -27,12 +27,17 @@ #define CFG_NAME_PRINT_LEN 24 #define CFG_SRC_PRINT_LEN 12 +struct SConfig { + ECfgSrcType stype; + SArray *array; + TdThreadMutex lock; +}; + int32_t cfgLoadFromCfgFile(SConfig *pConfig, const char *filepath); -int32_t cfgLoadFromEnvFile(SConfig *pConfig, const char *filepath); +int32_t cfgLoadFromEnvFile(SConfig *pConfig, const char *envFile); int32_t cfgLoadFromEnvVar(SConfig *pConfig); int32_t cfgLoadFromEnvCmd(SConfig *pConfig, const char **envCmd); int32_t cfgLoadFromApollUrl(SConfig *pConfig, const char *url); -int32_t cfgSetItem(SConfig *pConfig, const char *name, const char *value, ECfgSrcType stype); extern char **environ; @@ -50,6 +55,7 @@ SConfig *cfgInit() { return NULL; } + taosThreadMutexInit(&pCfg->lock, NULL); return pCfg; } @@ -87,9 +93,9 @@ static void cfgFreeItem(SConfigItem *pItem) { pItem->dtype == CFG_DTYPE_CHARSET || pItem->dtype == CFG_DTYPE_TIMEZONE) { taosMemoryFreeClear(pItem->str); } + if (pItem->array) { - taosArrayDestroy(pItem->array); - pItem->array = NULL; + pItem->array = taosArrayDestroy(pItem->array); } } @@ -102,37 +108,18 @@ void cfgCleanup(SConfig *pCfg) { taosMemoryFreeClear(pItem->name); } taosArrayDestroy(pCfg->array); + taosThreadMutexDestroy(&pCfg->lock); taosMemoryFree(pCfg); } } int32_t cfgGetSize(SConfig *pCfg) { return taosArrayGetSize(pCfg->array); } -static int32_t cfgCheckAndSetTimezone(SConfigItem *pItem, const char *timezone) { +static int32_t cfgCheckAndSetConf(SConfigItem *pItem, const char *conf) { cfgFreeItem(pItem); - pItem->str = taosStrdup(timezone); - if (pItem->str == NULL) { - terrno = TSDB_CODE_OUT_OF_MEMORY; - return -1; - } + ASSERT(pItem->str == NULL); - return 0; -} - -static int32_t 
cfgCheckAndSetCharset(SConfigItem *pItem, const char *charset) { - cfgFreeItem(pItem); - pItem->str = taosStrdup(charset); - if (pItem->str == NULL) { - terrno = TSDB_CODE_OUT_OF_MEMORY; - return -1; - } - - return 0; -} - -static int32_t cfgCheckAndSetLocale(SConfigItem *pItem, const char *locale) { - cfgFreeItem(pItem); - pItem->str = taosStrdup(locale); + pItem->str = taosStrdup(conf); if (pItem->str == NULL) { terrno = TSDB_CODE_OUT_OF_MEMORY; return -1; @@ -229,7 +216,7 @@ static int32_t cfgSetString(SConfigItem *pItem, const char *value, ECfgSrcType s return -1; } - taosMemoryFree(pItem->str); + taosMemoryFreeClear(pItem->str); pItem->str = tmp; pItem->stype = stype; return 0; @@ -246,20 +233,8 @@ static int32_t cfgSetDir(SConfigItem *pItem, const char *value, ECfgSrcType styp return 0; } -static int32_t cfgSetLocale(SConfigItem *pItem, const char *value, ECfgSrcType stype) { - if (cfgCheckAndSetLocale(pItem, value) != 0) { - terrno = TSDB_CODE_OUT_OF_MEMORY; - uError("cfg:%s, type:%s src:%s value:%s failed to dup since %s", pItem->name, cfgDtypeStr(pItem->dtype), - cfgStypeStr(stype), value, terrstr()); - return -1; - } - - pItem->stype = stype; - return 0; -} - -static int32_t cfgSetCharset(SConfigItem *pItem, const char *value, ECfgSrcType stype) { - if (cfgCheckAndSetCharset(pItem, value) != 0) { +static int32_t doSetConf(SConfigItem *pItem, const char *value, ECfgSrcType stype) { + if (cfgCheckAndSetConf(pItem, value) != 0) { terrno = TSDB_CODE_OUT_OF_MEMORY; uError("cfg:%s, type:%s src:%s value:%s failed to dup since %s", pItem->name, cfgDtypeStr(pItem->dtype), cfgStypeStr(stype), value, terrstr()); @@ -271,18 +246,13 @@ static int32_t cfgSetCharset(SConfigItem *pItem, const char *value, ECfgSrcType } static int32_t cfgSetTimezone(SConfigItem *pItem, const char *value, ECfgSrcType stype) { - if (cfgCheckAndSetTimezone(pItem, value) != 0) { - terrno = TSDB_CODE_OUT_OF_MEMORY; - uError("cfg:%s, type:%s src:%s value:%s failed to dup since %s", pItem->name, cfgDtypeStr(pItem->dtype), - cfgStypeStr(stype), value, terrstr()); - return -1; + int32_t code = doSetConf(pItem, value, stype); + if (code != TSDB_CODE_SUCCESS) { + return code; } - pItem->stype = stype; - // apply new timezone osSetTimezone(value); - - return 0; + return code; } static int32_t cfgSetTfsItem(SConfig *pCfg, const char *name, const char *value, const char *level, const char *primary, @@ -342,40 +312,61 @@ static int32_t cfgUpdateDebugFlagItem(SConfig *pCfg, const char *name, bool rese int32_t cfgSetItem(SConfig *pCfg, const char *name, const char *value, ECfgSrcType stype) { // GRANT_CFG_SET; + int32_t code = 0; SConfigItem *pItem = cfgGetItem(pCfg, name); if (pItem == NULL) { terrno = TSDB_CODE_CFG_NOT_FOUND; return -1; } + taosThreadMutexLock(&pCfg->lock); + switch (pItem->dtype) { - case CFG_DTYPE_BOOL: - return cfgSetBool(pItem, value, stype); - case CFG_DTYPE_INT32: - return cfgSetInt32(pItem, value, stype); - case CFG_DTYPE_INT64: - return cfgSetInt64(pItem, value, stype); + case CFG_DTYPE_BOOL: { + code = cfgSetBool(pItem, value, stype); + break; + } + case CFG_DTYPE_INT32: { + code = cfgSetInt32(pItem, value, stype); + break; + } + case CFG_DTYPE_INT64: { + code = cfgSetInt64(pItem, value, stype); + break; + } case CFG_DTYPE_FLOAT: - case CFG_DTYPE_DOUBLE: - return cfgSetFloat(pItem, value, stype); - case CFG_DTYPE_STRING: - return cfgSetString(pItem, value, stype); - case CFG_DTYPE_DIR: - return cfgSetDir(pItem, value, stype); - case CFG_DTYPE_TIMEZONE: - return cfgSetTimezone(pItem, value, stype); - 
case CFG_DTYPE_CHARSET: - return cfgSetCharset(pItem, value, stype); - case CFG_DTYPE_LOCALE: - return cfgSetLocale(pItem, value, stype); + case CFG_DTYPE_DOUBLE: { + code = cfgSetFloat(pItem, value, stype); + break; + } + case CFG_DTYPE_STRING: { + code = cfgSetString(pItem, value, stype); + break; + } + case CFG_DTYPE_DIR: { + code = cfgSetDir(pItem, value, stype); + break; + } + case CFG_DTYPE_TIMEZONE: { + code = cfgSetTimezone(pItem, value, stype); + break; + } + case CFG_DTYPE_CHARSET: { + code = doSetConf(pItem, value, stype); + break; + } + case CFG_DTYPE_LOCALE: { + code = doSetConf(pItem, value, stype); + break; + } case CFG_DTYPE_NONE: default: + terrno = TSDB_CODE_INVALID_CFG; break; } -_err_out: - terrno = TSDB_CODE_INVALID_CFG; - return -1; + taosThreadMutexUnlock(&pCfg->lock); + return code; } SConfigItem *cfgGetItem(SConfig *pCfg, const char *name) { @@ -388,16 +379,16 @@ SConfigItem *cfgGetItem(SConfig *pCfg, const char *name) { } } - // uError("name:%s, cfg not found", name); terrno = TSDB_CODE_CFG_NOT_FOUND; return NULL; } int32_t cfgCheckRangeForDynUpdate(SConfig *pCfg, const char *name, const char *pVal, bool isServer) { ECfgDynType dynType = isServer ? CFG_DYN_SERVER : CFG_DYN_CLIENT; + SConfigItem *pItem = cfgGetItem(pCfg, name); if (!pItem || (pItem->dynScope & dynType) == 0) { - uError("failed to config:%s, not support", name); + uError("failed to config:%s, not support update this config", name); terrno = TSDB_CODE_INVALID_CFG; return -1; } @@ -459,7 +450,7 @@ static int32_t cfgAddItem(SConfig *pCfg, SConfigItem *pItem, const char *name) { return -1; } - int size = pCfg->array->size; + int32_t size = taosArrayGetSize(pCfg->array); for (int32_t i = 0; i < size; ++i) { SConfigItem *existItem = taosArrayGet(pCfg->array, i); if (existItem != NULL && strcmp(existItem->name, pItem->name) == 0) { @@ -559,7 +550,7 @@ int32_t cfgAddDir(SConfig *pCfg, const char *name, const char *defaultVal, int8_ int32_t cfgAddLocale(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope) { SConfigItem item = {.dtype = CFG_DTYPE_LOCALE, .scope = scope, .dynScope = dynScope}; - if (cfgCheckAndSetLocale(&item, defaultVal) != 0) { + if (cfgCheckAndSetConf(&item, defaultVal) != 0) { return -1; } @@ -568,7 +559,7 @@ int32_t cfgAddLocale(SConfig *pCfg, const char *name, const char *defaultVal, in int32_t cfgAddCharset(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope) { SConfigItem item = {.dtype = CFG_DTYPE_CHARSET, .scope = scope, .dynScope = dynScope}; - if (cfgCheckAndSetCharset(&item, defaultVal) != 0) { + if (cfgCheckAndSetConf(&item, defaultVal) != 0) { return -1; } @@ -577,7 +568,7 @@ int32_t cfgAddCharset(SConfig *pCfg, const char *name, const char *defaultVal, i int32_t cfgAddTimezone(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope, int8_t dynScope) { SConfigItem item = {.dtype = CFG_DTYPE_TIMEZONE, .scope = scope, .dynScope = dynScope}; - if (cfgCheckAndSetTimezone(&item, defaultVal) != 0) { + if (cfgCheckAndSetConf(&item, defaultVal) != 0) { return -1; } @@ -1356,3 +1347,35 @@ int32_t cfgGetApollUrl(const char **envCmd, const char *envFile, char *apolloUrl uInfo("fail get apollo url from cmd env file"); return -1; } + +struct SConfigIter { + int32_t index; + SConfig *pConf; +}; + +SConfigIter *cfgCreateIter(SConfig *pConf) { + SConfigIter* pIter = taosMemoryCalloc(1, sizeof(SConfigIter)); + if (pIter == NULL) { + terrno = TSDB_CODE_OUT_OF_MEMORY; + return NULL; + } + + pIter->pConf = pConf; + 
return pIter; +} + +SConfigItem *cfgNextIter(SConfigIter* pIter) { + if (pIter->index < cfgGetSize(pIter->pConf)) { + return taosArrayGet(pIter->pConf->array, pIter->index++); + } + + return NULL; +} + +void cfgDestroyIter(SConfigIter *pIter) { + if (pIter == NULL) { + return; + } + + taosMemoryFree(pIter); +} \ No newline at end of file diff --git a/source/util/test/cfgTest.cpp b/source/util/test/cfgTest.cpp index e10ffe7c9b..92422b6a80 100644 --- a/source/util/test/cfgTest.cpp +++ b/source/util/test/cfgTest.cpp @@ -63,9 +63,11 @@ TEST_F(CfgTest, 02_Basic) { EXPECT_EQ(cfgGetSize(pConfig), 6); - int32_t size = taosArrayGetSize(pConfig->array); - for (int32_t i = 0; i < size; ++i) { - SConfigItem *pItem = (SConfigItem *)taosArrayGet(pConfig->array, i); + int32_t size = cfgGetSize(pConfig); + + SConfigItem* pItem = NULL; + SConfigIter* pIter = cfgCreateIter(pConfig); + while((pItem = cfgNextIter(pIter)) != NULL) { switch (pItem->dtype) { case CFG_DTYPE_BOOL: printf("index:%d, cfg:%s value:%d\n", size, pItem->name, pItem->bval); break; @@ -90,9 +92,12 @@ TEST_F(CfgTest, 02_Basic) { break; } } + + cfgDestroyIter(pIter); + EXPECT_EQ(cfgGetSize(pConfig), 6); - SConfigItem *pItem = cfgGetItem(pConfig, "test_bool"); + pItem = cfgGetItem(pConfig, "test_bool"); EXPECT_EQ(pItem->stype, CFG_STYPE_DEFAULT); EXPECT_EQ(pItem->dtype, CFG_DTYPE_BOOL); EXPECT_STREQ(pItem->name, "test_bool"); diff --git a/tests/army/community/cluster/snapshot.py b/tests/army/community/cluster/snapshot.py index eef650cc77..d18dbf2796 100644 --- a/tests/army/community/cluster/snapshot.py +++ b/tests/army/community/cluster/snapshot.py @@ -30,7 +30,7 @@ from frame.srvCtl import * class TDTestCase(TBase): updatecfgDict = { - "countAlwaysReturnValue" : "0", + "countAlwaysReturnValue" : "1", "lossyColumns" : "float,double", "fPrecision" : "0.000000001", "dPrecision" : "0.00000000000000001", @@ -106,7 +106,7 @@ class TDTestCase(TBase): # check count always return value sql = f"select count(*) from {self.db}.ta" tdSql.query(sql) - tdSql.checkRows(0) # countAlwaysReturnValue is false + tdSql.checkRows(1) # countAlwaysReturnValue is true # run def run(self): diff --git a/tests/army/community/cmdline/fullopt.py b/tests/army/community/cmdline/fullopt.py index c03ba428a1..e61501d7b8 100644 --- a/tests/army/community/cmdline/fullopt.py +++ b/tests/army/community/cmdline/fullopt.py @@ -49,7 +49,7 @@ class TDTestCase(TBase): def checkQueryOK(self, rets): if rets[-2][:9] != "Query OK,": - tdLog.exit(f"check taos -s return unecpect: {rets}") + tdLog.exit(f"check taos -s return unexpected: {rets}") def doTaos(self): tdLog.info(f"check taos command options...") diff --git a/tests/army/community/storage/compressBasic.py b/tests/army/community/storage/compressBasic.py index c0975c6d75..db52c9baf8 100644 --- a/tests/army/community/storage/compressBasic.py +++ b/tests/army/community/storage/compressBasic.py @@ -118,18 +118,10 @@ class TDTestCase(TBase): sql = f"describe {self.db}.{self.stb}" tdSql.query(sql) - ''' # see AutoGen.types defEncodes = [ "delta-i","delta-i","simple8b","simple8b","simple8b","simple8b","simple8b","simple8b", "simple8b","simple8b","delta-d","delta-d","bit-packing", - "disabled","disabled","disabled","disabled","disabled"] - ''' - - # pass-ci have error - defEncodes = [ "delta-i","delta-i","simple8b","simple8b","simple8b","simple8b","simple8b","simple8b", "simple8b","simple8b","delta-d","delta-d","bit-packing", - "disabled","disabled","disabled","disabled","simple8b"] - + "disabled","disabled","disabled","disabled"] count = 
tdSql.getRows() for i in range(count): diff --git a/tests/army/enterprise/s3/s3Basic.json b/tests/army/enterprise/s3/s3Basic.json index 4a2f4496f9..ef1585d2ba 100644 --- a/tests/army/enterprise/s3/s3Basic.json +++ b/tests/army/enterprise/s3/s3Basic.json @@ -32,7 +32,7 @@ { "name": "stb", "child_table_exists": "no", - "childtable_count": 10, + "childtable_count": 6, "insert_rows": 2000000, "childtable_prefix": "d", "insert_mode": "taosc", diff --git a/tests/army/enterprise/s3/s3Basic.py b/tests/army/enterprise/s3/s3Basic.py index b4b18e355e..8045a3f308 100644 --- a/tests/army/enterprise/s3/s3Basic.py +++ b/tests/army/enterprise/s3/s3Basic.py @@ -38,6 +38,10 @@ s3EndPoint http://192.168.1.52:9000 s3AccessKey 'zOgllR6bSnw2Ah3mCNel:cdO7oXAu3Cqdb1rUdevFgJMi0LtRwCXdWKQx4bhX' s3BucketName ci-bucket s3UploadDelaySec 60 + +for test: +"s3AccessKey" : "fGPPyYjzytw05nw44ViA:vK1VcwxgSOykicx6hk8fL1x15uEtyDSFU3w4hTaZ" +"s3BucketName": "test-bucket" ''' @@ -63,7 +67,7 @@ class TDTestCase(TBase): tdSql.execute(f"use {self.db}") # come from s3_basic.json - self.childtable_count = 10 + self.childtable_count = 6 self.insert_rows = 2000000 self.timestamp_step = 1000 @@ -85,7 +89,7 @@ class TDTestCase(TBase): fileName = cols[8] #print(f" filesize={fileSize} fileName={fileName} line={line}") if fileSize > maxFileSize: - tdLog.info(f"error, {fileSize} over max size({maxFileSize})\n") + tdLog.info(f"error, {fileSize} over max size({maxFileSize}) {fileName}\n") overCnt += 1 else: tdLog.info(f"{fileName}({fileSize}) check size passed.") @@ -99,7 +103,7 @@ class TDTestCase(TBase): loop = 0 rets = [] overCnt = 0 - while loop < 180: + while loop < 100: time.sleep(3) # check upload to s3 @@ -335,7 +339,7 @@ class TDTestCase(TBase): self.snapshotAgg() self.doAction() self.checkAggCorrect() - self.checkInsertCorrect(difCnt=self.childtable_count*999999) + self.checkInsertCorrect(difCnt=self.childtable_count*1499999) self.checkDelete() self.doAction() diff --git a/tests/army/enterprise/s3/s3Basic1.json b/tests/army/enterprise/s3/s3Basic1.json index ef7a169f77..fb95c14e98 100644 --- a/tests/army/enterprise/s3/s3Basic1.json +++ b/tests/army/enterprise/s3/s3Basic1.json @@ -32,7 +32,7 @@ { "name": "stb", "child_table_exists": "yes", - "childtable_count": 10, + "childtable_count": 6, "insert_rows": 1000000, "childtable_prefix": "d", "insert_mode": "taosc", diff --git a/tests/army/frame/caseBase.py b/tests/army/frame/caseBase.py index 21265d2fea..491e432df7 100644 --- a/tests/army/frame/caseBase.py +++ b/tests/army/frame/caseBase.py @@ -140,7 +140,7 @@ class TBase: # check step sql = f"select count(*) from (select diff(ts) as dif from {self.stb} partition by tbname order by ts desc) where dif != {self.timestamp_step}" - #tdSql.checkAgg(sql, difCnt) + tdSql.checkAgg(sql, difCnt) # save agg result def snapshotAgg(self): diff --git a/tests/system-test/0-others/compatibility.py b/tests/system-test/0-others/compatibility.py index 4c2fe652b8..71adf6eaac 100644 --- a/tests/system-test/0-others/compatibility.py +++ b/tests/system-test/0-others/compatibility.py @@ -185,8 +185,8 @@ class TDTestCase: # baseVersion = "3.0.1.8" tdLog.printNoPrefix(f"==========step1:prepare and check data in old version-{BASEVERSION}") - tdLog.info(f" LD_LIBRARY_PATH=/usr/lib taosBenchmark -t {tableNumbers} -n {recordNumbers1} -y ") - os.system(f"LD_LIBRARY_PATH=/usr/lib taosBenchmark -t {tableNumbers} -n {recordNumbers1} -y ") + tdLog.info(f" LD_LIBRARY_PATH=/usr/lib taosBenchmark -t {tableNumbers} -n {recordNumbers1} -v 1 -y ") + 
os.system(f"LD_LIBRARY_PATH=/usr/lib taosBenchmark -t {tableNumbers} -n {recordNumbers1} -v 1 -y ") os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'flush database test '") os.system("LD_LIBRARY_PATH=/usr/lib taosBenchmark -f 0-others/com_alltypedata.json -y") os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'flush database curdb '") @@ -196,49 +196,81 @@ class TDTestCase: os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'select min(ui) from curdb.meters '") os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'select max(bi) from curdb.meters '") - # os.system(f"LD_LIBRARY_PATH=/usr/lib taos -s 'use test;create stream current_stream into current_stream_output_stb as select _wstart as `start`, _wend as wend, max(current) as max_current from meters where voltage <= 220 interval (5s);' ") - # os.system('LD_LIBRARY_PATH=/usr/lib taos -s "use test;create stream power_stream into power_stream_output_stb as select ts, concat_ws(\\".\\", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;" ') - # os.system('LD_LIBRARY_PATH=/usr/lib taos -s "use test;show streams;" ') + os.system(f"LD_LIBRARY_PATH=/usr/lib taos -s 'use test;create stream current_stream into current_stream_output_stb as select _wstart as `start`, _wend as wend, max(current) as max_current from meters where voltage <= 220 interval (5s);' ") + os.system('LD_LIBRARY_PATH=/usr/lib taos -s "use test;create stream power_stream trigger at_once into power_stream_output_stb as select ts, concat_ws(\\".\\", location, tbname) as meter_location, current*voltage*cos(phase) as active_power, current*voltage*sin(phase) as reactive_power from meters partition by tbname;" ') + os.system('LD_LIBRARY_PATH=/usr/lib taos -s "use test;show streams;" ') + self.alter_string_in_file("0-others/tmqBasic.json", "/etc/taos/", cPath) # os.system("LD_LIBRARY_PATH=/usr/lib taosBenchmark -f 0-others/tmqBasic.json -y ") - os.system('LD_LIBRARY_PATH=/usr/lib taos -s "create topic if not exists tmq_test_topic as select current,voltage,phase from test.meters where voltage <= 106 and current <= 5;" ') + # create db/stb/select topic + + db_topic = "db_test_topic" + os.system(f'LD_LIBRARY_PATH=/usr/lib taos -s "create topic if not exists {db_topic} with meta as database test" ') + + stable_topic = "stable_test_meters_topic" + os.system(f'LD_LIBRARY_PATH=/usr/lib taos -s "create topic if not exists {stable_topic} as stable test.meters where tbname like \\"d3\\";" ') + + select_topic = "select_test_meters_topic" + os.system(f'LD_LIBRARY_PATH=/usr/lib taos -s "create topic if not exists {select_topic} as select current,voltage,phase from test.meters where voltage >= 170;" ') + os.system('LD_LIBRARY_PATH=/usr/lib taos -s "use test;show topics;" ') os.system(f" /usr/bin/taosadapter --version " ) consumer_dict = { "group.id": "g1", + "td.connect.websocket.scheme": "ws", "td.connect.user": "root", "td.connect.pass": "taosdata", "auto.offset.reset": "earliest", + "enable.auto.commit": "false", } - consumer = taosws.Consumer(conf={"group.id": "local", "td.connect.websocket.scheme": "ws"}) + consumer = taosws.Consumer(consumer_dict) try: - consumer.subscribe(["tmq_test_topic"]) + consumer.subscribe([select_topic]) except TmqError: tdLog.exit(f"subscribe error") - + first_consumer_rows = 0 while True: message = consumer.poll(timeout=1.0) if message: - print("message") - id = message.vgroup() - topic = message.topic() - database = message.database() - for block in message: - nrows = block.nrows() - 
ncols = block.ncols() - for row in block: - print(row) - values = block.fetchall() - print(nrows, ncols) - - consumer.commit(message) + first_consumer_rows += block.nrows() else: - print("break") + tdLog.notice("message is null and break") break + consumer.commit(message) + tdLog.debug(f"topic:{select_topic} ,first consumer rows is {first_consumer_rows} in old version") + break consumer.close() + # consumer_dict2 = { + # "group.id": "g2", + # "td.connect.websocket.scheme": "ws", + # "td.connect.user": "root", + # "td.connect.pass": "taosdata", + # "auto.offset.reset": "earliest", + # "enable.auto.commit": "false", + # } + # consumer = taosws.Consumer(consumer_dict2) + # try: + # consumer.subscribe([db_topic,stable_topic]) + # except TmqError: + # tdLog.exit(f"subscribe error") + # first_consumer_rows = 0 + # while True: + # message = consumer.poll(timeout=1.0) + # if message: + # for block in message: + # first_consumer_rows += block.nrows() + # else: + # tdLog.notice("message is null and break") + # break + # consumer.commit(message) + # tdLog.debug(f"topic:{select_topic} ,first consumer rows is {first_consumer_rows} in old version") + # break + + + tdLog.info(" LD_LIBRARY_PATH=/usr/lib taosBenchmark -f 0-others/compa4096.json -y ") os.system("LD_LIBRARY_PATH=/usr/lib taosBenchmark -f 0-others/compa4096.json -y") os.system("LD_LIBRARY_PATH=/usr/lib taos -s 'flush database db4096 '") @@ -279,11 +311,10 @@ class TDTestCase: tdLog.printNoPrefix(f"==========step3:prepare and check data in new version-{nowServerVersion}") tdsql.query(f"select count(*) from {stb}") tdsql.checkData(0,0,tableNumbers*recordNumbers1) - # tdsql.query("show streams;") - # os.system(f"taosBenchmark -t {tableNumbers} -n {recordNumbers2} -y ") - # tdsql.query("show streams;") - # tdsql.query(f"select count(*) from {stb}") - # tdsql.checkData(0,0,tableNumbers*recordNumbers2) + tdsql.query("show streams;") + tdsql.checkRows(2) + + # checkout db4096 tdsql.query("select count(*) from db4096.stb0") @@ -334,7 +365,7 @@ class TDTestCase: # check stream tdsql.query("show streams;") - tdsql.checkRows(0) + tdsql.checkRows(2) #check TS-3131 tdsql.query("select *,tbname from d0.almlog where mcid='m0103';") @@ -348,39 +379,48 @@ class TDTestCase: print("The unordered list is the same as the ordered list.") else: tdLog.exit("The unordered list is not the same as the ordered list.") - tdsql.execute("insert into test.d80 values (now+1s, 11, 103, 0.21);") - tdsql.execute("insert into test.d9 values (now+5s, 4.3, 104, 0.4);") # check tmq + tdsql.execute("insert into test.d80 values (now+1s, 11, 190, 0.21);") + tdsql.execute("insert into test.d9 values (now+5s, 4.3, 104, 0.4);") conn = taos.connect() consumer = Consumer( { - "group.id": "tg75", - "client.id": "124", + "group.id": "g1", "td.connect.user": "root", "td.connect.pass": "taosdata", "enable.auto.commit": "true", "experimental.snapshot.enable": "true", } ) - consumer.subscribe(["tmq_test_topic"]) - + consumer.subscribe([select_topic]) + consumer_rows = 0 while True: - res = consumer.poll(10) - if not res: + message = consumer.poll(timeout=1.0) + tdLog.info(f" null {message}") + if message: + for block in message: + consumer_rows += block.nrows() + tdLog.info(f"consumer rows is {consumer_rows}") + else: + print("consumer has completed and break") break - err = res.error() - if err is not None: - raise err - val = res.value() - - for block in val: - print(block.fetchall()) + consumer.close() + tdsql.query("select current,voltage,phase from test.meters where voltage >= 170;") + 
all_rows = tdsql.queryRows + if consumer_rows < all_rows - first_consumer_rows : + tdLog.exit(f"consumer rows is {consumer_rows}, less than {all_rows - first_consumer_rows}") tdsql.query("show topics;") - tdsql.checkRows(1) + tdsql.checkRows(3) + tdsql.execute(f"drop topic {select_topic};",queryTimes=10) + tdsql.execute(f"drop topic {db_topic};",queryTimes=10) + tdsql.execute(f"drop topic {stable_topic};",queryTimes=10) + os.system(f" LD_LIBRARY_PATH={bPath}/build/lib {bPath}/build/bin/taosBenchmark -t {tableNumbers} -n {recordNumbers2} -y ") + tdsql.query(f"select count(*) from {stb}") + tdsql.checkData(0,0,tableNumbers*recordNumbers2) def stop(self): tdSql.close() diff --git a/tests/system-test/0-others/multilevel.py b/tests/system-test/0-others/multilevel.py index def2c3152b..7b0ebb4b78 100644 --- a/tests/system-test/0-others/multilevel.py +++ b/tests/system-test/0-others/multilevel.py @@ -182,10 +182,10 @@ class TDTestCase: else: checkFiles("%s/*/*" % i, 0) - def more_than_16_disks(self): - tdLog.info("============== more_than_16_disks test ===============") + def more_than_128_disks(self): + tdLog.info("============== more_than_128_disks test ===============") cfg={} - for i in range(17): + for i in range(129): if i == 0 : datadir = '/mnt/data%d 0 1' % (i+1) else: @@ -272,7 +272,7 @@ class TDTestCase: self.dir_permission_denied() self.file_distribution_same_level() self.three_level_basic() - self.more_than_16_disks() + self.more_than_128_disks() self.trim_database() self.missing_middle_level() diff --git a/tests/system-test/7-tmq/tmq_primary_key.py b/tests/system-test/7-tmq/tmq_primary_key.py index 8f62dc8783..7d0d3da4fc 100644 --- a/tests/system-test/7-tmq/tmq_primary_key.py +++ b/tests/system-test/7-tmq/tmq_primary_key.py @@ -62,7 +62,7 @@ class TDTestCase: while True: res = consumer.poll(1) if not res: - break + continue val = res.value() if val is None: continue @@ -173,7 +173,7 @@ class TDTestCase: while True: res = consumer.poll(1) if not res: - break + continue val = res.value() if val is None: continue @@ -282,7 +282,7 @@ class TDTestCase: while True: res = consumer.poll(1) if not res: - break + continue val = res.value() if val is None: continue @@ -391,7 +391,7 @@ class TDTestCase: while True: res = consumer.poll(1) if not res: - break + continue val = res.value() if val is None: continue diff --git a/utils/test/c/sml_test.c b/utils/test/c/sml_test.c index 9e04cfb75b..5e9d82e71e 100644 --- a/utils/test/c/sml_test.c +++ b/utils/test/c/sml_test.c @@ -1789,6 +1789,112 @@ int sml_td24559_Test() { return code; } +int sml_td29691_Test() { + TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0); + + TAOS_RES *pRes = taos_query(taos, "drop database if exists td29691"); + taos_free_result(pRes); + + pRes = taos_query(taos, "create database if not exists td29691"); + taos_free_result(pRes); + + // check column name duplication + const char *sql[] = { + "vbin,t1=1 f1=283i32,f2=3,f2=b\"hello\" 1632299372000", + }; + pRes = taos_query(taos, "use td29691"); + taos_free_result(pRes); + pRes = taos_schemaless_insert(taos, (char **)sql, sizeof(sql) / sizeof(sql[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + int code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == TSDB_CODE_PAR_DUPLICATED_COLUMN); + taos_free_result(pRes); + + // check tag name duplication + const char *sql1[] = { + "vbin,t1=1,t1=2 f2=b\"hello\" 1632299372000", + }; + pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / 
sizeof(sql1[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == TSDB_CODE_PAR_DUPLICATED_COLUMN); + taos_free_result(pRes); + + + // check column tag name duplication + const char *sql2[] = { + "vbin,t1=1,t2=2 t2=L\"ewe\",f2=b\"hello\" 1632299372000", + }; + pRes = taos_schemaless_insert(taos, (char **)sql2, sizeof(sql2) / sizeof(sql2[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == TSDB_CODE_PAR_DUPLICATED_COLUMN); + taos_free_result(pRes); + + // insert data + const char *sql3[] = { + "vbin,t1=1,t2=2 f1=1,f2=b\"hello\" 1632299372000", + }; + pRes = taos_schemaless_insert(taos, (char **)sql3, sizeof(sql3) / sizeof(sql3[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == 0); + taos_free_result(pRes); + + //check column tag name duplication when update + const char *sql4[] = { + "vbin,t1=1,t2=2,f1=ewe f1=1,f2=b\"hello\" 1632299372001", + }; + pRes = taos_schemaless_insert(taos, (char **)sql4, sizeof(sql4) / sizeof(sql4[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == TSDB_CODE_PAR_DUPLICATED_COLUMN); + taos_free_result(pRes); + + //check column tag name duplication when update + const char *sql5[] = { + "vbin,t1=1,t2=2 f1=1,f2=b\"hello\",t1=3 1632299372002", + }; + pRes = taos_schemaless_insert(taos, (char **)sql5, sizeof(sql5) / sizeof(sql5[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == TSDB_CODE_PAR_DUPLICATED_COLUMN); + taos_free_result(pRes); + + //check column tag name duplication when update + const char *sql7[] = { + "vbin,t1=1,t2=2,f1=ewe f2=b\"hello\" 1632299372003", + }; + pRes = taos_schemaless_insert(taos, (char **)sql7, sizeof(sql7) / sizeof(sql7[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == TSDB_CODE_PAR_DUPLICATED_COLUMN); + taos_free_result(pRes); + + //check column tag name duplication when update + const char *sql6[] = { + "vbin,t1=1 t2=2,f1=1,f2=b\"hello\" 1632299372004", + }; + pRes = taos_schemaless_insert(taos, (char **)sql6, sizeof(sql6) / sizeof(sql6[0]), TSDB_SML_LINE_PROTOCOL, + TSDB_SML_TIMESTAMP_MILLI_SECONDS); + code = taos_errno(pRes); + printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); + ASSERT(code == TSDB_CODE_PAR_DUPLICATED_COLUMN); + taos_free_result(pRes); + + taos_close(taos); + + return code; +} + + int sml_td18789_Test() { TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0); @@ -1808,11 +1914,11 @@ int sml_td18789_Test() { pRes = taos_schemaless_insert(taos, (char **)sql, sizeof(sql) / sizeof(sql[0]), TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_MILLI_SECONDS); - int code = taos_errno(pRes); printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes)); taos_free_result(pRes); + TAOS_ROW row = NULL; pRes = taos_query(taos, "select *,tbname from vbin order by _ts"); int rowIndex = 0; @@ -1952,6 +2058,8 @@ int main(int argc, char *argv[]) { } int ret = 0; + ret = sml_td29691_Test(); + ASSERT(ret); ret = 
sml_td29373_Test(); ASSERT(ret); ret = sml_td24559_Test();
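
Note on the new config iterator in source/util/src/tconfig.c: because `SConfig` is now defined privately inside `tconfig.c`, callers can no longer index `pCfg->array` directly; the `cfgCreateIter`/`cfgNextIter`/`cfgDestroyIter` functions added in this patch are the replacement, and the rewritten loop in `cfgTest.cpp` is their first user. The sketch below illustrates the intended pattern only; it assumes the iterator declarations, `SConfigItem`, and the `CFG_DTYPE_*` enums are exposed through the project's `tconfig.h` header (not part of this diff), and the helper name `dumpAllConfigItems` is invented for the example.

```c
#include <stdio.h>
#include "tconfig.h"  // assumed header exposing SConfig, SConfigItem, SConfigIter and the iterator API

// Walk every registered config item through the opaque iterator instead of
// reaching into pCfg->array, which is no longer visible outside tconfig.c.
static void dumpAllConfigItems(SConfig *pCfg) {
  SConfigIter *pIter = cfgCreateIter(pCfg);
  if (pIter == NULL) {
    return;  // cfgCreateIter sets terrno to TSDB_CODE_OUT_OF_MEMORY on failure
  }

  SConfigItem *pItem = NULL;
  while ((pItem = cfgNextIter(pIter)) != NULL) {  // assignment, not '=='
    if (pItem->dtype == CFG_DTYPE_BOOL) {
      printf("cfg:%s value:%d\n", pItem->name, pItem->bval);
    } else {
      printf("cfg:%s dtype:%d\n", pItem->name, pItem->dtype);
    }
  }

  cfgDestroyIter(pIter);  // also safe to call with NULL
}
```

The iterator holds only a borrowed pointer to the config plus an index, so it stays valid as long as the `SConfig` itself does; pair every `cfgCreateIter` with `cfgDestroyIter` to avoid leaking the small iterator allocation.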