diff --git a/Jenkinsfile2 b/Jenkinsfile2 index 14d3888e16..88806222a0 100644 --- a/Jenkinsfile2 +++ b/Jenkinsfile2 @@ -451,8 +451,8 @@ pipeline { stage('run test') { when { - allOf { - not { expression { file_no_doc_changed == '' }} + expression { + file_no_doc_changed != '' && env.CHANGE_TARGET != 'docs-cloud' } } parallel { diff --git a/docs/en/14-reference/03-taos-sql/10-function.md b/docs/en/14-reference/03-taos-sql/10-function.md index f6c1ef24d0..3852783c10 100644 --- a/docs/en/14-reference/03-taos-sql/10-function.md +++ b/docs/en/14-reference/03-taos-sql/10-function.md @@ -422,7 +422,7 @@ CAST(expr AS type_name) TO_ISO8601(expr [, timezone]) ``` -**Description**: The ISO8601 date/time format converted from a UNIX timestamp, plus the timezone. You can specify any time zone with the timezone parameter. If you do not enter this parameter, the time zone on the client is used. +**Description**: The ISO8601 date/time format converted from a timestamp, plus the timezone. You can specify any time zone with the timezone parameter. If you do not enter this parameter, the time zone on the client is used. 
**Return value type**: VARCHAR @@ -466,7 +466,7 @@ return_timestamp: { } ``` -**Description**: UNIX timestamp converted from a string of date/time format +**Description**: timestamp converted from a string of date/time format **Return value type**: BIGINT, TIMESTAMP diff --git a/docs/zh/06-advanced/02-cache.md b/docs/zh/06-advanced/02-cache.md index 065adbf50a..875452205b 100644 --- a/docs/zh/06-advanced/02-cache.md +++ b/docs/zh/06-advanced/02-cache.md @@ -1,68 +1,44 @@ --- -sidebar_label: 数据缓存 -title: 数据缓存 +sidebar_label: 读缓存 +title: 读缓存 toc_max_heading_level: 4 --- -在工业互联网和物联网大数据应用场景中,时序数据库的性能表现尤为关键。这类应用程序不仅要求数据的实时写入能力,还需求能够迅速获取设备的最新状态或对最新数据进行实时计算。通常,大数据平台会通过部署 Redis 或类似的缓存技术来满足这些需求。然而,这种做法会增加系统的复杂性和运营成本。 +在物联网(IoT)和工业互联网(IIoT)大数据应用场景中,实时数据的价值往往远超历史数据。企业不仅需要数据处理系统具备高效的实时写入能力,更需要能快速获取设备的最新状态,或者对最新数据进行实时计算和分析。无论是工业设备的状态监控、车联网中的车辆位置追踪,还是智能仪表的实时读数,当前值都是业务运行中不可或缺的核心数据。这些数据直接关系到生产安全、运营效率以及用户体验。 -为了解决这一问题,TDengine 采用了针对性的缓存优化策略。通过精心设计的缓存机制,TDengine 实现了数据的实时高效写入和快速查询,从而有效降低整个集群的复杂性和运营成本。这种优化不仅提升了性能,还为用户带来了更简洁、易用的解决方案,使他们能够更专注于核心业务的发展。 +例如,在工业生产中,生产线设备的当前运行状态至关重要。操作员需要实时监控温度、压力、转速等关键指标,一旦设备出现异常,这些数据必须即时呈现,以便迅速调整工艺参数,避免停产或更大的损失。在车联网领域,以滴滴为例,车辆的实时位置数据是滴滴平台优化派单策略、提升运营效率的关键,确保每位乘客快速上车并享受更高质量的出行体验。 -## 写缓存 +同时,看板系统和智能仪表作为现场操作和用户端的窗口,也需要实时数据支撑。无论是工厂管理者通过看板获取的实时生产指标,还是家庭用户随时查询智能水表、电表的用量,实时性不仅影响到运营和决策效率,更直接关系到用户对服务的满意程度。 -TDengine 采用了一种创新的时间驱动缓存管理策略,亦称为写驱动的缓存管理机制。这一策略与传统的读驱动的缓存模式有所不同,其核心思想是将最新写入的数据优先保存在缓存中。当缓存容量达到预设的临界值时,系统会将最早存储的数据批量写入硬盘,从而实现缓存与硬盘之间的动态平衡。 +## 传统缓存方案的局限性 -在物联网数据应用中,用户往往最关注最近产生的数据,即设备的当前状态。TDengine 充分利用了这一业务特性,将最近到达的当前状态数据优先存储在缓存中,以便用户能够快速获取所需信息。 +为了满足这些高频实时查询需求,许多企业选择将 Redis 等缓存技术集成到大数据平台中,通过在数据库和应用之间添加一层缓存来提升查询性能。然而,这种方法也带来了不少问题: +- 系统复杂性增加:需要额外部署和维护缓存集群,对系统架构提出了更高的要求。 +- 运营成本上升:需要额外的硬件资源来支撑缓存,增加了维护和管理的开销。 +- 一致性问题:缓存和数据库之间的数据同步需要额外的机制来保障,否则可能出现数据不一致的情况。 -为了实现数据的分布式存储和高可用性,TDengine 引入了虚拟节点(vnode)的概念。每个 vnode 可以拥有多达 3 个副本,这些副本共同组成一个 vnode group,简称 vgroup。在创建数据库时,用户需要确定每个 vnode 的写入缓存大小,以确保数据的合理分配和高效存储。 +## TDengine 的解决方案:内置读缓存 -创建数据库时的两个关键参数 `vgroups` 和 `buffer` 
分别决定了数据库中的数据由多少个 vgroup 进行处理,以及为每个 vnode 分配多少写入缓存。通过合理配置这两个 -参数,用户可以根据实际需求调整数据库的性能和存储容量,从而实现最佳的性能和成本效益。 +为了解决这些问题,TDengine 针对物联网和工业互联网的高频实时查询场景,设计并实现了读缓存机制。这一机制能够自动将每张表的最后一条记录缓存到内存中,从而在不引入第三方缓存技术的情况下,直接满足用户对当前值的实时查询需求。 -例 如, 下面的 SQL 创建了包含 10 个 vgroup,每个 vnode 占 用 256MB 内存的数据库。 -```sql -CREATE DATABASE POWER VGROUPS 10 BUFFER 256 CACHEMODEL 'NONE' PAGES 128 PAGESIZE 16; -``` + +TDengine 采用时间驱动的缓存管理策略,将最新数据优先存储在缓存中,查询时无需访问硬盘即可快速返回结果。当缓存容量达到设定上限时,系统会批量将最早的数据写入硬盘,既提升了查询效率,也有效减少了硬盘的写入负担,延长硬件使用寿命。 -缓存越大越好,但超过一定阈值后再增加缓存对写入性能提升并无帮助。 +用户可通过设置 cachemodel 参数,自定义缓存模式,包括缓存最新一行数据、每列最近的非 NULL 值,或同时缓存行和列的数据。这种灵活设计在物联网场景中尤为重要,使设备状态的实时查询更加高效精准。 -## 读缓存 +这种读缓存机制的内置化设计显著降低了查询延迟,避免了引入 Redis 等外部系统的复杂性和运维成本。同时,减少了频繁查询对存储系统的压力,大幅提升系统的整体吞吐能力,确保在高并发场景下依然稳定高效运行。通过读缓存,TDengine 为用户提供了一种更轻量化的实时数据处理方案,不仅优化了查询性能,还降低了整体运维成本,为物联网和工业互联网用户提供强有力的技术支持。 -在创建数据库时,用户可以选择是否启用缓存机制以存储该数据库中每张子表的最新数据。这一缓存机制由数据库创建参数 cachemodel 进行控制。参数 cachemodel 具有如 -下 4 种情况: -- none: 不缓存 -- last_row: 缓存子表最近一行数据,这将显著改善 last_row 函数的性能 -- last_value: 缓存子表每一列最近的非 NULL 值,这将显著改善无特殊影响(比如 WHERE, ORDER BY, GROUP BY, INTERVAL)时的 last 函数的性能 -- both: 同时缓存最近的行和列,即等同于上述 cachemodel 值为 last_row 和 last_value 的行为同时生效 +## TDengine 的读缓存配置 + +在创建数据库时,用户可以选择是否启用缓存机制以存储该数据库中每张子表的最新数据。这一缓存机制由数据库创建参数 cachemodel 进行控制。参数 cachemodel 具有如下 4 种情况: +- none:不缓存 +- last_row:缓存子表最近一行数据,这将显著改善 last_row 函数的性能 +- last_value:缓存子表每一列最近的非 NULL 值,这将显著改善无特殊影响(比如 WHERE、ORDER BY、GROUP BY、INTERVAL)时的 last 函数的性能 +- both:同时缓存最近的行和列,即等同于上述 cachemodel 值为 last_row 和 last_value 的行为同时生效 当使用数据库读缓存时,可以使用参数 cachesize 来配置每个 vnode 的内存大小。 -- cachesize:表示每个 vnode 中用于缓存子表最近数据的内存大小。默认为 1 ,范围是[1, 65536],单位是 MB。需要根据机器内存合理配置。 +- cachesize:表示每个 vnode 中用于缓存子表最近数据的内存大小。默认为 1,范围是 [1,65536],单位是 MB。需要根据机器内存合理配置。 -## 元数据缓存 - -为了提升查询和写入操作的效率,每个 vnode 都配备了缓存机制,用于存储其曾经获取过的元数据。这一元数据缓存的大小由创建数据库时的两个参数 pages 和 pagesize 共同决定。其中,pagesize 参数的单位是 KB,用于指定每个缓存页的大小。如下 SQL 会为数据库 power 的每个 vnode 创建 128 个 page、每个 page 16KB 的元数据缓存 - -```sql -CREATE DATABASE POWER PAGES 128 PAGESIZE 16; -``` - -## 文件系统缓存 - -TDengine
采用 WAL 技术作为基本的数据可靠性保障手段。WAL 是一种先进的数据保护机制,旨在确保在发生故障时能够迅速恢复数据。其核心原理在于,在数据实际写入数据存储层之前,先将其变更记录到一个日志文件中。这样一来,即便集群遭遇崩溃或其他故障,也能确保数据安全无损。 - -TDengine 利用这些日志文件实现故障前的状态恢复。在写入 WAL 的过程中,数据是以顺序追加的方式写入硬盘文件的。因此,文件系统缓存在此过程中发挥着关键作用,对写入性能产生显著影响。为了确保数据真正落盘,系统会调用 fsync 函数,该函数负责将文件系统缓存中的数据强制写入硬盘。 - -数据库参数 wal_level 和 wal_fsync_period 共同决定了 WAL 的保存行为。。 -- wal_level:此参数控制 WAL 的保存级别。级别 1 表示仅将数据写入 WAL,但不立即执行 fsync 函数;级别 2 则表示在写入 WAL 的同时执行 fsync 函数。默认情况下,wal_level 设为 1。虽然执行 fsync 函数可以提高数据的持久性,但相应地也会降低写入性能。 -- wal_fsync_period:当 wal_level 设置为 2 时,这个参数控制执行 fsync 的频率。设置为 0 表示每次写入后立即执行 fsync,这可以确保数据的安全性,但可能会牺牲一些性能。当设置为大于 0 的数值时,表示 fsync 周期,默认为 3000,范围是[1, 180000],单位毫秒。 - -```sql -CREATE DATABASE POWER WAL_LEVEL 2 WAL_FSYNC_PERIOD 3000; -``` - -在创建数据库时可以选择不同的参数类型,来选择性能优先或者可靠性优先。 -- 1: 写 WAL 但不执行 fsync ,新写入 WAL 的数据保存在文件系统缓存中但并未写入磁盘,这种方式性能优先 -- 2: 写 WAL 且执行 fsync,新写入 WAL 的数据被立即同步到磁盘上,可靠性更高 +关于创建数据库的具体参数和操作说明,请参考[创建数据库](../../reference/taos-sql/database/)。 ## 实时数据查询的缓存实践 diff --git a/docs/zh/14-reference/03-taos-sql/10-function.md b/docs/zh/14-reference/03-taos-sql/10-function.md index ae256a4ac0..2f4b739447 100644 --- a/docs/zh/14-reference/03-taos-sql/10-function.md +++ b/docs/zh/14-reference/03-taos-sql/10-function.md @@ -1065,7 +1065,7 @@ CAST(expr AS type_name) TO_ISO8601(expr [, timezone]) ``` -**功能说明**:将 UNIX 时间戳转换成为 ISO8601 标准的日期时间格式,并附加时区信息。timezone 参数允许用户为输出结果指定附带任意时区信息。如果 timezone 参数省略,输出结果则附带当前客户端的系统时区信息。 +**功能说明**:将时间戳转换成为 ISO8601 标准的日期时间格式,并附加时区信息。timezone 参数允许用户为输出结果指定附带任意时区信息。如果 timezone 参数省略,输出结果则附带当前客户端的系统时区信息。 **返回结果数据类型**:VARCHAR 类型。 @@ -1109,7 +1109,7 @@ return_timestamp: { } ``` -**功能说明**:将日期时间格式的字符串转换成为 UNIX 时间戳。 +**功能说明**:将日期时间格式的字符串转换成为时间戳。 **返回结果数据类型**:BIGINT, TIMESTAMP。 @@ -1257,8 +1257,8 @@ TIMEDIFF(expr1, expr2 [, time_unit]) **返回结果类型**:BIGINT。 **适用数据类型**: -- `expr1`:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。 -- `expr2`:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。 +- `expr1`:表示时间戳的 BIGINT, TIMESTAMP 类型,或符合 ISO8601/RFC3339
标准的日期时间格式的 VARCHAR, NCHAR 类型。 +- `expr2`:表示时间戳的 BIGINT, TIMESTAMP 类型,或符合 ISO8601/RFC3339 标准的日期时间格式的 VARCHAR, NCHAR 类型。 - `time_unit`:见使用说明。 **嵌套子查询支持**:适用于内层查询和外层查询。 @@ -1301,7 +1301,7 @@ use_current_timezone: { **返回结果数据类型**:TIMESTAMP。 -**应用字段**:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。 +**应用字段**:表示时间戳的 BIGINT, TIMESTAMP 类型,或符合 ISO8601/RFC3339 标准的日期时间格式的 VARCHAR, NCHAR 类型。 **适用于**:表和超级表。 @@ -1364,7 +1364,7 @@ WEEK(expr [, mode]) **返回结果类型**:BIGINT。 **适用数据类型**: -- `expr`:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。 +- `expr`:表示时间戳的 BIGINT, TIMESTAMP 类型,或符合 ISO8601/RFC3339 标准的日期时间格式的 VARCHAR, NCHAR 类型。 - `mode`:0 - 7 之间的整数。 **嵌套子查询支持**:适用于内层查询和外层查询。 @@ -1424,7 +1424,7 @@ WEEKOFYEAR(expr) **返回结果类型**:BIGINT。 -**适用数据类型**:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。 +**适用数据类型**:表示时间戳的 BIGINT, TIMESTAMP 类型,或符合 ISO8601/RFC3339 标准的日期时间格式的 VARCHAR, NCHAR 类型。 **嵌套子查询支持**:适用于内层查询和外层查询。 @@ -1451,7 +1451,7 @@ WEEKDAY(expr) **返回结果类型**:BIGINT。 -**适用数据类型**:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。 +**适用数据类型**:表示时间戳的 BIGINT, TIMESTAMP 类型,或符合 ISO8601/RFC3339 标准的日期时间格式的 VARCHAR, NCHAR 类型。 **嵌套子查询支持**:适用于内层查询和外层查询。 @@ -1478,7 +1478,7 @@ DAYOFWEEK(expr) **返回结果类型**:BIGINT。 -**适用数据类型**:表示 UNIX 时间戳的 BIGINT, TIMESTAMP 类型,或符合日期时间格式的 VARCHAR, NCHAR 类型。 +**适用数据类型**:表示时间戳的 BIGINT, TIMESTAMP 类型,或符合 ISO8601/RFC3339 标准的日期时间格式的 VARCHAR, NCHAR 类型。 **嵌套子查询支持**:适用于内层查询和外层查询。 diff --git a/docs/zh/26-tdinternal/10-cache.md b/docs/zh/26-tdinternal/10-cache.md new file mode 100644 index 0000000000..698f4ee87a --- /dev/null +++ b/docs/zh/26-tdinternal/10-cache.md @@ -0,0 +1,62 @@ +--- +sidebar_label: 数据缓存 +title: 数据缓存 +toc_max_heading_level: 4 +--- +在现代物联网(IoT)和工业互联网(IIoT)应用中,数据的高效管理对系统性能和用户体验至关重要。为了应对高并发环境下的实时读写需求,TDengine 设计了一套完整的缓存机制,包括写缓存、读缓存、元数据缓存和文件系统缓存。这些缓存机制紧密结合,既能优化数据查询的响应速度,又能提高数据写入的效率,同时保障数据的可靠性和系统的高可用性。通过灵活配置缓存参数,TDengine 为用户提供了性能与成本之间的最佳平衡。 + +## 写缓存 + +TDengine
采用了一种创新的时间驱动缓存管理策略,亦称为写驱动的缓存管理机制。这一策略与传统的读驱动的缓存模式有所不同,其核心思想是将最新写入的数据优先保存在缓存中。当缓存容量达到预设的临界值时,系统会将最早存储的数据批量写入硬盘,从而实现缓存与硬盘之间的动态平衡。 + +在物联网数据应用中,用户往往最关注最近产生的数据,即设备的当前状态。TDengine 充分利用了这一业务特性,将最近到达的当前状态数据优先存储在缓存中,以便用户能够快速获取所需信息。 + +为了实现数据的分布式存储和高可用性,TDengine 引入了虚拟节点(vnode)的概念。每个 vnode 可以拥有多达 3 个副本,这些副本共同组成一个 vnode group,简称 vgroup。在创建数据库时,用户需要确定每个 vnode 的写入缓存大小,以确保数据的合理分配和高效存储。 + +创建数据库时的两个关键参数 `vgroups` 和 `buffer` 分别决定了数据库中的数据由多少个 vgroup 进行处理,以及为每个 vnode 分配多少写入缓存。通过合理配置这两个 +参数,用户可以根据实际需求调整数据库的性能和存储容量,从而实现最佳的性能和成本效益。 + +例如,下面的 SQL 创建了包含 10 个 vgroup,每个 vnode 占用 256MB 内存的数据库。 +```sql +CREATE DATABASE POWER VGROUPS 10 BUFFER 256 CACHEMODEL 'NONE' PAGES 128 PAGESIZE 16; +``` + +缓存越大越好,但超过一定阈值后再增加缓存对写入性能提升并无帮助。 + +## 读缓存 + +TDengine 的读缓存机制专为高频实时查询场景设计,尤其适用于物联网和工业互联网等需要实时掌握设备状态的业务场景。在这些场景中,用户往往最关心最新的数据,如设备的当前读数或状态。 + +通过设置 cachemodel 参数,TDengine 用户可以灵活选择适合的缓存模式,包括缓存最新一行数据、每列最近的非 NULL 值,或同时缓存行和列的数据。这种灵活性使 TDengine 能根据具体业务需求提供精准优化,在物联网场景下尤为突出,助力用户快速访问设备的最新状态。 + +这种设计不仅降低了查询的响应延迟,还能有效缓解存储系统的 I/O 压力。在高并发场景下,读缓存能够帮助系统维持更高的吞吐量,确保查询性能的稳定性。借助 TDengine 读缓存,用户无需再集成如 Redis 一类的外部缓存系统,避免了系统架构的复杂化,显著降低运维和部署成本。 + +此外,TDengine 的读缓存机制还能够根据实际业务场景灵活调整。在数据访问热点集中在最新记录的场景中,这种内置缓存能够显著提高用户体验,让关键数据的获取更加快速高效。相比传统缓存方案,这种无缝集成的缓存策略不仅简化了开发流程,还为用户提供了更高的性能保障。 + +关于 TDengine 读缓存的更多详细内容请参考[读缓存](../../advanced/cache/)。 + +## 元数据缓存 + +为了提升查询和写入操作的效率,每个 vnode 都配备了缓存机制,用于存储其曾经获取过的元数据。这一元数据缓存的大小由创建数据库时的两个参数 pages 和 pagesize 共同决定。其中,pagesize 参数的单位是 KB,用于指定每个缓存页的大小。如下 SQL 会为数据库 power 的每个 vnode 创建 128 个 page、每个 page 16KB 的元数据缓存。 + +```sql +CREATE DATABASE POWER PAGES 128 PAGESIZE 16; +``` + +## 文件系统缓存 + +TDengine 采用 WAL 技术作为基本的数据可靠性保障手段。WAL 是一种先进的数据保护机制,旨在确保在发生故障时能够迅速恢复数据。其核心原理在于,在数据实际写入数据存储层之前,先将其变更记录到一个日志文件中。这样一来,即便集群遭遇崩溃或其他故障,也能确保数据安全无损。 + +TDengine 利用这些日志文件实现故障前的状态恢复。在写入 WAL 的过程中,数据是以顺序追加的方式写入硬盘文件的。因此,文件系统缓存在此过程中发挥着关键作用,对写入性能产生显著影响。为了确保数据真正落盘,系统会调用 fsync 函数,该函数负责将文件系统缓存中的数据强制写入硬盘。 + +数据库参数 wal_level 和 wal_fsync_period 共同决定了 WAL 的保存行为。 +- wal_level:此参数控制 WAL 的保存级别。级别 1 表示仅将数据写入 WAL,但不立即执行 fsync 函数;级别 2 则表示在写入 WAL
的同时执行 fsync 函数。默认情况下,wal_level 设为 1。虽然执行 fsync 函数可以提高数据的持久性,但相应地也会降低写入性能。 +- wal_fsync_period:当 wal_level 设置为 2 时,这个参数控制执行 fsync 的频率。设置为 0 则表示每次写入后立即执行 fsync,这可以确保数据的安全性,但可能会牺牲一些性能。当设置为大于 0 的数值时,则表示 fsync 周期,默认为 3000,范围是[1, 180000],单位毫秒。 + +```sql +CREATE DATABASE POWER WAL_LEVEL 2 WAL_FSYNC_PERIOD 3000; +``` + +在创建数据库时,用户可以根据需求选择不同的参数设置,以在性能和可靠性之间找到最佳平衡: +- 性能优先:将数据写入 WAL,但不立即执行 fsync 操作,此时新写入的数据仅保存在文件系统缓存中,尚未同步到磁盘。这种配置能够显著提高写入性能。 +- 可靠性优先:将数据写入 WAL 的同时执行 fsync 操作,将数据立即同步到磁盘,确保数据持久化,可靠性更高。 diff --git a/include/common/tanalytics.h b/include/common/tanalytics.h index 85eb963129..d0af84ecfb 100644 --- a/include/common/tanalytics.h +++ b/include/common/tanalytics.h @@ -39,14 +39,14 @@ typedef struct { } SAnalyticsUrl; typedef enum { - ANAL_BUF_TYPE_JSON = 0, - ANAL_BUF_TYPE_JSON_COL = 1, - ANAL_BUF_TYPE_OTHERS, + ANALYTICS_BUF_TYPE_JSON = 0, + ANALYTICS_BUF_TYPE_JSON_COL = 1, + ANALYTICS_BUF_TYPE_OTHERS, } EAnalBufType; typedef enum { - ANAL_HTTP_TYPE_GET = 0, - ANAL_HTTP_TYPE_POST, + ANALYTICS_HTTP_TYPE_GET = 0, + ANALYTICS_HTTP_TYPE_POST, } EAnalHttpType; typedef struct { @@ -61,11 +61,11 @@ typedef struct { char fileName[TSDB_FILENAME_LEN]; int32_t numOfCols; SAnalyticsColBuf *pCols; -} SAnalBuf; +} SAnalyticBuf; int32_t taosAnalyticsInit(); void taosAnalyticsCleanup(); -SJson *taosAnalSendReqRetJson(const char *url, EAnalHttpType type, SAnalBuf *pBuf); +SJson *taosAnalSendReqRetJson(const char *url, EAnalHttpType type, SAnalyticBuf *pBuf); int32_t taosAnalGetAlgoUrl(const char *algoName, EAnalAlgoType type, char *url, int32_t urlLen); bool taosAnalGetOptStr(const char *option, const char *optName, char *optValue, int32_t optMaxLen); @@ -73,18 +73,18 @@ bool taosAnalGetOptInt(const char *option, const char *optName, int64_t *optV int64_t taosAnalGetVersion(); void taosAnalUpdate(int64_t newVer, SHashObj *pHash); -int32_t tsosAnalBufOpen(SAnalBuf *pBuf, int32_t numOfCols); -int32_t taosAnalBufWriteOptStr(SAnalBuf *pBuf, const char *optName, const char *optVal); 
-int32_t taosAnalBufWriteOptInt(SAnalBuf *pBuf, const char *optName, int64_t optVal); -int32_t taosAnalBufWriteOptFloat(SAnalBuf *pBuf, const char *optName, float optVal); -int32_t taosAnalBufWriteColMeta(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName); -int32_t taosAnalBufWriteDataBegin(SAnalBuf *pBuf); -int32_t taosAnalBufWriteColBegin(SAnalBuf *pBuf, int32_t colIndex); -int32_t taosAnalBufWriteColData(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue); -int32_t taosAnalBufWriteColEnd(SAnalBuf *pBuf, int32_t colIndex); -int32_t taosAnalBufWriteDataEnd(SAnalBuf *pBuf); -int32_t taosAnalBufClose(SAnalBuf *pBuf); -void taosAnalBufDestroy(SAnalBuf *pBuf); +int32_t tsosAnalBufOpen(SAnalyticBuf *pBuf, int32_t numOfCols); +int32_t taosAnalBufWriteOptStr(SAnalyticBuf *pBuf, const char *optName, const char *optVal); +int32_t taosAnalBufWriteOptInt(SAnalyticBuf *pBuf, const char *optName, int64_t optVal); +int32_t taosAnalBufWriteOptFloat(SAnalyticBuf *pBuf, const char *optName, float optVal); +int32_t taosAnalBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName); +int32_t taosAnalBufWriteDataBegin(SAnalyticBuf *pBuf); +int32_t taosAnalBufWriteColBegin(SAnalyticBuf *pBuf, int32_t colIndex); +int32_t taosAnalBufWriteColData(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue); +int32_t taosAnalBufWriteColEnd(SAnalyticBuf *pBuf, int32_t colIndex); +int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf); +int32_t taosAnalBufClose(SAnalyticBuf *pBuf); +void taosAnalBufDestroy(SAnalyticBuf *pBuf); const char *taosAnalAlgoStr(EAnalAlgoType algoType); EAnalAlgoType taosAnalAlgoInt(const char *algoName); diff --git a/include/libs/nodes/cmdnodes.h b/include/libs/nodes/cmdnodes.h index 0b617c7ce3..867f8c8efc 100644 --- a/include/libs/nodes/cmdnodes.h +++ b/include/libs/nodes/cmdnodes.h @@ -322,7 +322,7 @@ typedef struct SAlterDnodeStmt { typedef struct { ENodeType type; - char 
url[TSDB_ANAL_ANODE_URL_LEN + 3]; + char url[TSDB_ANALYTIC_ANODE_URL_LEN + 3]; } SCreateAnodeStmt; typedef struct { diff --git a/include/libs/nodes/plannodes.h b/include/libs/nodes/plannodes.h index 48852e5552..89bc27a1fa 100644 --- a/include/libs/nodes/plannodes.h +++ b/include/libs/nodes/plannodes.h @@ -334,7 +334,7 @@ typedef struct SWindowLogicNode { int64_t windowSliding; SNodeList* pTsmaSubplans; SNode* pAnomalyExpr; - char anomalyOpt[TSDB_ANAL_ALGO_OPTION_LEN]; + char anomalyOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN]; } SWindowLogicNode; typedef struct SFillLogicNode { @@ -740,7 +740,7 @@ typedef SCountWinodwPhysiNode SStreamCountWinodwPhysiNode; typedef struct SAnomalyWindowPhysiNode { SWindowPhysiNode window; SNode* pAnomalyKey; - char anomalyOpt[TSDB_ANAL_ALGO_OPTION_LEN]; + char anomalyOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN]; } SAnomalyWindowPhysiNode; typedef struct SSortPhysiNode { diff --git a/include/libs/nodes/querynodes.h b/include/libs/nodes/querynodes.h index 763882ab3a..7af74a347a 100644 --- a/include/libs/nodes/querynodes.h +++ b/include/libs/nodes/querynodes.h @@ -351,7 +351,7 @@ typedef struct SAnomalyWindowNode { ENodeType type; // QUERY_NODE_ANOMALY_WINDOW SNode* pCol; // timestamp primary key SNode* pExpr; - char anomalyOpt[TSDB_ANAL_ALGO_OPTION_LEN]; + char anomalyOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN]; } SAnomalyWindowNode; typedef enum EFillMode { diff --git a/include/util/taoserror.h b/include/util/taoserror.h index e33af33d0e..6cedaeeef1 100644 --- a/include/util/taoserror.h +++ b/include/util/taoserror.h @@ -491,13 +491,14 @@ int32_t taosGetErrSize(); #define TSDB_CODE_MND_ANODE_TOO_MANY_ALGO_TYPE TAOS_DEF_ERROR_CODE(0, 0x0438) // analysis -#define TSDB_CODE_ANAL_URL_RSP_IS_NULL TAOS_DEF_ERROR_CODE(0, 0x0440) -#define TSDB_CODE_ANAL_URL_CANT_ACCESS TAOS_DEF_ERROR_CODE(0, 0x0441) -#define TSDB_CODE_ANAL_ALGO_NOT_FOUND TAOS_DEF_ERROR_CODE(0, 0x0442) -#define TSDB_CODE_ANAL_ALGO_NOT_LOAD TAOS_DEF_ERROR_CODE(0, 0x0443) -#define 
TSDB_CODE_ANAL_BUF_INVALID_TYPE TAOS_DEF_ERROR_CODE(0, 0x0444) -#define TSDB_CODE_ANAL_ANODE_RETURN_ERROR TAOS_DEF_ERROR_CODE(0, 0x0445) -#define TSDB_CODE_ANAL_ANODE_TOO_MANY_ROWS TAOS_DEF_ERROR_CODE(0, 0x0446) +#define TSDB_CODE_ANA_URL_RSP_IS_NULL TAOS_DEF_ERROR_CODE(0, 0x0440) +#define TSDB_CODE_ANA_URL_CANT_ACCESS TAOS_DEF_ERROR_CODE(0, 0x0441) +#define TSDB_CODE_ANA_ALGO_NOT_FOUND TAOS_DEF_ERROR_CODE(0, 0x0442) +#define TSDB_CODE_ANA_ALGO_NOT_LOAD TAOS_DEF_ERROR_CODE(0, 0x0443) +#define TSDB_CODE_ANA_BUF_INVALID_TYPE TAOS_DEF_ERROR_CODE(0, 0x0444) +#define TSDB_CODE_ANA_ANODE_RETURN_ERROR TAOS_DEF_ERROR_CODE(0, 0x0445) +#define TSDB_CODE_ANA_ANODE_TOO_MANY_ROWS TAOS_DEF_ERROR_CODE(0, 0x0446) +#define TSDB_CODE_ANA_WN_DATA TAOS_DEF_ERROR_CODE(0, 0x0447) // mnode-sma #define TSDB_CODE_MND_SMA_ALREADY_EXIST TAOS_DEF_ERROR_CODE(0, 0x0480) diff --git a/include/util/tdef.h b/include/util/tdef.h index 0e2f7ed8a6..c69d4f8f19 100644 --- a/include/util/tdef.h +++ b/include/util/tdef.h @@ -335,12 +335,13 @@ typedef enum ELogicConditionType { #define TSDB_SLOW_QUERY_SQL_LEN 512 #define TSDB_SHOW_SUBQUERY_LEN 1000 #define TSDB_LOG_VAR_LEN 32 -#define TSDB_ANAL_ANODE_URL_LEN 128 -#define TSDB_ANAL_ALGO_NAME_LEN 64 -#define TSDB_ANAL_ALGO_TYPE_LEN 24 -#define TSDB_ANAL_ALGO_KEY_LEN (TSDB_ANAL_ALGO_NAME_LEN + 9) -#define TSDB_ANAL_ALGO_URL_LEN (TSDB_ANAL_ANODE_URL_LEN + TSDB_ANAL_ALGO_TYPE_LEN + 1) -#define TSDB_ANAL_ALGO_OPTION_LEN 256 + +#define TSDB_ANALYTIC_ANODE_URL_LEN 128 +#define TSDB_ANALYTIC_ALGO_NAME_LEN 64 +#define TSDB_ANALYTIC_ALGO_TYPE_LEN 24 +#define TSDB_ANALYTIC_ALGO_KEY_LEN (TSDB_ANALYTIC_ALGO_NAME_LEN + 9) +#define TSDB_ANALYTIC_ALGO_URL_LEN (TSDB_ANALYTIC_ANODE_URL_LEN + TSDB_ANALYTIC_ALGO_TYPE_LEN + 1) +#define TSDB_ANALYTIC_ALGO_OPTION_LEN 256 #define TSDB_MAX_EP_NUM 10 diff --git a/source/common/src/msg/tmsg.c b/source/common/src/msg/tmsg.c index 23b8c499b8..2e997218ac 100644 --- a/source/common/src/msg/tmsg.c +++ b/source/common/src/msg/tmsg.c @@ 
-2169,7 +2169,7 @@ int32_t tSerializeRetrieveAnalAlgoRsp(void *buf, int32_t bufLen, SRetrieveAnalAl SAnalyticsUrl *pUrl = pIter; size_t nameLen = 0; const char *name = taosHashGetKey(pIter, &nameLen); - if (nameLen > 0 && nameLen <= TSDB_ANAL_ALGO_KEY_LEN && pUrl->urlLen > 0) { + if (nameLen > 0 && nameLen <= TSDB_ANALYTIC_ALGO_KEY_LEN && pUrl->urlLen > 0) { numOfAlgos++; } pIter = taosHashIterate(pRsp->hash, pIter); @@ -2224,7 +2224,7 @@ int32_t tDeserializeRetrieveAnalAlgoRsp(void *buf, int32_t bufLen, SRetrieveAnal int32_t numOfAlgos = 0; int32_t nameLen; int32_t type; - char name[TSDB_ANAL_ALGO_KEY_LEN]; + char name[TSDB_ANALYTIC_ALGO_KEY_LEN]; SAnalyticsUrl url = {0}; TAOS_CHECK_EXIT(tStartDecode(&decoder)); @@ -2233,7 +2233,7 @@ int32_t tDeserializeRetrieveAnalAlgoRsp(void *buf, int32_t bufLen, SRetrieveAnal for (int32_t f = 0; f < numOfAlgos; ++f) { TAOS_CHECK_EXIT(tDecodeI32(&decoder, &nameLen)); - if (nameLen > 0 && nameLen <= TSDB_ANAL_ALGO_NAME_LEN) { + if (nameLen > 0 && nameLen <= TSDB_ANALYTIC_ALGO_NAME_LEN) { TAOS_CHECK_EXIT(tDecodeCStrTo(&decoder, name)); } diff --git a/source/common/src/systable.c b/source/common/src/systable.c index 12b789f14e..bfe82aa7ae 100644 --- a/source/common/src/systable.c +++ b/source/common/src/systable.c @@ -402,7 +402,7 @@ static const SSysDbTableSchema userCompactsDetailSchema[] = { static const SSysDbTableSchema anodesSchema[] = { {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, - {.name = "url", .bytes = TSDB_ANAL_ANODE_URL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "url", .bytes = TSDB_ANALYTIC_ANODE_URL_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, {.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true}, {.name = "update_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = 
true}, @@ -410,8 +410,8 @@ static const SSysDbTableSchema anodesSchema[] = { static const SSysDbTableSchema anodesFullSchema[] = { {.name = "id", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false}, - {.name = "type", .bytes = TSDB_ANAL_ALGO_TYPE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, - {.name = "algo", .bytes = TSDB_ANAL_ALGO_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "type", .bytes = TSDB_ANALYTIC_ALGO_TYPE_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, + {.name = "algo", .bytes = TSDB_ANALYTIC_ALGO_NAME_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, }; static const SSysDbTableSchema tsmaSchema[] = { diff --git a/source/dnode/mnode/impl/src/mndAnode.c b/source/dnode/mnode/impl/src/mndAnode.c index 87bfe9f7af..c64208600a 100644 --- a/source/dnode/mnode/impl/src/mndAnode.c +++ b/source/dnode/mnode/impl/src/mndAnode.c @@ -309,7 +309,7 @@ static int32_t mndCreateAnode(SMnode *pMnode, SRpcMsg *pReq, SMCreateAnodeReq *p anodeObj.updateTime = anodeObj.createdTime; anodeObj.version = 0; anodeObj.urlLen = pCreate->urlLen; - if (anodeObj.urlLen > TSDB_ANAL_ANODE_URL_LEN) { + if (anodeObj.urlLen > TSDB_ANALYTIC_ANODE_URL_LEN) { code = TSDB_CODE_MND_ANODE_TOO_LONG_URL; goto _OVER; } @@ -491,23 +491,24 @@ static int32_t mndSetDropAnodeRedoLogs(STrans *pTrans, SAnodeObj *pObj) { int32_t code = 0; SSdbRaw *pRedoRaw = mndAnodeActionEncode(pObj); if (pRedoRaw == NULL) { - code = TSDB_CODE_MND_RETURN_VALUE_NULL; - if (terrno != 0) code = terrno; - TAOS_RETURN(code); + code = terrno; + return code; } + TAOS_CHECK_RETURN(mndTransAppendRedolog(pTrans, pRedoRaw)); TAOS_CHECK_RETURN(sdbSetRawStatus(pRedoRaw, SDB_STATUS_DROPPING)); - TAOS_RETURN(code); + + return code; } static int32_t mndSetDropAnodeCommitLogs(STrans *pTrans, SAnodeObj *pObj) { int32_t code = 0; SSdbRaw *pCommitRaw = mndAnodeActionEncode(pObj); if (pCommitRaw == NULL) { - 
code = TSDB_CODE_MND_RETURN_VALUE_NULL; - if (terrno != 0) code = terrno; - TAOS_RETURN(code); + code = terrno; + return code; } + TAOS_CHECK_RETURN(mndTransAppendCommitlog(pTrans, pCommitRaw)); TAOS_CHECK_RETURN(sdbSetRawStatus(pCommitRaw, SDB_STATUS_DROPPED)); TAOS_RETURN(code); @@ -521,25 +522,25 @@ static int32_t mndSetDropAnodeInfoToTrans(SMnode *pMnode, STrans *pTrans, SAnode } static int32_t mndDropAnode(SMnode *pMnode, SRpcMsg *pReq, SAnodeObj *pObj) { - int32_t code = -1; + int32_t code = 0; + int32_t lino = 0; STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_NOTHING, pReq, "drop-anode"); - if (pTrans == NULL) { - code = TSDB_CODE_MND_RETURN_VALUE_NULL; - if (terrno != 0) code = terrno; - goto _OVER; - } + TSDB_CHECK_NULL(pTrans, code, lino, _OVER, terrno); + mndTransSetSerial(pTrans); + mInfo("trans:%d, to drop anode:%d", pTrans->id, pObj->id); - mInfo("trans:%d, used to drop anode:%d", pTrans->id, pObj->id); - TAOS_CHECK_GOTO(mndSetDropAnodeInfoToTrans(pMnode, pTrans, pObj, false), NULL, _OVER); - TAOS_CHECK_GOTO(mndTransPrepare(pMnode, pTrans), NULL, _OVER); + code = mndSetDropAnodeInfoToTrans(pMnode, pTrans, pObj, false); + mndReleaseAnode(pMnode, pObj); - code = 0; + TSDB_CHECK_CODE(code, lino, _OVER); + + code = mndTransPrepare(pMnode, pTrans); _OVER: mndTransDrop(pTrans); - TAOS_RETURN(code); + return code; } static int32_t mndProcessDropAnodeReq(SRpcMsg *pReq) { @@ -560,20 +561,20 @@ static int32_t mndProcessDropAnodeReq(SRpcMsg *pReq) { pObj = mndAcquireAnode(pMnode, dropReq.anodeId); if (pObj == NULL) { - code = TSDB_CODE_MND_RETURN_VALUE_NULL; - if (terrno != 0) code = terrno; + code = terrno; goto _OVER; } code = mndDropAnode(pMnode, pReq, pObj); - if (code == 0) code = TSDB_CODE_ACTION_IN_PROGRESS; + if (code == 0) { + code = TSDB_CODE_ACTION_IN_PROGRESS; + } _OVER: if (code != 0 && code != TSDB_CODE_ACTION_IN_PROGRESS) { mError("anode:%d, failed to drop since %s", dropReq.anodeId, tstrerror(code)); } - 
mndReleaseAnode(pMnode, pObj); tFreeSMDropAnodeReq(&dropReq); TAOS_RETURN(code); } @@ -584,7 +585,7 @@ static int32_t mndRetrieveAnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB int32_t numOfRows = 0; int32_t cols = 0; SAnodeObj *pObj = NULL; - char buf[TSDB_ANAL_ANODE_URL_LEN + VARSTR_HEADER_SIZE]; + char buf[TSDB_ANALYTIC_ANODE_URL_LEN + VARSTR_HEADER_SIZE]; char status[64]; int32_t code = 0; @@ -642,7 +643,7 @@ static int32_t mndRetrieveAnodesFull(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock int32_t numOfRows = 0; int32_t cols = 0; SAnodeObj *pObj = NULL; - char buf[TSDB_ANAL_ALGO_NAME_LEN + VARSTR_HEADER_SIZE]; + char buf[TSDB_ANALYTIC_ALGO_NAME_LEN + VARSTR_HEADER_SIZE]; int32_t code = 0; while (numOfRows < rows) { @@ -693,7 +694,7 @@ static int32_t mndDecodeAlgoList(SJson *pJson, SAnodeObj *pObj) { int32_t code = 0; int32_t protocol = 0; double tmp = 0; - char buf[TSDB_ANAL_ALGO_NAME_LEN + 1] = {0}; + char buf[TSDB_ANALYTIC_ALGO_NAME_LEN + 1] = {0}; code = tjsonGetDoubleValue(pJson, "protocol", &tmp); if (code < 0) return TSDB_CODE_INVALID_JSON_FORMAT; @@ -753,10 +754,10 @@ static int32_t mndDecodeAlgoList(SJson *pJson, SAnodeObj *pObj) { } static int32_t mndGetAnodeAlgoList(const char *url, SAnodeObj *pObj) { - char anodeUrl[TSDB_ANAL_ANODE_URL_LEN + 1] = {0}; - snprintf(anodeUrl, TSDB_ANAL_ANODE_URL_LEN, "%s/%s", url, "list"); + char anodeUrl[TSDB_ANALYTIC_ANODE_URL_LEN + 1] = {0}; + snprintf(anodeUrl, TSDB_ANALYTIC_ANODE_URL_LEN, "%s/%s", url, "list"); - SJson *pJson = taosAnalSendReqRetJson(anodeUrl, ANAL_HTTP_TYPE_GET, NULL); + SJson *pJson = taosAnalSendReqRetJson(anodeUrl, ANALYTICS_HTTP_TYPE_GET, NULL); if (pJson == NULL) return terrno; int32_t code = mndDecodeAlgoList(pJson, pObj); @@ -769,10 +770,10 @@ static int32_t mndGetAnodeStatus(SAnodeObj *pObj, char *status, int32_t statusLe int32_t code = 0; int32_t protocol = 0; double tmp = 0; - char anodeUrl[TSDB_ANAL_ANODE_URL_LEN + 1] = {0}; - snprintf(anodeUrl, TSDB_ANAL_ANODE_URL_LEN, "%s/%s", 
pObj->url, "status"); + char anodeUrl[TSDB_ANALYTIC_ANODE_URL_LEN + 1] = {0}; + snprintf(anodeUrl, TSDB_ANALYTIC_ANODE_URL_LEN, "%s/%s", pObj->url, "status"); - SJson *pJson = taosAnalSendReqRetJson(anodeUrl, ANAL_HTTP_TYPE_GET, NULL); + SJson *pJson = taosAnalSendReqRetJson(anodeUrl, ANALYTICS_HTTP_TYPE_GET, NULL); if (pJson == NULL) return terrno; code = tjsonGetDoubleValue(pJson, "protocol", &tmp); @@ -808,7 +809,7 @@ static int32_t mndProcessAnalAlgoReq(SRpcMsg *pReq) { SAnodeObj *pObj = NULL; SAnalyticsUrl url; int32_t nameLen; - char name[TSDB_ANAL_ALGO_KEY_LEN]; + char name[TSDB_ANALYTIC_ALGO_KEY_LEN]; SRetrieveAnalAlgoReq req = {0}; SRetrieveAnalAlgoRsp rsp = {0}; @@ -847,13 +848,13 @@ static int32_t mndProcessAnalAlgoReq(SRpcMsg *pReq) { goto _OVER; } } - url.url = taosMemoryMalloc(TSDB_ANAL_ANODE_URL_LEN + TSDB_ANAL_ALGO_TYPE_LEN + 1); + url.url = taosMemoryMalloc(TSDB_ANALYTIC_ANODE_URL_LEN + TSDB_ANALYTIC_ALGO_TYPE_LEN + 1); if (url.url == NULL) { sdbRelease(pSdb, pAnode); goto _OVER; } - url.urlLen = 1 + tsnprintf(url.url, TSDB_ANAL_ANODE_URL_LEN + TSDB_ANAL_ALGO_TYPE_LEN, "%s/%s", pAnode->url, + url.urlLen = 1 + tsnprintf(url.url, TSDB_ANALYTIC_ANODE_URL_LEN + TSDB_ANALYTIC_ALGO_TYPE_LEN, "%s/%s", pAnode->url, taosAnalAlgoUrlStr(url.type)); if (taosHashPut(rsp.hash, name, nameLen, &url, sizeof(SAnalyticsUrl)) != 0) { taosMemoryFree(url.url); diff --git a/source/libs/catalog/inc/catalogInt.h b/source/libs/catalog/inc/catalogInt.h index bfc5ca9142..b581e31919 100644 --- a/source/libs/catalog/inc/catalogInt.h +++ b/source/libs/catalog/inc/catalogInt.h @@ -832,12 +832,12 @@ typedef struct SCtgCacheItemInfo { #define ctgDebug(param, ...) qDebug("CTG:%p " param, pCtg, __VA_ARGS__) #define ctgTrace(param, ...) qTrace("CTG:%p " param, pCtg, __VA_ARGS__) -#define ctgTaskFatal(param, ...) qFatal("qid:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) -#define ctgTaskError(param, ...) 
qError("qid:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) -#define ctgTaskWarn(param, ...) qWarn("qid:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) -#define ctgTaskInfo(param, ...) qInfo("qid:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) -#define ctgTaskDebug(param, ...) qDebug("qid:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) -#define ctgTaskTrace(param, ...) qTrace("qid:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) +#define ctgTaskFatal(param, ...) qFatal("QID:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) +#define ctgTaskError(param, ...) qError("QID:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) +#define ctgTaskWarn(param, ...) qWarn("QID:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) +#define ctgTaskInfo(param, ...) qInfo("QID:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) +#define ctgTaskDebug(param, ...) qDebug("QID:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) +#define ctgTaskTrace(param, ...) qTrace("QID:%" PRIx64 " CTG:%p " param, pTask->pJob->queryId, pCtg, __VA_ARGS__) #define CTG_LOCK_DEBUG(...) 
\ do { \ diff --git a/source/libs/executor/src/anomalywindowoperator.c b/source/libs/executor/src/anomalywindowoperator.c index 94cc5d9129..3bc9c806b0 100644 --- a/source/libs/executor/src/anomalywindowoperator.c +++ b/source/libs/executor/src/anomalywindowoperator.c @@ -44,9 +44,9 @@ typedef struct { SExprSupp scalarSup; int32_t tsSlotId; STimeWindowAggSupp twAggSup; - char algoName[TSDB_ANAL_ALGO_NAME_LEN]; - char algoUrl[TSDB_ANAL_ALGO_URL_LEN]; - char anomalyOpt[TSDB_ANAL_ALGO_OPTION_LEN]; + char algoName[TSDB_ANALYTIC_ALGO_NAME_LEN]; + char algoUrl[TSDB_ANALYTIC_ALGO_URL_LEN]; + char anomalyOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN]; SAnomalyWindowSupp anomalySup; SWindowRowsSup anomalyWinRowSup; SColumn anomalyCol; @@ -75,13 +75,13 @@ int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* p if (!taosAnalGetOptStr(pAnomalyNode->anomalyOpt, "algo", pInfo->algoName, sizeof(pInfo->algoName))) { qError("failed to get anomaly_window algorithm name from %s", pAnomalyNode->anomalyOpt); - code = TSDB_CODE_ANAL_ALGO_NOT_FOUND; + code = TSDB_CODE_ANA_ALGO_NOT_FOUND; goto _error; } if (taosAnalGetAlgoUrl(pInfo->algoName, ANAL_ALGO_TYPE_ANOMALY_DETECT, pInfo->algoUrl, sizeof(pInfo->algoUrl)) != 0) { qError("failed to get anomaly_window algorithm url from %s", pInfo->algoName); - code = TSDB_CODE_ANAL_ALGO_NOT_LOAD; + code = TSDB_CODE_ANA_ALGO_NOT_LOAD; goto _error; } @@ -262,7 +262,7 @@ static void anomalyDestroyOperatorInfo(void* param) { static int32_t anomalyCacheBlock(SAnomalyWindowOperatorInfo* pInfo, SSDataBlock* pSrc) { if (pInfo->anomalySup.cachedRows > ANAL_ANOMALY_WINDOW_MAX_ROWS) { - return TSDB_CODE_ANAL_ANODE_TOO_MANY_ROWS; + return TSDB_CODE_ANA_ANODE_TOO_MANY_ROWS; } SSDataBlock* pDst = NULL; @@ -287,7 +287,7 @@ static int32_t anomalyFindWindow(SAnomalyWindowSupp* pSupp, TSKEY key) { return -1; } -static int32_t anomalyParseJson(SJson* pJson, SArray* pWindows) { +static int32_t anomalyParseJson(SJson* pJson, SArray* pWindows, const char* 
pId) { int32_t code = 0; int32_t rows = 0; STimeWindow win = {0}; @@ -295,8 +295,23 @@ static int32_t anomalyParseJson(SJson* pJson, SArray* pWindows) { taosArrayClear(pWindows); tjsonGetInt32ValueFromDouble(pJson, "rows", rows, code); - if (code < 0) return TSDB_CODE_INVALID_JSON_FORMAT; - if (rows <= 0) return 0; + if (code < 0) { + return TSDB_CODE_INVALID_JSON_FORMAT; + } + + if (rows < 0) { + char pMsg[1024] = {0}; + code = tjsonGetStringValue(pJson, "msg", pMsg); + if (code) { + qError("%s failed to get error msg from rsp, unknown error", pId); + } else { + qError("%s failed to exec anomaly_window, msg:%s", pId, pMsg); + } + + return TSDB_CODE_ANA_WN_DATA; + } else if (rows == 0) { + return TSDB_CODE_SUCCESS; + } SJson* res = tjsonGetObjectItem(pJson, "res"); if (res == NULL) return TSDB_CODE_INVALID_JSON_FORMAT; @@ -313,7 +328,10 @@ static int32_t anomalyParseJson(SJson* pJson, SArray* pWindows) { SJson* start = tjsonGetArrayItem(row, 0); SJson* end = tjsonGetArrayItem(row, 1); - if (start == NULL || end == NULL) return TSDB_CODE_INVALID_JSON_FORMAT; + if (start == NULL || end == NULL) { + qError("%s invalid res from analytic sys, code:%s", pId, tstrerror(TSDB_CODE_INVALID_JSON_FORMAT)); + return TSDB_CODE_INVALID_JSON_FORMAT; + } tjsonGetObjectValueBigInt(start, &win.skey); tjsonGetObjectValueBigInt(end, &win.ekey); @@ -322,52 +340,57 @@ static int32_t anomalyParseJson(SJson* pJson, SArray* pWindows) { win.ekey = win.skey + 1; } - if (taosArrayPush(pWindows, &win) == NULL) return TSDB_CODE_OUT_OF_BUFFER; + if (taosArrayPush(pWindows, &win) == NULL) { + qError("%s out of memory in generating anomaly_window", pId); + return TSDB_CODE_OUT_OF_BUFFER; + } } int32_t numOfWins = taosArrayGetSize(pWindows); - qDebug("anomaly window recevied, total:%d", numOfWins); + qDebug("%s anomaly window received, total:%d", pId, numOfWins); for (int32_t i = 0; i < numOfWins; ++i) { STimeWindow* pWindow = taosArrayGet(pWindows, i); - qDebug("anomaly win:%d [%" PRId64 ", %" PRId64
")", i, pWindow->skey, pWindow->ekey); + qDebug("%s anomaly win:%d [%" PRId64 ", %" PRId64 ")", pId, i, pWindow->skey, pWindow->ekey); } - return 0; + return code; } static int32_t anomalyAnalysisWindow(SOperatorInfo* pOperator) { SAnomalyWindowOperatorInfo* pInfo = pOperator->info; SAnomalyWindowSupp* pSupp = &pInfo->anomalySup; SJson* pJson = NULL; - SAnalBuf analBuf = {.bufType = ANAL_BUF_TYPE_JSON}; + SAnalyticBuf analBuf = {.bufType = ANALYTICS_BUF_TYPE_JSON}; char dataBuf[64] = {0}; int32_t code = 0; int64_t ts = 0; + int32_t lino = 0; + const char* pId = GET_TASKID(pOperator->pTaskInfo); - // int64_t ts = taosGetTimestampMs(); snprintf(analBuf.fileName, sizeof(analBuf.fileName), "%s/tdengine-anomaly-%" PRId64 "-%" PRId64, tsTempDir, ts, pSupp->groupId); code = tsosAnalBufOpen(&analBuf, 2); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); const char* prec = TSDB_TIME_PRECISION_MILLI_STR; if (pInfo->anomalyCol.precision == TSDB_TIME_PRECISION_MICRO) prec = TSDB_TIME_PRECISION_MICRO_STR; if (pInfo->anomalyCol.precision == TSDB_TIME_PRECISION_NANO) prec = TSDB_TIME_PRECISION_NANO_STR; code = taosAnalBufWriteColMeta(&analBuf, 0, TSDB_DATA_TYPE_TIMESTAMP, "ts"); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); code = taosAnalBufWriteColMeta(&analBuf, 1, pInfo->anomalyCol.type, "val"); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); code = taosAnalBufWriteDataBegin(&analBuf); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); int32_t numOfBlocks = (int32_t)taosArrayGetSize(pSupp->blocks); // timestamp code = taosAnalBufWriteColBegin(&analBuf, 0); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); + for (int32_t i = 0; i < numOfBlocks; ++i) { SSDataBlock* pBlock = taosArrayGetP(pSupp->blocks, i); if (pBlock == NULL) break; @@ -375,15 +398,17 @@ static int32_t anomalyAnalysisWindow(SOperatorInfo* pOperator) { if (pTsCol == NULL) break; for (int32_t j = 0; j < 
pBlock->info.rows; ++j) { code = taosAnalBufWriteColData(&analBuf, 0, TSDB_DATA_TYPE_TIMESTAMP, &((TSKEY*)pTsCol->pData)[j]); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); } } + code = taosAnalBufWriteColEnd(&analBuf, 0); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); // data code = taosAnalBufWriteColBegin(&analBuf, 1); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); + for (int32_t i = 0; i < numOfBlocks; ++i) { SSDataBlock* pBlock = taosArrayGetP(pSupp->blocks, i); if (pBlock == NULL) break; @@ -392,48 +417,47 @@ static int32_t anomalyAnalysisWindow(SOperatorInfo* pOperator) { for (int32_t j = 0; j < pBlock->info.rows; ++j) { code = taosAnalBufWriteColData(&analBuf, 1, pValCol->info.type, colDataGetData(pValCol, j)); - if (code != 0) goto _OVER; - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); } } code = taosAnalBufWriteColEnd(&analBuf, 1); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); code = taosAnalBufWriteDataEnd(&analBuf); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); code = taosAnalBufWriteOptStr(&analBuf, "option", pInfo->anomalyOpt); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); code = taosAnalBufWriteOptStr(&analBuf, "algo", pInfo->algoName); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); code = taosAnalBufWriteOptStr(&analBuf, "prec", prec); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); int64_t wncheck = ANAL_FORECAST_DEFAULT_WNCHECK; bool hasWncheck = taosAnalGetOptInt(pInfo->anomalyOpt, "wncheck", &wncheck); if (!hasWncheck) { qDebug("anomaly_window wncheck not found from %s, use default:%" PRId64, pInfo->anomalyOpt, wncheck); } + code = taosAnalBufWriteOptInt(&analBuf, "wncheck", wncheck); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); code = taosAnalBufClose(&analBuf); - if (code != 0) goto _OVER; + QUERY_CHECK_CODE(code, lino, _OVER); - 
pJson = taosAnalSendReqRetJson(pInfo->algoUrl, ANAL_HTTP_TYPE_POST, &analBuf); + pJson = taosAnalSendReqRetJson(pInfo->algoUrl, ANALYTICS_HTTP_TYPE_POST, &analBuf); if (pJson == NULL) { code = terrno; goto _OVER; } - code = anomalyParseJson(pJson, pSupp->windows); - if (code != 0) goto _OVER; + code = anomalyParseJson(pJson, pSupp->windows, pId); _OVER: if (code != 0) { - qError("failed to analysis window since %s", tstrerror(code)); + qError("%s failed to analyze window since %s, lino:%d", pId, tstrerror(code), lino); } taosAnalBufDestroy(&analBuf); diff --git a/source/libs/executor/src/forecastoperator.c b/source/libs/executor/src/forecastoperator.c index 20dc9e28ba..bf1efc54ca 100644 --- a/source/libs/executor/src/forecastoperator.c +++ b/source/libs/executor/src/forecastoperator.c @@ -29,9 +29,9 @@ #ifdef USE_ANALYTICS typedef struct { - char algoName[TSDB_ANAL_ALGO_NAME_LEN]; - char algoUrl[TSDB_ANAL_ALGO_URL_LEN]; - char algoOpt[TSDB_ANAL_ALGO_OPTION_LEN]; + char algoName[TSDB_ANALYTIC_ALGO_NAME_LEN]; + char algoUrl[TSDB_ANALYTIC_ALGO_URL_LEN]; + char algoOpt[TSDB_ANALYTIC_ALGO_OPTION_LEN]; int64_t maxTs; int64_t minTs; int64_t numOfRows; @@ -47,7 +47,7 @@ typedef struct { int16_t inputValSlot; int8_t inputValType; int8_t inputPrecision; - SAnalBuf analBuf; + SAnalyticBuf analBuf; } SForecastSupp; typedef struct SForecastOperatorInfo { @@ -74,12 +74,12 @@ static FORCE_INLINE int32_t forecastEnsureBlockCapacity(SSDataBlock* pBlock, int static int32_t forecastCacheBlock(SForecastSupp* pSupp, SSDataBlock* pBlock) { if (pSupp->cachedRows > ANAL_FORECAST_MAX_ROWS) { - return TSDB_CODE_ANAL_ANODE_TOO_MANY_ROWS; + return TSDB_CODE_ANA_ANODE_TOO_MANY_ROWS; } int32_t code = TSDB_CODE_SUCCESS; int32_t lino = 0; - SAnalBuf* pBuf = &pSupp->analBuf; + SAnalyticBuf* pBuf = &pSupp->analBuf; qDebug("block:%d, %p rows:%" PRId64, pSupp->numOfBlocks, pBlock, pBlock->info.rows); pSupp->numOfBlocks++; @@ -108,7 +108,7 @@ static int32_t forecastCacheBlock(SForecastSupp* pSupp,
SSDataBlock* pBlock) { } static int32_t forecastCloseBuf(SForecastSupp* pSupp) { - SAnalBuf* pBuf = &pSupp->analBuf; + SAnalyticBuf* pBuf = &pSupp->analBuf; int32_t code = 0; for (int32_t i = 0; i < 2; ++i) { @@ -180,8 +180,8 @@ static int32_t forecastCloseBuf(SForecastSupp* pSupp) { return code; } -static int32_t forecastAnalysis(SForecastSupp* pSupp, SSDataBlock* pBlock) { - SAnalBuf* pBuf = &pSupp->analBuf; +static int32_t forecastAnalysis(SForecastSupp* pSupp, SSDataBlock* pBlock, const char* pId) { + SAnalyticBuf* pBuf = &pSupp->analBuf; int32_t resCurRow = pBlock->info.rows; int8_t tmpI8; int16_t tmpI16; @@ -192,28 +192,45 @@ static int32_t forecastAnalysis(SForecastSupp* pSupp, SSDataBlock* pBlock) { int32_t code = 0; SColumnInfoData* pResValCol = taosArrayGet(pBlock->pDataBlock, pSupp->resValSlot); - if (NULL == pResValCol) return TSDB_CODE_OUT_OF_RANGE; + if (NULL == pResValCol) { + return terrno; + } SColumnInfoData* pResTsCol = (pSupp->resTsSlot != -1 ? taosArrayGet(pBlock->pDataBlock, pSupp->resTsSlot) : NULL); SColumnInfoData* pResLowCol = (pSupp->resLowSlot != -1 ? taosArrayGet(pBlock->pDataBlock, pSupp->resLowSlot) : NULL); SColumnInfoData* pResHighCol = (pSupp->resHighSlot != -1 ? 
taosArrayGet(pBlock->pDataBlock, pSupp->resHighSlot) : NULL); - SJson* pJson = taosAnalSendReqRetJson(pSupp->algoUrl, ANAL_HTTP_TYPE_POST, pBuf); - if (pJson == NULL) return terrno; + SJson* pJson = taosAnalSendReqRetJson(pSupp->algoUrl, ANALYTICS_HTTP_TYPE_POST, pBuf); + if (pJson == NULL) { + return terrno; + } int32_t rows = 0; tjsonGetInt32ValueFromDouble(pJson, "rows", rows, code); - if (code < 0) goto _OVER; - if (rows <= 0) goto _OVER; + if (rows < 0 && code == 0) { + char pMsg[1024] = {0}; + code = tjsonGetStringValue(pJson, "msg", pMsg); + if (code != 0) { + qError("%s failed to get msg from rsp, unknown error", pId); + } else { + qError("%s failed to exec forecast, msg:%s", pId, pMsg); + } + + tjsonDelete(pJson); + return TSDB_CODE_ANA_WN_DATA; + } + + if (code < 0) { + goto _OVER; + } SJson* res = tjsonGetObjectItem(pJson, "res"); if (res == NULL) goto _OVER; int32_t ressize = tjsonGetArraySize(res); bool returnConf = (pSupp->resHighSlot != -1 || pSupp->resLowSlot != -1); - if (returnConf) { - if (ressize != 4) goto _OVER; - } else if (ressize != 2) { + + if ((returnConf && (ressize != 4)) || ((!returnConf) && (ressize != 2))) { goto _OVER; } @@ -313,41 +330,25 @@ static int32_t forecastAnalysis(SForecastSupp* pSupp, SSDataBlock* pBlock) { resCurRow++; } - // for (int32_t i = rows; i < pSupp->optRows; ++i) { - // colDataSetNNULL(pResValCol, rows, (pSupp->optRows - rows)); - // if (pResTsCol != NULL) { - // colDataSetNNULL(pResTsCol, rows, (pSupp->optRows - rows)); - // } - // if (pResLowCol != NULL) { - // colDataSetNNULL(pResLowCol, rows, (pSupp->optRows - rows)); - // } - // if (pResHighCol != NULL) { - // colDataSetNNULL(pResHighCol, rows, (pSupp->optRows - rows)); - // } - // } - - // if (rows == pSupp->optRows) { - // pResValCol->hasNull = false; - // } - pBlock->info.rows += rows; if (pJson != NULL) tjsonDelete(pJson); return 0; _OVER: - if (pJson != NULL) tjsonDelete(pJson); + tjsonDelete(pJson); if (code == 0) { code = 
TSDB_CODE_INVALID_JSON_FORMAT; } - qError("failed to perform forecast finalize since %s", tstrerror(code)); - return TSDB_CODE_INVALID_JSON_FORMAT; + + qError("%s failed to perform forecast finalize since %s", pId, tstrerror(code)); + return code; } -static int32_t forecastAggregateBlocks(SForecastSupp* pSupp, SSDataBlock* pResBlock) { +static int32_t forecastAggregateBlocks(SForecastSupp* pSupp, SSDataBlock* pResBlock, const char* pId) { int32_t code = TSDB_CODE_SUCCESS; int32_t lino = 0; - SAnalBuf* pBuf = &pSupp->analBuf; + SAnalyticBuf* pBuf = &pSupp->analBuf; code = forecastCloseBuf(pSupp); QUERY_CHECK_CODE(code, lino, _end); @@ -355,10 +356,10 @@ static int32_t forecastAggregateBlocks(SForecastSupp* pSupp, SSDataBlock* pResBl code = forecastEnsureBlockCapacity(pResBlock, 1); QUERY_CHECK_CODE(code, lino, _end); - code = forecastAnalysis(pSupp, pResBlock); + code = forecastAnalysis(pSupp, pResBlock, pId); QUERY_CHECK_CODE(code, lino, _end); - uInfo("block:%d, forecast finalize", pSupp->numOfBlocks); + uInfo("%s block:%d, forecast finalize", pId, pSupp->numOfBlocks); _end: pSupp->numOfBlocks = 0; @@ -373,9 +374,10 @@ static int32_t forecastNext(SOperatorInfo* pOperator, SSDataBlock** ppRes) { SForecastOperatorInfo* pInfo = pOperator->info; SSDataBlock* pResBlock = pInfo->pRes; SForecastSupp* pSupp = &pInfo->forecastSupp; - SAnalBuf* pBuf = &pSupp->analBuf; + SAnalyticBuf* pBuf = &pSupp->analBuf; int64_t st = taosGetTimestampUs(); int32_t numOfBlocks = pSupp->numOfBlocks; + const char* pId = GET_TASKID(pOperator->pTaskInfo); blockDataCleanup(pResBlock); @@ -389,45 +391,46 @@ static int32_t forecastNext(SOperatorInfo* pOperator, SSDataBlock** ppRes) { pSupp->groupId = pBlock->info.id.groupId; numOfBlocks++; pSupp->cachedRows += pBlock->info.rows; - qDebug("group:%" PRId64 ", blocks:%d, rows:%" PRId64 ", total rows:%" PRId64, pSupp->groupId, numOfBlocks, + qDebug("%s group:%" PRId64 ", blocks:%d, rows:%" PRId64 ", total rows:%" PRId64, pId, pSupp->groupId, 
numOfBlocks, pBlock->info.rows, pSupp->cachedRows); code = forecastCacheBlock(pSupp, pBlock); QUERY_CHECK_CODE(code, lino, _end); } else { - qDebug("group:%" PRId64 ", read finish for new group coming, blocks:%d", pSupp->groupId, numOfBlocks); - code = forecastAggregateBlocks(pSupp, pResBlock); + qDebug("%s group:%" PRId64 ", read finish for new group coming, blocks:%d", pId, pSupp->groupId, numOfBlocks); + code = forecastAggregateBlocks(pSupp, pResBlock, pId); QUERY_CHECK_CODE(code, lino, _end); pSupp->groupId = pBlock->info.id.groupId; numOfBlocks = 1; pSupp->cachedRows = pBlock->info.rows; - qDebug("group:%" PRId64 ", new group, rows:%" PRId64 ", total rows:%" PRId64, pSupp->groupId, pBlock->info.rows, - pSupp->cachedRows); + qDebug("%s group:%" PRId64 ", new group, rows:%" PRId64 ", total rows:%" PRId64, pId, pSupp->groupId, + pBlock->info.rows, pSupp->cachedRows); code = forecastCacheBlock(pSupp, pBlock); QUERY_CHECK_CODE(code, lino, _end); } if (pResBlock->info.rows > 0) { (*ppRes) = pResBlock; - qDebug("group:%" PRId64 ", return to upstream, blocks:%d", pResBlock->info.id.groupId, numOfBlocks); + qDebug("%s group:%" PRId64 ", return to upstream, blocks:%d", pId, pResBlock->info.id.groupId, numOfBlocks); return code; } } if (numOfBlocks > 0) { - qDebug("group:%" PRId64 ", read finish, blocks:%d", pSupp->groupId, numOfBlocks); - code = forecastAggregateBlocks(pSupp, pResBlock); + qDebug("%s group:%" PRId64 ", read finish, blocks:%d", pId, pSupp->groupId, numOfBlocks); + code = forecastAggregateBlocks(pSupp, pResBlock, pId); QUERY_CHECK_CODE(code, lino, _end); } int64_t cost = taosGetTimestampUs() - st; - qDebug("all groups finished, cost:%" PRId64 "us", cost); + qDebug("%s all groups finished, cost:%" PRId64 "us", pId, cost); _end: if (code != TSDB_CODE_SUCCESS) { - qError("%s failed at line %d since %s", __func__, lino, tstrerror(code)); + qError("%s %s failed at line %d since %s", pId, __func__, lino, tstrerror(code)); pTaskInfo->code = code; 
T_LONG_JMP(pTaskInfo->env, code); } + (*ppRes) = (pResBlock->info.rows == 0) ? NULL : pResBlock; return code; } @@ -498,7 +501,7 @@ static int32_t forecastParseInput(SForecastSupp* pSupp, SNodeList* pFuncs) { pSupp->inputPrecision = pTsNode->node.resType.precision; pSupp->inputValSlot = pValNode->slotId; pSupp->inputValType = pValNode->node.resType.type; - tstrncpy(pSupp->algoOpt, "algo=arima", TSDB_ANAL_ALGO_OPTION_LEN); + tstrncpy(pSupp->algoOpt, "algo=arima", TSDB_ANALYTIC_ALGO_OPTION_LEN); } else { return TSDB_CODE_PLAN_INTERNAL_ERROR; } @@ -516,22 +519,22 @@ static int32_t forecastParseAlgo(SForecastSupp* pSupp) { if (!taosAnalGetOptStr(pSupp->algoOpt, "algo", pSupp->algoName, sizeof(pSupp->algoName))) { qError("failed to get forecast algorithm name from %s", pSupp->algoOpt); - return TSDB_CODE_ANAL_ALGO_NOT_FOUND; + return TSDB_CODE_ANA_ALGO_NOT_FOUND; } if (taosAnalGetAlgoUrl(pSupp->algoName, ANAL_ALGO_TYPE_FORECAST, pSupp->algoUrl, sizeof(pSupp->algoUrl)) != 0) { qError("failed to get forecast algorithm url from %s", pSupp->algoName); - return TSDB_CODE_ANAL_ALGO_NOT_LOAD; + return TSDB_CODE_ANA_ALGO_NOT_LOAD; } return 0; } static int32_t forecastCreateBuf(SForecastSupp* pSupp) { - SAnalBuf* pBuf = &pSupp->analBuf; + SAnalyticBuf* pBuf = &pSupp->analBuf; int64_t ts = 0; // taosGetTimestampMs(); - pBuf->bufType = ANAL_BUF_TYPE_JSON_COL; + pBuf->bufType = ANALYTICS_BUF_TYPE_JSON_COL; snprintf(pBuf->fileName, sizeof(pBuf->fileName), "%s/tdengine-forecast-%" PRId64, tsTempDir, ts); int32_t code = tsosAnalBufOpen(pBuf, 2); if (code != 0) goto _OVER; diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index 19a579698b..b128fe41ed 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -3407,6 +3407,8 @@ int32_t streamScanOperatorEncode(SStreamScanInfo* pInfo, void** pBuff, int32_t* QUERY_CHECK_CODE(code, lino, _end); } + qDebug("%s last scan range %d. 
%" PRId64 ",%" PRId64, __func__, __LINE__, pInfo->lastScanRange.skey, pInfo->lastScanRange.ekey); + *pLen = len; _end: @@ -3472,21 +3474,20 @@ void streamScanOperatorDecode(void* pBuff, int32_t len, SStreamScanInfo* pInfo) goto _end; } - if (pInfo->pUpdateInfo != NULL) { - void* pUpInfo = taosMemoryCalloc(1, sizeof(SUpdateInfo)); - if (!pUpInfo) { - lino = __LINE__; - goto _end; - } - code = pInfo->stateStore.updateInfoDeserialize(pDeCoder, pUpInfo); - if (code == TSDB_CODE_SUCCESS) { - pInfo->stateStore.updateInfoDestroy(pInfo->pUpdateInfo); - pInfo->pUpdateInfo = pUpInfo; - } else { - taosMemoryFree(pUpInfo); - lino = __LINE__; - goto _end; - } + void* pUpInfo = taosMemoryCalloc(1, sizeof(SUpdateInfo)); + if (!pUpInfo) { + lino = __LINE__; + goto _end; + } + code = pInfo->stateStore.updateInfoDeserialize(pDeCoder, pUpInfo); + if (code == TSDB_CODE_SUCCESS) { + pInfo->stateStore.updateInfoDestroy(pInfo->pUpdateInfo); + pInfo->pUpdateInfo = pUpInfo; + qDebug("%s line:%d. stream scan updateinfo deserialize success", __func__, __LINE__); + } else { + taosMemoryFree(pUpInfo); + code = TSDB_CODE_SUCCESS; + qDebug("%s line:%d. stream scan did not have updateinfo", __func__, __LINE__); } if (tDecodeIsEnd(pDeCoder)) { @@ -3506,6 +3507,7 @@ void streamScanOperatorDecode(void* pBuff, int32_t len, SStreamScanInfo* pInfo) lino = __LINE__; goto _end; } + qDebug("%s last scan range %d. 
%" PRId64 ",%" PRId64, __func__, __LINE__, pInfo->lastScanRange.skey, pInfo->lastScanRange.ekey); _end: if (pDeCoder != NULL) { diff --git a/source/libs/parser/src/parAstCreater.c b/source/libs/parser/src/parAstCreater.c index 245346273f..1a5e3444c0 100644 --- a/source/libs/parser/src/parAstCreater.c +++ b/source/libs/parser/src/parAstCreater.c @@ -1377,7 +1377,7 @@ SNode* createAnomalyWindowNode(SAstCreateContext* pCxt, SNode* pExpr, const STok CHECK_MAKE_NODE(pAnomaly->pCol); pAnomaly->pExpr = pExpr; if (pFuncOpt == NULL) { - tstrncpy(pAnomaly->anomalyOpt, "algo=iqr", TSDB_ANAL_ALGO_OPTION_LEN); + tstrncpy(pAnomaly->anomalyOpt, "algo=iqr", TSDB_ANALYTIC_ALGO_OPTION_LEN); } else { (void)trimString(pFuncOpt->z, pFuncOpt->n, pAnomaly->anomalyOpt, sizeof(pAnomaly->anomalyOpt)); } diff --git a/source/libs/parser/src/parCalcConst.c b/source/libs/parser/src/parCalcConst.c index 9702c2e760..a2e98bece7 100644 --- a/source/libs/parser/src/parCalcConst.c +++ b/source/libs/parser/src/parCalcConst.c @@ -329,16 +329,22 @@ static int32_t calcConstGroupBy(SCalcConstContext* pCxt, SSelectStmt* pSelect) { if (TSDB_CODE_SUCCESS == code) { SNode* pNode = NULL; FOREACH(pNode, pSelect->pGroupByList) { + bool hasNotValue = false; SNode* pGroupPara = NULL; FOREACH(pGroupPara, ((SGroupingSetNode*)pNode)->pParameterList) { if (QUERY_NODE_VALUE != nodeType(pGroupPara)) { - return code; + hasNotValue = true; + break; + } + } + if (!hasNotValue) { + if (pSelect->hasAggFuncs) { + ERASE_NODE(pSelect->pGroupByList); + } else { + if (!cell->pPrev && !cell->pNext) continue; + ERASE_NODE(pSelect->pGroupByList); } } - } - FOREACH(pNode, pSelect->pGroupByList) { - if (!cell->pPrev) continue; - ERASE_NODE(pSelect->pGroupByList); } } return code; diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index 9bea3491c3..fcb6361a6b 100755 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -9652,7 +9652,7 @@ static int32_t 
translateDropUser(STranslateContext* pCxt, SDropUserStmt* pStmt) static int32_t translateCreateAnode(STranslateContext* pCxt, SCreateAnodeStmt* pStmt) { SMCreateAnodeReq createReq = {0}; createReq.urlLen = strlen(pStmt->url) + 1; - if (createReq.urlLen > TSDB_ANAL_ANODE_URL_LEN) { + if (createReq.urlLen > TSDB_ANALYTIC_ANODE_URL_LEN) { return TSDB_CODE_MND_ANODE_TOO_LONG_URL; } diff --git a/source/libs/qworker/inc/qwInt.h b/source/libs/qworker/inc/qwInt.h index 708c285aea..6d81baf91a 100644 --- a/source/libs/qworker/inc/qwInt.h +++ b/source/libs/qworker/inc/qwInt.h @@ -313,29 +313,29 @@ typedef struct SQWorkerMgmt { #define QW_SCH_DLOG(param, ...) qDebug("QW:%p SID:%" PRIx64 " " param, mgmt, sId, __VA_ARGS__) #define QW_TASK_ELOG(param, ...) \ - qError("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) + qError("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) #define QW_TASK_WLOG(param, ...) \ - qWarn("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) + qWarn("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) #define QW_TASK_DLOG(param, ...) \ - qDebug("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) + qDebug("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) #define QW_TASK_DLOGL(param, ...) 
\ - qDebugL("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) + qDebugL("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId, __VA_ARGS__) #define QW_TASK_ELOG_E(param) \ - qError("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId) + qError("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId) #define QW_TASK_WLOG_E(param) \ - qWarn("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId) + qWarn("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId) #define QW_TASK_DLOG_E(param) \ - qDebug("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId) + qDebug("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, qId, cId, tId, eId) #define QW_SCH_TASK_ELOG(param, ...) \ - qError("QW:%p SID:0x%" PRIx64 ",qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, mgmt, sId, \ + qError("QW:%p SID:0x%" PRIx64 ",QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, mgmt, sId, \ qId, cId, tId, eId, __VA_ARGS__) #define QW_SCH_TASK_WLOG(param, ...) \ - qWarn("QW:%p SID:0x%" PRIx64 ",qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, mgmt, sId, qId, \ + qWarn("QW:%p SID:0x%" PRIx64 ",QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, mgmt, sId, qId, \ cId, tId, eId, __VA_ARGS__) #define QW_SCH_TASK_DLOG(param, ...) \ - qDebug("QW:%p SID:0x%" PRIx64 ",qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, mgmt, sId, \ + qDebug("QW:%p SID:0x%" PRIx64 ",QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, mgmt, sId, \ qId, cId, tId, eId, __VA_ARGS__) #define QW_LOCK_DEBUG(...) 
\ diff --git a/source/libs/scheduler/inc/schInt.h b/source/libs/scheduler/inc/schInt.h index 6a910453f0..ef643852ea 100644 --- a/source/libs/scheduler/inc/schInt.h +++ b/source/libs/scheduler/inc/schInt.h @@ -62,7 +62,7 @@ typedef enum { #define SCH_DEFAULT_MAX_RETRY_NUM 6 #define SCH_MIN_AYSNC_EXEC_NUM 3 #define SCH_DEFAULT_RETRY_TOTAL_ROUND 3 -#define SCH_DEFAULT_TASK_CAPACITY_NUM 1000 +#define SCH_DEFAULT_TASK_CAPACITY_NUM 1000 typedef struct SSchDebug { bool lockEnable; @@ -333,12 +333,13 @@ extern SSchedulerMgmt schMgmt; #define SCH_UNLOCK_TASK(_task) SCH_UNLOCK(SCH_WRITE, &(_task)->lock) #define SCH_CLIENT_ID(_task) ((_task) ? (_task)->clientId : -1) -#define SCH_TASK_ID(_task) ((_task) ? (_task)->taskId : -1) -#define SCH_TASK_EID(_task) ((_task) ? (_task)->execId : -1) +#define SCH_TASK_ID(_task) ((_task) ? (_task)->taskId : -1) +#define SCH_TASK_EID(_task) ((_task) ? (_task)->execId : -1) #define SCH_IS_DATA_BIND_QRY_TASK(task) ((task)->plan->subplanType == SUBPLAN_TYPE_SCAN) -#define SCH_IS_DATA_BIND_PLAN(_plan) (((_plan)->subplanType == SUBPLAN_TYPE_SCAN) || ((_plan)->subplanType == SUBPLAN_TYPE_MODIFY)) -#define SCH_IS_DATA_BIND_TASK(task) SCH_IS_DATA_BIND_PLAN((task)->plan) +#define SCH_IS_DATA_BIND_PLAN(_plan) \ + (((_plan)->subplanType == SUBPLAN_TYPE_SCAN) || ((_plan)->subplanType == SUBPLAN_TYPE_MODIFY)) +#define SCH_IS_DATA_BIND_TASK(task) SCH_IS_DATA_BIND_PLAN((task)->plan) #define SCH_IS_LEAF_TASK(_job, _task) (((_task)->level->level + 1) == (_job)->levelNum) #define SCH_IS_DATA_MERGE_TASK(task) (!SCH_IS_DATA_BIND_TASK(task)) #define SCH_IS_LOCAL_EXEC_TASK(_job, _task) \ @@ -419,15 +420,15 @@ extern SSchedulerMgmt schMgmt; #define SCH_SWITCH_EPSET(_addr) ((_addr)->epSet.inUse = ((_addr)->epSet.inUse + 1) % (_addr)->epSet.numOfEps) #define SCH_TASK_NUM_OF_EPS(_addr) ((_addr)->epSet.numOfEps) -#define SCH_LOG_TASK_START_TS(_task) \ - do { \ - int64_t us = taosGetTimestampUs(); \ - if (NULL == taosArrayPush((_task)->profile.execTime, &us)) { \ - 
qError("taosArrayPush task execTime failed, error:%s", tstrerror(terrno)); \ - } \ - if (0 == (_task)->execId) { \ - (_task)->profile.startTs = us; \ - } \ +#define SCH_LOG_TASK_START_TS(_task) \ + do { \ + int64_t us = taosGetTimestampUs(); \ + if (NULL == taosArrayPush((_task)->profile.execTime, &us)) { \ + qError("taosArrayPush task execTime failed, error:%s", tstrerror(terrno)); \ + } \ + if (0 == (_task)->execId) { \ + (_task)->profile.startTs = us; \ + } \ } while (0) #define SCH_LOG_TASK_WAIT_TS(_task) \ @@ -450,23 +451,23 @@ extern SSchedulerMgmt schMgmt; (_task)->profile.endTs = us; \ } while (0) -#define SCH_JOB_ELOG(param, ...) qError("qid:0x%" PRIx64 " " param, pJob->queryId, __VA_ARGS__) -#define SCH_JOB_DLOG(param, ...) qDebug("qid:0x%" PRIx64 " " param, pJob->queryId, __VA_ARGS__) +#define SCH_JOB_ELOG(param, ...) qError("QID:0x%" PRIx64 " " param, pJob->queryId, __VA_ARGS__) +#define SCH_JOB_DLOG(param, ...) qDebug("QID:0x%" PRIx64 " " param, pJob->queryId, __VA_ARGS__) #define SCH_TASK_ELOG(param, ...) \ - qError("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ + qError("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ SCH_TASK_ID(pTask), SCH_TASK_EID(pTask), __VA_ARGS__) #define SCH_TASK_DLOG(param, ...) \ - qDebug("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ + qDebug("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ SCH_TASK_ID(pTask), SCH_TASK_EID(pTask), __VA_ARGS__) #define SCH_TASK_TLOG(param, ...) 
\ - qTrace("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ + qTrace("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ SCH_TASK_ID(pTask), SCH_TASK_EID(pTask), __VA_ARGS__) #define SCH_TASK_DLOGL(param, ...) \ - qDebugL("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ + qDebugL("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ SCH_TASK_ID(pTask), SCH_TASK_EID(pTask), __VA_ARGS__) #define SCH_TASK_WLOG(param, ...) \ - qWarn("qid:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ + qWarn("QID:0x%" PRIx64 ",CID:0x%" PRIx64 ",TID:0x%" PRIx64 ",EID:%d " param, pJob->queryId, SCH_CLIENT_ID(pTask), \ SCH_TASK_ID(pTask), SCH_TASK_EID(pTask), __VA_ARGS__) #define SCH_SET_ERRNO(_err) \ @@ -580,7 +581,7 @@ int32_t schDelayLaunchTask(SSchJob *pJob, SSchTask *pTask); int32_t schBuildAndSendMsg(SSchJob *job, SSchTask *task, SQueryNodeAddr *addr, int32_t msgType, void *param); int32_t schAcquireJob(int64_t refId, SSchJob **ppJob); int32_t schReleaseJob(int64_t refId); -int32_t schReleaseJobEx(int64_t refId, int32_t* released); +int32_t schReleaseJobEx(int64_t refId, int32_t *released); void schFreeFlowCtrl(SSchJob *pJob); int32_t schChkJobNeedFlowCtrl(SSchJob *pJob, SSchLevel *pLevel); int32_t schDecTaskFlowQuota(SSchJob *pJob, SSchTask *pTask); @@ -648,7 +649,7 @@ void schDropTaskInHashList(SSchJob *pJob, SHashObj *list); int32_t schNotifyTaskInHashList(SSchJob *pJob, SHashObj *list, ETaskNotifyType type, SSchTask *pTask); int32_t schLaunchLevelTasks(SSchJob *pJob, SSchLevel *level); void schGetTaskFromList(SHashObj *pTaskList, uint64_t taskId, SSchTask **pTask); -int32_t schValidateSubplan(SSchJob *pJob, SSubplan* pSubplan, int32_t level, int32_t idx, int32_t taskNum); +int32_t 
schValidateSubplan(SSchJob *pJob, SSubplan *pSubplan, int32_t level, int32_t idx, int32_t taskNum); int32_t schInitTask(SSchJob *pJob, SSchTask *pTask, SSubplan *pPlan, SSchLevel *pLevel); int32_t schSwitchTaskCandidateAddr(SSchJob *pJob, SSchTask *pTask); void schDirectPostJobRes(SSchedulerReq *pReq, int32_t errCode); diff --git a/source/libs/stream/src/streamUpdate.c b/source/libs/stream/src/streamUpdate.c index a3cfa00127..49d5041369 100644 --- a/source/libs/stream/src/streamUpdate.c +++ b/source/libs/stream/src/streamUpdate.c @@ -445,6 +445,11 @@ int32_t updateInfoSerialize(SEncoder* pEncoder, const SUpdateInfo* pInfo) { int32_t code = TSDB_CODE_SUCCESS; int32_t lino = 0; if (!pInfo) { + if (tEncodeI32(pEncoder, -1) < 0) { + code = TSDB_CODE_FAILED; + QUERY_CHECK_CODE(code, lino, _end); + } + uDebug("%s line:%d. it did not have updateinfo", __func__, __LINE__); return TSDB_CODE_SUCCESS; } @@ -550,6 +555,10 @@ int32_t updateInfoDeserialize(SDecoder* pDeCoder, SUpdateInfo* pInfo) { int32_t size = 0; if (tDecodeI32(pDeCoder, &size) < 0) return -1; + + if (size < 0) { + return -1; + } pInfo->pTsBuckets = taosArrayInit(size, sizeof(TSKEY)); QUERY_CHECK_NULL(pInfo->pTsBuckets, code, lino, _error, terrno); diff --git a/source/libs/transport/inc/transComm.h b/source/libs/transport/inc/transComm.h index 5c79b379ed..2ba88cdcc6 100644 --- a/source/libs/transport/inc/transComm.h +++ b/source/libs/transport/inc/transComm.h @@ -96,7 +96,7 @@ typedef void* queue[2]; // #define TRANS_RETRY_COUNT_LIMIT 100 // retry count limit // #define TRANS_RETRY_INTERVAL 15 // retry interval (ms) -#define TRANS_CONN_TIMEOUT 3000 // connect timeout (ms) +#define TRANS_CONN_TIMEOUT 5000 // connect timeout (ms) #define TRANS_READ_TIMEOUT 3000 // read timeout (ms) #define TRANS_PACKET_LIMIT 1024 * 1024 * 512 @@ -452,6 +452,7 @@ void transPrintEpSet(SEpSet* pEpSet); void transFreeMsg(void* msg); int32_t transCompressMsg(char* msg, int32_t len); int32_t transDecompressMsg(char** msg, int32_t* 
len); +int32_t transDecompressMsgExt(char const* msg, int32_t len, char** out, int32_t* outLen); int32_t transOpenRefMgt(int size, void (*func)(void*)); void transCloseRefMgt(int32_t refMgt); diff --git a/source/libs/transport/src/transCli.c b/source/libs/transport/src/transCli.c index 9ad92590ba..8377a1456d 100644 --- a/source/libs/transport/src/transCli.c +++ b/source/libs/transport/src/transCli.c @@ -725,7 +725,8 @@ void cliConnTimeout(uv_timer_t* handle) { return; } - tTrace("%s conn %p conn timeout", CONN_GET_INST_LABEL(conn), conn); + cliMayUpdateFqdnCache(pThrd->fqdn2ipCache, conn->dstAddr); + tTrace("%s conn %p failed to connect %s since conn timeout", CONN_GET_INST_LABEL(conn), conn, conn->dstAddr); TAOS_UNUSED(transUnrefCliHandle(conn)); } @@ -1334,13 +1335,31 @@ static void cliBatchSendCb(uv_write_t* req, int status) { } } bool cliConnMayAddUserInfo(SCliConn* pConn, STransMsgHead** ppHead, int32_t* msgLen) { + int32_t code = 0; SCliThrd* pThrd = pConn->hostThrd; STrans* pInst = pThrd->pInst; if (pConn->userInited == 1) { return false; } STransMsgHead* pHead = *ppHead; - STransMsgHead* tHead = taosMemoryCalloc(1, *msgLen + sizeof(pInst->user)); + int32_t len = *msgLen; + char* oriMsg = NULL; + int32_t oriLen = 0; + + if (pHead->comp == 1) { + int32_t msgLen = htonl(pHead->msgLen); + code = transDecompressMsgExt((char*)(pHead), msgLen, &oriMsg, &oriLen); + if (code < 0) { + tError("failed to decompress since %s", tstrerror(code)); + return false; + } else { + tDebug("decompress msg and resend, compress size %d, raw size %d", msgLen, oriLen); + } + + pHead = (STransMsgHead*)oriMsg; + len = oriLen; + } + STransMsgHead* tHead = taosMemoryCalloc(1, len + sizeof(pInst->user)); if (tHead == NULL) { return false; } @@ -1348,14 +1367,17 @@ bool cliConnMayAddUserInfo(SCliConn* pConn, STransMsgHead** ppHead, int32_t* msg memcpy((char*)tHead + TRANS_MSG_OVERHEAD, pInst->user, sizeof(pInst->user)); memcpy((char*)tHead + TRANS_MSG_OVERHEAD + sizeof(pInst->user),
(char*)pHead + TRANS_MSG_OVERHEAD, - *msgLen - TRANS_MSG_OVERHEAD); + len - TRANS_MSG_OVERHEAD); tHead->withUserInfo = 1; *ppHead = tHead; - *msgLen += sizeof(pInst->user); + *msgLen = len + sizeof(pInst->user); pConn->pInitUserReq = tHead; pConn->userInited = 1; + if (oriMsg != NULL) { + taosMemoryFree(oriMsg); + } return true; } int32_t cliBatchSend(SCliConn* pConn, int8_t direct) { @@ -1421,9 +1443,8 @@ int32_t cliBatchSend(SCliConn* pConn, int8_t direct) { pReq->contLen = 0; } - int32_t msgLen = transMsgLenFromCont(pReq->contLen); - STransMsgHead* pHead = transHeadFromCont(pReq->pCont); + int32_t msgLen = transMsgLenFromCont(pReq->contLen); char* content = pReq->pCont; int32_t contLen = pReq->contLen; diff --git a/source/libs/transport/src/transComm.c b/source/libs/transport/src/transComm.c index 66bd4a08f3..c0edcd54e4 100644 --- a/source/libs/transport/src/transComm.c +++ b/source/libs/transport/src/transComm.c @@ -77,6 +77,11 @@ int32_t transDecompressMsg(char** msg, int32_t* len) { STransMsgHead* pNewHead = (STransMsgHead*)buf; int32_t decompLen = LZ4_decompress_safe(pCont + sizeof(STransCompMsg), (char*)pNewHead->content, tlen - sizeof(STransMsgHead) - sizeof(STransCompMsg), oriLen); + + if (decompLen != oriLen) { + taosMemoryFree(buf); + return TSDB_CODE_INVALID_MSG; + } memcpy((char*)pNewHead, (char*)pHead, sizeof(STransMsgHead)); *len = oriLen + sizeof(STransMsgHead); @@ -84,9 +89,36 @@ int32_t transDecompressMsg(char** msg, int32_t* len) { taosMemoryFree(pHead); *msg = buf; + return 0; +} +int32_t transDecompressMsgExt(char const* msg, int32_t len, char** out, int32_t* outLen) { + STransMsgHead* pHead = (STransMsgHead*)msg; + char* pCont = transContFromHead(pHead); + + STransCompMsg* pComp = (STransCompMsg*)pCont; + int32_t oriLen = htonl(pComp->contLen); + + int32_t tlen = len; + char* buf = taosMemoryCalloc(1, oriLen + sizeof(STransMsgHead)); + if (buf == NULL) { + return terrno; + } + + STransMsgHead* pNewHead = (STransMsgHead*)buf; + int32_t 
decompLen = LZ4_decompress_safe(pCont + sizeof(STransCompMsg), (char*)pNewHead->content, + tlen - sizeof(STransMsgHead) - sizeof(STransCompMsg), oriLen); if (decompLen != oriLen) { + tError("msgLen:%d, originLen:%d, decompLen:%d", len, oriLen, decompLen); + taosMemoryFree(buf); return TSDB_CODE_INVALID_MSG; } + memcpy((char*)pNewHead, (char*)pHead, sizeof(STransMsgHead)); + + *out = buf; + *outLen = oriLen + sizeof(STransMsgHead); + pNewHead->msgLen = *outLen; + pNewHead->comp = 0; + return 0; } diff --git a/source/libs/wal/src/walMeta.c b/source/libs/wal/src/walMeta.c index da26ddae3a..ce2b9218b5 100644 --- a/source/libs/wal/src/walMeta.c +++ b/source/libs/wal/src/walMeta.c @@ -415,10 +415,10 @@ static void printFileSet(int32_t vgId, SArray* fileSet, const char* str) { int32_t sz = taosArrayGetSize(fileSet); for (int32_t i = 0; i < sz; i++) { SWalFileInfo* pFileInfo = taosArrayGet(fileSet, i); - wInfo("vgId:%d, %s-%d, firstVer:%" PRId64 ", lastVer:%" PRId64 ", fileSize:%" PRId64 ", syncedOffset:%" PRId64 - ", createTs:%" PRId64 ", closeTs:%" PRId64, - vgId, str, i, pFileInfo->firstVer, pFileInfo->lastVer, pFileInfo->fileSize, pFileInfo->syncedOffset, - pFileInfo->createTs, pFileInfo->closeTs); + wTrace("vgId:%d, %s-%d, firstVer:%" PRId64 ", lastVer:%" PRId64 ", fileSize:%" PRId64 ", syncedOffset:%" PRId64 + ", createTs:%" PRId64 ", closeTs:%" PRId64, + vgId, str, i, pFileInfo->firstVer, pFileInfo->lastVer, pFileInfo->fileSize, pFileInfo->syncedOffset, + pFileInfo->createTs, pFileInfo->closeTs); } } diff --git a/source/util/src/tanalytics.c b/source/util/src/tanalytics.c index 99d91700a2..68bbbb7e99 100644 --- a/source/util/src/tanalytics.c +++ b/source/util/src/tanalytics.c @@ -34,7 +34,7 @@ typedef struct { } SCurlResp; static SAlgoMgmt tsAlgos = {0}; -static int32_t taosAnalBufGetCont(SAnalBuf *pBuf, char **ppCont, int64_t *pContLen); +static int32_t taosAnalBufGetCont(SAnalyticBuf *pBuf, char **ppCont, int64_t *pContLen); const char 
*taosAnalAlgoStr(EAnalAlgoType type) { switch (type) { @@ -127,28 +127,44 @@ void taosAnalUpdate(int64_t newVer, SHashObj *pHash) { } bool taosAnalGetOptStr(const char *option, const char *optName, char *optValue, int32_t optMaxLen) { - char buf[TSDB_ANAL_ALGO_OPTION_LEN] = {0}; - int32_t bufLen = tsnprintf(buf, sizeof(buf), "%s=", optName); + char buf[TSDB_ANALYTIC_ALGO_OPTION_LEN] = {0}; + char *pStart = NULL; + char *pEnd = NULL; - char *pos1 = strstr(option, buf); - char *pos2 = strstr(option, ANAL_ALGO_SPLIT); - if (pos1 != NULL) { - if (optMaxLen > 0) { - int32_t copyLen = optMaxLen; - if (pos2 != NULL) { - copyLen = (int32_t)(pos2 - pos1 - strlen(optName)); - copyLen = MIN(copyLen, optMaxLen); - } - tstrncpy(optValue, pos1 + bufLen, copyLen); - } - return true; - } else { + pStart = strstr(option, optName); + if (pStart == NULL) { return false; } + + pEnd = strstr(pStart, ANAL_ALGO_SPLIT); + if (optMaxLen > 0) { + if (pEnd > pStart) { + int32_t len = (int32_t)(pEnd - pStart); + len = MIN(len + 1, TSDB_ANALYTIC_ALGO_OPTION_LEN); + tstrncpy(buf, pStart, len); + } else { + int32_t len = MIN(tListLen(buf), strlen(pStart) + 1); + tstrncpy(buf, pStart, len); + } + + char *pRight = strstr(buf, "="); + if (pRight == NULL) { + return false; + } else { + pRight += 1; + } + + int32_t unused = strtrim(pRight); + + int32_t vLen = MIN(optMaxLen, strlen(pRight) + 1); + tstrncpy(optValue, pRight, vLen); + } + + return true; } bool taosAnalGetOptInt(const char *option, const char *optName, int64_t *optValue) { - char buf[TSDB_ANAL_ALGO_OPTION_LEN] = {0}; + char buf[TSDB_ANALYTIC_ALGO_OPTION_LEN] = {0}; int32_t bufLen = tsnprintf(buf, sizeof(buf), "%s=", optName); char *pos1 = strstr(option, buf); @@ -163,7 +179,7 @@ bool taosAnalGetOptInt(const char *option, const char *optName, int64_t *optValu int32_t taosAnalGetAlgoUrl(const char *algoName, EAnalAlgoType type, char *url, int32_t urlLen) { int32_t code = 0; - char name[TSDB_ANAL_ALGO_KEY_LEN] = {0}; + char 
name[TSDB_ANALYTIC_ALGO_KEY_LEN] = {0}; int32_t nameLen = 1 + tsnprintf(name, sizeof(name) - 1, "%d:%s", type, algoName); char *unused = strntolower(name, name, nameLen); @@ -175,7 +191,7 @@ int32_t taosAnalGetAlgoUrl(const char *algoName, EAnalAlgoType type, char *url, uDebug("algo:%s, type:%s, url:%s", algoName, taosAnalAlgoStr(type), url); } else { url[0] = 0; - terrno = TSDB_CODE_ANAL_ALGO_NOT_FOUND; + terrno = TSDB_CODE_ANA_ALGO_NOT_FOUND; code = terrno; uError("algo:%s, type:%s, url not found", algoName, taosAnalAlgoStr(type)); } @@ -276,16 +292,16 @@ _OVER: return code; } -SJson *taosAnalSendReqRetJson(const char *url, EAnalHttpType type, SAnalBuf *pBuf) { +SJson *taosAnalSendReqRetJson(const char *url, EAnalHttpType type, SAnalyticBuf *pBuf) { int32_t code = -1; char *pCont = NULL; int64_t contentLen; SJson *pJson = NULL; SCurlResp curlRsp = {0}; - if (type == ANAL_HTTP_TYPE_GET) { + if (type == ANALYTICS_HTTP_TYPE_GET) { if (taosCurlGetRequest(url, &curlRsp) != 0) { - terrno = TSDB_CODE_ANAL_URL_CANT_ACCESS; + terrno = TSDB_CODE_ANA_URL_CANT_ACCESS; goto _OVER; } } else { @@ -295,20 +311,20 @@ SJson *taosAnalSendReqRetJson(const char *url, EAnalHttpType type, SAnalBuf *pBu goto _OVER; } if (taosCurlPostRequest(url, &curlRsp, pCont, contentLen) != 0) { - terrno = TSDB_CODE_ANAL_URL_CANT_ACCESS; + terrno = TSDB_CODE_ANA_URL_CANT_ACCESS; goto _OVER; } } if (curlRsp.data == NULL || curlRsp.dataLen == 0) { - terrno = TSDB_CODE_ANAL_URL_RSP_IS_NULL; + terrno = TSDB_CODE_ANA_URL_RSP_IS_NULL; goto _OVER; } pJson = tjsonParse(curlRsp.data); if (pJson == NULL) { if (curlRsp.data[0] == '<') { - terrno = TSDB_CODE_ANAL_ANODE_RETURN_ERROR; + terrno = TSDB_CODE_ANA_ANODE_RETURN_ERROR; } else { terrno = TSDB_CODE_INVALID_JSON_FORMAT; } @@ -360,7 +376,7 @@ _OVER: return code; } -static int32_t taosAnalJsonBufWriteOptInt(SAnalBuf *pBuf, const char *optName, int64_t optVal) { +static int32_t taosAnalJsonBufWriteOptInt(SAnalyticBuf *pBuf, const char *optName, int64_t optVal) 
{ char buf[64] = {0}; int32_t bufLen = tsnprintf(buf, sizeof(buf), "\"%s\": %" PRId64 ",\n", optName, optVal); if (taosWriteFile(pBuf->filePtr, buf, bufLen) != bufLen) { @@ -369,7 +385,7 @@ static int32_t taosAnalJsonBufWriteOptInt(SAnalBuf *pBuf, const char *optName, i return 0; } -static int32_t taosAnalJsonBufWriteOptStr(SAnalBuf *pBuf, const char *optName, const char *optVal) { +static int32_t taosAnalJsonBufWriteOptStr(SAnalyticBuf *pBuf, const char *optName, const char *optVal) { char buf[128] = {0}; int32_t bufLen = tsnprintf(buf, sizeof(buf), "\"%s\": \"%s\",\n", optName, optVal); if (taosWriteFile(pBuf->filePtr, buf, bufLen) != bufLen) { @@ -378,7 +394,7 @@ static int32_t taosAnalJsonBufWriteOptStr(SAnalBuf *pBuf, const char *optName, c return 0; } -static int32_t taosAnalJsonBufWriteOptFloat(SAnalBuf *pBuf, const char *optName, float optVal) { +static int32_t taosAnalJsonBufWriteOptFloat(SAnalyticBuf *pBuf, const char *optName, float optVal) { char buf[128] = {0}; int32_t bufLen = tsnprintf(buf, sizeof(buf), "\"%s\": %f,\n", optName, optVal); if (taosWriteFile(pBuf->filePtr, buf, bufLen) != bufLen) { @@ -387,7 +403,7 @@ static int32_t taosAnalJsonBufWriteOptFloat(SAnalBuf *pBuf, const char *optName, return 0; } -static int32_t taosAnalJsonBufWriteStr(SAnalBuf *pBuf, const char *buf, int32_t bufLen) { +static int32_t taosAnalJsonBufWriteStr(SAnalyticBuf *pBuf, const char *buf, int32_t bufLen) { if (bufLen <= 0) { bufLen = strlen(buf); } @@ -397,9 +413,9 @@ static int32_t taosAnalJsonBufWriteStr(SAnalBuf *pBuf, const char *buf, int32_t return 0; } -static int32_t taosAnalJsonBufWriteStart(SAnalBuf *pBuf) { return taosAnalJsonBufWriteStr(pBuf, "{\n", 0); } +static int32_t taosAnalJsonBufWriteStart(SAnalyticBuf *pBuf) { return taosAnalJsonBufWriteStr(pBuf, "{\n", 0); } -static int32_t tsosAnalJsonBufOpen(SAnalBuf *pBuf, int32_t numOfCols) { +static int32_t tsosAnalJsonBufOpen(SAnalyticBuf *pBuf, int32_t numOfCols) { pBuf->filePtr = 
taosOpenFile(pBuf->fileName, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_TRUNC | TD_FILE_WRITE_THROUGH); if (pBuf->filePtr == NULL) { return terrno; @@ -409,7 +425,7 @@ static int32_t tsosAnalJsonBufOpen(SAnalBuf *pBuf, int32_t numOfCols) { if (pBuf->pCols == NULL) return TSDB_CODE_OUT_OF_MEMORY; pBuf->numOfCols = numOfCols; - if (pBuf->bufType == ANAL_BUF_TYPE_JSON) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON) { return taosAnalJsonBufWriteStart(pBuf); } @@ -426,7 +442,7 @@ static int32_t tsosAnalJsonBufOpen(SAnalBuf *pBuf, int32_t numOfCols) { return taosAnalJsonBufWriteStart(pBuf); } -static int32_t taosAnalJsonBufWriteColMeta(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { +static int32_t taosAnalJsonBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { char buf[128] = {0}; bool first = (colIndex == 0); bool last = (colIndex == pBuf->numOfCols - 1); @@ -452,16 +468,16 @@ static int32_t taosAnalJsonBufWriteColMeta(SAnalBuf *pBuf, int32_t colIndex, int return 0; } -static int32_t taosAnalJsonBufWriteDataBegin(SAnalBuf *pBuf) { +static int32_t taosAnalJsonBufWriteDataBegin(SAnalyticBuf *pBuf) { return taosAnalJsonBufWriteStr(pBuf, "\"data\": [\n", 0); } -static int32_t taosAnalJsonBufWriteStrUseCol(SAnalBuf *pBuf, const char *buf, int32_t bufLen, int32_t colIndex) { +static int32_t taosAnalJsonBufWriteStrUseCol(SAnalyticBuf *pBuf, const char *buf, int32_t bufLen, int32_t colIndex) { if (bufLen <= 0) { bufLen = strlen(buf); } - if (pBuf->bufType == ANAL_BUF_TYPE_JSON) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON) { if (taosWriteFile(pBuf->filePtr, buf, bufLen) != bufLen) { return terrno; } @@ -474,11 +490,11 @@ static int32_t taosAnalJsonBufWriteStrUseCol(SAnalBuf *pBuf, const char *buf, in return 0; } -static int32_t taosAnalJsonBufWriteColBegin(SAnalBuf *pBuf, int32_t colIndex) { +static int32_t taosAnalJsonBufWriteColBegin(SAnalyticBuf *pBuf, int32_t colIndex) { return 
taosAnalJsonBufWriteStrUseCol(pBuf, "[\n", 0, colIndex); } -static int32_t taosAnalJsonBufWriteColEnd(SAnalBuf *pBuf, int32_t colIndex) { +static int32_t taosAnalJsonBufWriteColEnd(SAnalyticBuf *pBuf, int32_t colIndex) { if (colIndex == pBuf->numOfCols - 1) { return taosAnalJsonBufWriteStrUseCol(pBuf, "\n]\n", 0, colIndex); @@ -487,7 +503,7 @@ static int32_t taosAnalJsonBufWriteColEnd(SAnalBuf *pBuf, int32_t colIndex) { } } -static int32_t taosAnalJsonBufWriteColData(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { +static int32_t taosAnalJsonBufWriteColData(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { char buf[64]; int32_t bufLen = 0; @@ -541,12 +557,12 @@ static int32_t taosAnalJsonBufWriteColData(SAnalBuf *pBuf, int32_t colIndex, int return taosAnalJsonBufWriteStrUseCol(pBuf, buf, bufLen, colIndex); } -static int32_t taosAnalJsonBufWriteDataEnd(SAnalBuf *pBuf) { +static int32_t taosAnalJsonBufWriteDataEnd(SAnalyticBuf *pBuf) { int32_t code = 0; char *pCont = NULL; int64_t contLen = 0; - if (pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { for (int32_t i = 0; i < pBuf->numOfCols; ++i) { SAnalyticsColBuf *pCol = &pBuf->pCols[i]; @@ -570,14 +586,14 @@ static int32_t taosAnalJsonBufWriteDataEnd(SAnalBuf *pBuf) { return taosAnalJsonBufWriteStr(pBuf, "],\n", 0); } -static int32_t taosAnalJsonBufWriteEnd(SAnalBuf *pBuf) { +static int32_t taosAnalJsonBufWriteEnd(SAnalyticBuf *pBuf) { int32_t code = taosAnalJsonBufWriteOptInt(pBuf, "rows", pBuf->pCols[0].numOfRows); if (code != 0) return code; return taosAnalJsonBufWriteStr(pBuf, "\"protocol\": 1.0\n}", 0); } -int32_t taosAnalJsonBufClose(SAnalBuf *pBuf) { +int32_t taosAnalJsonBufClose(SAnalyticBuf *pBuf) { int32_t code = taosAnalJsonBufWriteEnd(pBuf); if (code != 0) return code; @@ -588,7 +604,7 @@ int32_t taosAnalJsonBufClose(SAnalBuf *pBuf) { if (code != 0) return code; } - if (pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { + 
if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { for (int32_t i = 0; i < pBuf->numOfCols; ++i) { SAnalyticsColBuf *pCol = &pBuf->pCols[i]; if (pCol->filePtr != NULL) { @@ -603,14 +619,14 @@ int32_t taosAnalJsonBufClose(SAnalBuf *pBuf) { return 0; } -void taosAnalBufDestroy(SAnalBuf *pBuf) { +void taosAnalBufDestroy(SAnalyticBuf *pBuf) { if (pBuf->fileName[0] != 0) { if (pBuf->filePtr != NULL) (void)taosCloseFile(&pBuf->filePtr); // taosRemoveFile(pBuf->fileName); pBuf->fileName[0] = 0; } - if (pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { for (int32_t i = 0; i < pBuf->numOfCols; ++i) { SAnalyticsColBuf *pCol = &pBuf->pCols[i]; if (pCol->fileName[0] != 0) { @@ -627,102 +643,102 @@ void taosAnalBufDestroy(SAnalBuf *pBuf) { pBuf->numOfCols = 0; } -int32_t tsosAnalBufOpen(SAnalBuf *pBuf, int32_t numOfCols) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t tsosAnalBufOpen(SAnalyticBuf *pBuf, int32_t numOfCols) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return tsosAnalJsonBufOpen(pBuf, numOfCols); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteOptStr(SAnalBuf *pBuf, const char *optName, const char *optVal) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteOptStr(SAnalyticBuf *pBuf, const char *optName, const char *optVal) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteOptStr(pBuf, optName, optVal); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteOptInt(SAnalBuf *pBuf, const char *optName, int64_t optVal) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t 
taosAnalBufWriteOptInt(SAnalyticBuf *pBuf, const char *optName, int64_t optVal) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteOptInt(pBuf, optName, optVal); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteOptFloat(SAnalBuf *pBuf, const char *optName, float optVal) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteOptFloat(SAnalyticBuf *pBuf, const char *optName, float optVal) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteOptFloat(pBuf, optName, optVal); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteColMeta(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteColMeta(pBuf, colIndex, colType, colName); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteDataBegin(SAnalBuf *pBuf) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteDataBegin(SAnalyticBuf *pBuf) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteDataBegin(pBuf); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteColBegin(SAnalBuf *pBuf, int32_t colIndex) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == 
ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteColBegin(SAnalyticBuf *pBuf, int32_t colIndex) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteColBegin(pBuf, colIndex); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteColData(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteColData(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteColData(pBuf, colIndex, colType, colValue); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteColEnd(SAnalBuf *pBuf, int32_t colIndex) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteColEnd(SAnalyticBuf *pBuf, int32_t colIndex) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteColEnd(pBuf, colIndex); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufWriteDataEnd(SAnalBuf *pBuf) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufWriteDataEnd(pBuf); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -int32_t taosAnalBufClose(SAnalBuf *pBuf) { - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { +int32_t taosAnalBufClose(SAnalyticBuf *pBuf) { + if 
(pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufClose(pBuf); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } -static int32_t taosAnalBufGetCont(SAnalBuf *pBuf, char **ppCont, int64_t *pContLen) { +static int32_t taosAnalBufGetCont(SAnalyticBuf *pBuf, char **ppCont, int64_t *pContLen) { *ppCont = NULL; *pContLen = 0; - if (pBuf->bufType == ANAL_BUF_TYPE_JSON || pBuf->bufType == ANAL_BUF_TYPE_JSON_COL) { + if (pBuf->bufType == ANALYTICS_BUF_TYPE_JSON || pBuf->bufType == ANALYTICS_BUF_TYPE_JSON_COL) { return taosAnalJsonBufGetCont(pBuf->fileName, ppCont, pContLen); } else { - return TSDB_CODE_ANAL_BUF_INVALID_TYPE; + return TSDB_CODE_ANA_BUF_INVALID_TYPE; } } @@ -730,7 +746,7 @@ static int32_t taosAnalBufGetCont(SAnalBuf *pBuf, char **ppCont, int64_t *pContL int32_t taosAnalyticsInit() { return 0; } void taosAnalyticsCleanup() {} -SJson *taosAnalSendReqRetJson(const char *url, EAnalHttpType type, SAnalBuf *pBuf) { return NULL; } +SJson *taosAnalSendReqRetJson(const char *url, EAnalHttpType type, SAnalyticBuf *pBuf) { return NULL; } int32_t taosAnalGetAlgoUrl(const char *algoName, EAnalAlgoType type, char *url, int32_t urlLen) { return 0; } bool taosAnalGetOptStr(const char *option, const char *optName, char *optValue, int32_t optMaxLen) { return true; } @@ -738,18 +754,18 @@ bool taosAnalGetOptInt(const char *option, const char *optName, int64_t *optV int64_t taosAnalGetVersion() { return 0; } void taosAnalUpdate(int64_t newVer, SHashObj *pHash) {} -int32_t tsosAnalBufOpen(SAnalBuf *pBuf, int32_t numOfCols) { return 0; } -int32_t taosAnalBufWriteOptStr(SAnalBuf *pBuf, const char *optName, const char *optVal) { return 0; } -int32_t taosAnalBufWriteOptInt(SAnalBuf *pBuf, const char *optName, int64_t optVal) { return 0; } -int32_t taosAnalBufWriteOptFloat(SAnalBuf *pBuf, const char *optName, float optVal) { return 0; } -int32_t 
taosAnalBufWriteColMeta(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { return 0; } -int32_t taosAnalBufWriteDataBegin(SAnalBuf *pBuf) { return 0; } -int32_t taosAnalBufWriteColBegin(SAnalBuf *pBuf, int32_t colIndex) { return 0; } -int32_t taosAnalBufWriteColData(SAnalBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { return 0; } -int32_t taosAnalBufWriteColEnd(SAnalBuf *pBuf, int32_t colIndex) { return 0; } -int32_t taosAnalBufWriteDataEnd(SAnalBuf *pBuf) { return 0; } -int32_t taosAnalBufClose(SAnalBuf *pBuf) { return 0; } -void taosAnalBufDestroy(SAnalBuf *pBuf) {} +int32_t tsosAnalBufOpen(SAnalyticBuf *pBuf, int32_t numOfCols) { return 0; } +int32_t taosAnalBufWriteOptStr(SAnalyticBuf *pBuf, const char *optName, const char *optVal) { return 0; } +int32_t taosAnalBufWriteOptInt(SAnalyticBuf *pBuf, const char *optName, int64_t optVal) { return 0; } +int32_t taosAnalBufWriteOptFloat(SAnalyticBuf *pBuf, const char *optName, float optVal) { return 0; } +int32_t taosAnalBufWriteColMeta(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, const char *colName) { return 0; } +int32_t taosAnalBufWriteDataBegin(SAnalyticBuf *pBuf) { return 0; } +int32_t taosAnalBufWriteColBegin(SAnalyticBuf *pBuf, int32_t colIndex) { return 0; } +int32_t taosAnalBufWriteColData(SAnalyticBuf *pBuf, int32_t colIndex, int32_t colType, void *colValue) { return 0; } +int32_t taosAnalBufWriteColEnd(SAnalyticBuf *pBuf, int32_t colIndex) { return 0; } +int32_t taosAnalBufWriteDataEnd(SAnalyticBuf *pBuf) { return 0; } +int32_t taosAnalBufClose(SAnalyticBuf *pBuf) { return 0; } +void taosAnalBufDestroy(SAnalyticBuf *pBuf) {} const char *taosAnalAlgoStr(EAnalAlgoType algoType) { return 0; } EAnalAlgoType taosAnalAlgoInt(const char *algoName) { return 0; } diff --git a/source/util/src/terror.c b/source/util/src/terror.c index 00f72123dc..9e8a85d301 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -361,13 +361,14 @@ 
TAOS_DEFINE_ERROR(TSDB_CODE_MND_ANODE_TOO_MANY_ALGO, "Anode too many algori TAOS_DEFINE_ERROR(TSDB_CODE_MND_ANODE_TOO_LONG_ALGO_NAME, "Anode too long algorithm name") TAOS_DEFINE_ERROR(TSDB_CODE_MND_ANODE_TOO_MANY_ALGO_TYPE, "Anode too many algorithm type") -TAOS_DEFINE_ERROR(TSDB_CODE_ANAL_URL_RSP_IS_NULL, "Analysis service response is NULL") -TAOS_DEFINE_ERROR(TSDB_CODE_ANAL_URL_CANT_ACCESS, "Analysis service can't access") -TAOS_DEFINE_ERROR(TSDB_CODE_ANAL_ALGO_NOT_FOUND, "Analysis algorithm not found") -TAOS_DEFINE_ERROR(TSDB_CODE_ANAL_ALGO_NOT_LOAD, "Analysis algorithm not loaded") -TAOS_DEFINE_ERROR(TSDB_CODE_ANAL_BUF_INVALID_TYPE, "Analysis invalid buffer type") -TAOS_DEFINE_ERROR(TSDB_CODE_ANAL_ANODE_RETURN_ERROR, "Analysis failed since anode return error") -TAOS_DEFINE_ERROR(TSDB_CODE_ANAL_ANODE_TOO_MANY_ROWS, "Analysis failed since too many input rows for anode") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_URL_RSP_IS_NULL, "Analysis service response is NULL") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_URL_CANT_ACCESS, "Analysis service can't be accessed") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_ALGO_NOT_FOUND, "Analysis algorithm is missing") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_ALGO_NOT_LOAD, "Analysis algorithm not loaded") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_BUF_INVALID_TYPE, "Analysis invalid buffer type") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_ANODE_RETURN_ERROR, "Analysis failed since anode returned error") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_ANODE_TOO_MANY_ROWS, "Analysis failed since too many input rows for anode") +TAOS_DEFINE_ERROR(TSDB_CODE_ANA_WN_DATA, "White-noise data not processed") // mnode-sma TAOS_DEFINE_ERROR(TSDB_CODE_MND_SMA_ALREADY_EXIST, "SMA already exists") diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task index ab44102c98..b7298d359b 100644 --- a/tests/parallel_test/cases.task +++ b/tests/parallel_test/cases.task @@ -1303,6 +1303,7 @@ #,,y,script,./test.sh -f tsim/mnode/basic3.sim ,,y,script,./test.sh -f tsim/mnode/basic4.sim ,,y,script,./test.sh -f
tsim/mnode/basic5.sim +,,y,script,./test.sh -f tsim/mnode/basic6.sim ,,y,script,./test.sh -f tsim/show/basic.sim ,,y,script,./test.sh -f tsim/table/autocreate.sim ,,y,script,./test.sh -f tsim/table/basic1.sim diff --git a/tests/pytest/auto_crash_gen.py b/tests/pytest/auto_crash_gen.py index 4e4679db6a..f6b31b4691 100755 --- a/tests/pytest/auto_crash_gen.py +++ b/tests/pytest/auto_crash_gen.py @@ -16,7 +16,18 @@ msg_dict = {0: "success", 1: "failed", 2: "other errors", 3: "crash occured", 4: # formal hostname = socket.gethostname() -group_url = 'https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9' +group_url_test = ( + 'https://open.feishu.cn/open-apis/bot/v2/hook/7e409a8e-4390-4043-80d0-4e0dd2cbae7d' +) + +notification_robot_url = ( + "https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9" +) + +alert_robot_url = ( + "https://open.feishu.cn/open-apis/bot/v2/hook/02363732-91f1-49c4-879c-4e98cf31a5f3" +) + def get_msg(text): return { @@ -37,12 +48,12 @@ def get_msg(text): } -def send_msg(json): +def send_msg(url:str,json:dict): headers = { 'Content-Type': 'application/json' } - req = requests.post(url=group_url, headers=headers, json=json) + req = requests.post(url=url, headers=headers, json=json) inf = req.json() if "StatusCode" in inf and inf["StatusCode"] == 0: pass @@ -355,18 +366,27 @@ def main(): core_dir = "none" text = f''' - exit status: {msg_dict[status]} - test scope: crash_gen - owner: pxiao - hostname: {hostname} - start time: {starttime} - end time: {endtime} - git commit : {git_commit} - log dir: {log_dir} - core dir: {core_dir} - cmd: {cmd}''' +Result: {msg_dict[status]} - send_msg(get_msg(text)) +Details +Owner: Jayden Jia +Start time: {starttime} +End time: {endtime} +Hostname: {hostname} +Commit: {git_commit} +Cmd: {cmd} +Log dir: {log_dir} +Core dir: {core_dir} +''' + text_result=text.split("Result: ")[1].split("Details")[0].strip() + print(text_result) + + if text_result == "success": + 
send_msg(notification_robot_url, get_msg(text)) + else: + send_msg(alert_robot_url, get_msg(text)) + + #send_msg(get_msg(text)) except Exception as e: print("exception:", e) exit(status) diff --git a/tests/pytest/auto_crash_gen_valgrind.py b/tests/pytest/auto_crash_gen_valgrind.py index 1e0de6ace1..b346aca308 100755 --- a/tests/pytest/auto_crash_gen_valgrind.py +++ b/tests/pytest/auto_crash_gen_valgrind.py @@ -19,7 +19,18 @@ msg_dict = {0: "success", 1: "failed", 2: "other errors", 3: "crash occured", 4: # formal hostname = socket.gethostname() -group_url = 'https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9' +group_url_test = ( + 'https://open.feishu.cn/open-apis/bot/v2/hook/7e409a8e-4390-4043-80d0-4e0dd2cbae7d' +) + +notification_robot_url = ( + "https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9" +) + +alert_robot_url = ( + "https://open.feishu.cn/open-apis/bot/v2/hook/02363732-91f1-49c4-879c-4e98cf31a5f3" +) + def get_msg(text): return { @@ -40,13 +51,12 @@ def get_msg(text): } -def send_msg(json): +def send_msg(url:str,json:dict): headers = { 'Content-Type': 'application/json' } - - req = requests.post(url=group_url, headers=headers, json=json) + req = requests.post(url=url, headers=headers, json=json) inf = req.json() if "StatusCode" in inf and inf["StatusCode"] == 0: pass @@ -389,18 +399,28 @@ def main(): core_dir = "none" text = f''' - exit status: {msg_dict[status]} - test scope: crash_gen - owner: pxiao - hostname: {hostname} - start time: {starttime} - end time: {endtime} - git commit : {git_commit} - log dir: {log_dir} - core dir: {core_dir} - cmd: {cmd}''' +Result: {msg_dict[status]} - send_msg(get_msg(text)) +Details +Owner: Jayden Jia +Start time: {starttime} +End time: {endtime} +Hostname: {hostname} +Commit: {git_commit} +Cmd: {cmd} +Log dir: {log_dir} +Core dir: {core_dir} +''' + + text_result=text.split("Result: ")[1].split("Details")[0].strip() + print(text_result) + + if text_result 
== "success": + send_msg(notification_robot_url, get_msg(text)) + else: + send_msg(alert_robot_url, get_msg(text)) + + #send_msg(get_msg(text)) except Exception as e: print("exception:", e) exit(status) diff --git a/tests/pytest/auto_crash_gen_valgrind_cluster.py b/tests/pytest/auto_crash_gen_valgrind_cluster.py index 22f453e51e..522ad48640 100755 --- a/tests/pytest/auto_crash_gen_valgrind_cluster.py +++ b/tests/pytest/auto_crash_gen_valgrind_cluster.py @@ -16,7 +16,18 @@ msg_dict = {0: "success", 1: "failed", 2: "other errors", 3: "crash occured", 4: # formal hostname = socket.gethostname() -group_url = 'https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9' +group_url_test = ( + 'https://open.feishu.cn/open-apis/bot/v2/hook/7e409a8e-4390-4043-80d0-4e0dd2cbae7d' +) + +notification_robot_url = ( + "https://open.feishu.cn/open-apis/bot/v2/hook/56c333b5-eae9-4c18-b0b6-7e4b7174f5c9" +) + +alert_robot_url = ( + "https://open.feishu.cn/open-apis/bot/v2/hook/02363732-91f1-49c4-879c-4e98cf31a5f3" +) + def get_msg(text): return { @@ -37,12 +48,12 @@ def get_msg(text): } -def send_msg(json): +def send_msg(url:str,json:dict): headers = { 'Content-Type': 'application/json' } - req = requests.post(url=group_url, headers=headers, json=json) + req = requests.post(url=url, headers=headers, json=json) inf = req.json() if "StatusCode" in inf and inf["StatusCode"] == 0: pass @@ -376,18 +387,28 @@ def main(): core_dir = "none" text = f''' - exit status: {msg_dict[status]} - test scope: crash_gen - owner: pxiao - hostname: {hostname} - start time: {starttime} - end time: {endtime} - git commit : {git_commit} - log dir: {log_dir} - core dir: {core_dir} - cmd: {cmd}''' - - send_msg(get_msg(text)) +Result: {msg_dict[status]} + +Details +Owner: Jayden Jia +Start time: {starttime} +End time: {endtime} +Hostname: {hostname} +Commit: {git_commit} +Cmd: {cmd} +Log dir: {log_dir} +Core dir: {core_dir} +''' + + text_result=text.split("Result: 
")[1].split("Details")[0].strip() + print(text_result) + + if text_result == "success": + send_msg(notification_robot_url, get_msg(text)) + else: + send_msg(alert_robot_url, get_msg(text)) + + #send_msg(get_msg(text)) except Exception as e: print("exception:", e) exit(status) diff --git a/tests/script/tsim/mnode/basic6.sim b/tests/script/tsim/mnode/basic6.sim new file mode 100644 index 0000000000..4ee56ff555 --- /dev/null +++ b/tests/script/tsim/mnode/basic6.sim @@ -0,0 +1,413 @@ +system sh/stop_dnodes.sh +system sh/deploy.sh -n dnode1 -i 1 +system sh/deploy.sh -n dnode2 -i 2 +system sh/deploy.sh -n dnode3 -i 3 +system sh/deploy.sh -n dnode4 -i 4 +system sh/cfg.sh -n dnode1 -c compressMsgSize -v 0 +system sh/cfg.sh -n dnode2 -c compressMsgSize -v 0 +system sh/cfg.sh -n dnode3 -c compressMsgSize -v 0 +system sh/cfg.sh -n dnode4 -c compressMsgSize -v 0 +system sh/exec.sh -n dnode1 -s start +sql connect + +print =============== step1: create dnodes +sql create dnode $hostname port 7200 +sql create dnode $hostname port 7300 +sql create dnode $hostname port 7400 + +$x = 0 +step1: + $x = $x + 1 + sleep 1000 + if $x == 5 then + return -1 + endi +sql select * from information_schema.ins_dnodes +if $data(1)[4] != ready then + goto step1 +endi + +print =============== step2: create dnodes - with error +sql_error create mnode on dnode 1; +sql_error create mnode on dnode 2; +sql_error create mnode on dnode 3; +sql_error create mnode on dnode 4; +sql_error create mnode on dnode 5; +sql_error create mnode on dnode 6; + +print =============== step3: create mnode 2 and 3 +system sh/exec.sh -n dnode2 -s start +system sh/exec.sh -n dnode3 -s start +system sh/exec.sh -n dnode4 -s start +$x = 0 +step3: + $x = $x + 1 + sleep 1000 + if $x == 5 then + return -1 + endi +sql select * from information_schema.ins_dnodes +if $data(2)[4] != ready then + goto step3 +endi +if $data(3)[4] != ready then + goto step3 +endi +if $data(4)[4] != ready then + goto step3 +endi + +sql create mnode on 
dnode 2 +sql create mnode on dnode 3 + +$x = 0 +step31: + $x = $x + 1 + sleep 1000 + if $x == 50 then + return -1 + endi +sql select * from information_schema.ins_mnodes +$leaderNum = 0 +if $data(1)[2] == leader then + $leaderNum = 1 +endi +if $data(2)[2] == leader then + $leaderNum = 1 +endi +if $data(3)[2] == leader then + $leaderNum = 1 +endi +if $leaderNum == 0 then + goto step31 +endi + +print =============== step4: create dnodes - with error +sql_error create mnode on dnode 1 +sql_error create mnode on dnode 2; +sql_error create mnode on dnode 3; +sql_error create mnode on dnode 4; +sql_error create mnode on dnode 5; +sql_error create mnode on dnode 6; + +print =============== step5: drop mnodes - with error +sql_error drop mnode on dnode 1 +sql_error drop mnode on dnode 4 +sql_error drop mnode on dnode 5 +sql_error drop mnode on dnode 6 + +system sh/exec.sh -n dnode2 -s stop +$x = 0 +step5: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +sql select * from information_schema.ins_dnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +print ===> $data30 $data31 $data32 $data33 $data34 $data35 +if $data(1)[4] != ready then + goto step5 +endi +if $data(2)[4] != offline then + goto step5 +endi +if $data(3)[4] != ready then + goto step5 +endi +if $data(4)[4] != ready then + goto step5 +endi + +sql_error drop mnode on dnode 2 + +system sh/exec.sh -n dnode2 -s start +$x = 0 +step51: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +sql select * from information_schema.ins_dnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +print ===> $data30 $data31 $data32 $data33 $data34 $data35 +if $data(1)[4] != ready then + goto step51 +endi +if $data(2)[4] != ready then + goto 
step51 +endi +if $data(3)[4] != ready then + goto step51 +endi +if $data(4)[4] != ready then + goto step51 +endi + +print =============== step6: stop mnode1 +system sh/exec.sh -n dnode1 -s stop +# sql_error drop mnode on dnode 1 + +$x = 0 +step61: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +sql select * from information_schema.ins_mnodes -x step61 +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +$leaderNum = 0 +if $data(2)[2] == leader then + $leaderNum = 1 +endi +if $data(3)[2] == leader then + $leaderNum = 1 +endi +if $leaderNum != 1 then + goto step61 +endi + +print =============== step7: start mnode1 and wait it online +system sh/exec.sh -n dnode1 -s start + +$x = 0 +step71: + $x = $x + 1 + sleep 1000 + if $x == 50 then + return -1 + endi +sql select * from information_schema.ins_dnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +print ===> $data30 $data31 $data32 $data33 $data34 $data35 +if $data(1)[4] != ready then + goto step71 +endi +if $data(2)[4] != ready then + goto step71 +endi +if $data(3)[4] != ready then + goto step71 +endi +if $data(4)[4] != ready then + goto step71 +endi + +print =============== step8: stop mnode1 and drop it +system sh/exec.sh -n dnode1 -s stop + +$x = 0 +step81: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +sql select * from information_schema.ins_mnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +$leaderNum = 0 +if $data(1)[2] == leader then + $leaderNum = 1 +endi +if $data(2)[2] == leader then + $leaderNum = 1 +endi +if $data(3)[2] == leader then + $leaderNum = 1 +endi +if $leaderNum != 1 
then + goto step81 +endi + +print =============== step9: start mnode1 and wait it dropped +print check mnode has leader step9a +$x = 0 +step9a: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +print check mnode leader +sql select * from information_schema.ins_mnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +$leaderNum = 0 +if $data(1)[2] == leader then + $leaderNum = 1 +endi +if $data(2)[2] == leader then + $leaderNum = 1 +endi +if $data(3)[2] == leader then + $leaderNum = 1 +endi +if $leaderNum != 1 then + goto step9a +endi + +print start dnode1 step9b +system sh/exec.sh -n dnode1 -s start +$x = 0 +step9b: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +print check dnode1 ready +sql select * from information_schema.ins_dnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +print ===> $data30 $data31 $data32 $data33 $data34 $data35 +if $data(1)[4] != ready then + goto step9b +endi +if $data(2)[4] != ready then + goto step9b +endi +if $data(3)[4] != ready then + goto step9b +endi +if $data(4)[4] != ready then + goto step9b +endi + +sleep 4000 +print check mnode has leader step9c +$x = 0 +step9c: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +print check mnode leader +sql select * from information_schema.ins_mnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +$leaderNum = 0 +if $data(1)[2] == leader then + $leaderNum = 1 +endi +if $data(2)[2] == leader then + $leaderNum = 1 +endi +if $data(3)[2] == leader then + $leaderNum = 1 +endi +if $leaderNum != 1 then + goto step9c +endi + +print drop mnode step9d +sql drop 
mnode on dnode 1 + +$x = 0 +step9d: + $x = $x + 1 + sleep 1000 + if $x == 20 then + return -1 + endi +print check mnode leader +sql select * from information_schema.ins_mnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +$leaderNum = 0 +if $data(1)[2] == leader then + $leaderNum = 1 +endi +if $data(2)[2] == leader then + $leaderNum = 1 +endi +if $data(3)[2] == leader then + $leaderNum = 1 +endi +if $leaderNum != 1 then + goto step9d +endi +if $rows != 2 then + goto step9d +endi + +print =============== stepa: create mnode1 again +sql create mnode on dnode 1 +$x = 0 +stepa: + $x = $x + 1 + sleep 1000 + if $x == 10 then + return -1 + endi +sql select * from information_schema.ins_mnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +$leaderNum = 0 +if $data(1)[2] == leader then + $leaderNum = 1 +endi +if $data(2)[2] == leader then + $leaderNum = 1 +endi +if $data(3)[2] == leader then + $leaderNum = 1 +endi +if $leaderNum == 0 then + goto stepa +endi +if $leaderNum != 1 then + return -1 +endi + +$x = 0 +stepb: + $x = $x + 1 + sleep 1000 + if $x == 10 then + print ====> dnode not ready! 
+ return -1 + endi +sql select * from information_schema.ins_dnodes +print ===> $data00 $data01 $data02 $data03 $data04 $data05 +print ===> $data10 $data11 $data12 $data13 $data14 $data15 +print ===> $data20 $data21 $data22 $data23 $data24 $data25 +print ===> $data30 $data31 $data32 $data33 $data34 $data35 +if $rows != 4 then + return -1 +endi +if $data(1)[4] != ready then + goto stepb +endi +if $data(2)[4] != ready then + goto stepb +endi +if $data(3)[4] != ready then + goto stepb +endi +if $data(4)[4] != ready then + goto stepb +endi + +system sh/exec.sh -n dnode1 -s stop +system sh/exec.sh -n dnode2 -s stop +system sh/exec.sh -n dnode3 -s stop +system sh/exec.sh -n dnode4 -s stop diff --git a/tests/script/tsim/stream/streamFwcIntervalCheckpoint.sim b/tests/script/tsim/stream/streamFwcIntervalCheckpoint.sim new file mode 100644 index 0000000000..ed72d87e9a --- /dev/null +++ b/tests/script/tsim/stream/streamFwcIntervalCheckpoint.sim @@ -0,0 +1,67 @@ +system sh/stop_dnodes.sh +system sh/deploy.sh -n dnode1 -i 1 + +system sh/cfg.sh -n dnode1 -c checkpointInterval -v 60 + +system sh/exec.sh -n dnode1 -s start +sleep 50 +sql connect + +print step1 +print =============== create database +sql create database test vgroups 4; +sql use test; + +sql create stable st(ts timestamp, a int, b int , c int)tags(ta int,tb int,tc int); +sql create table t1 using st tags(1,1,1); +sql create table t2 using st tags(2,2,2); + +sql create stream streams1 trigger force_window_close IGNORE EXPIRED 1 IGNORE UPDATE 1 into streamt1 as select _wstart, count(a) from st partition by tbname interval(2s); +sql create stream streams2 trigger at_once IGNORE EXPIRED 0 IGNORE UPDATE 0 into streamt2 as select _wstart, count(a) from st interval(2s); + +run tsim/stream/checkTaskStatus.sim + +sleep 70000 + + +print restart taosd 01 ...... 
+ +system sh/stop_dnodes.sh + +system sh/exec.sh -n dnode1 -s start + +run tsim/stream/checkTaskStatus.sim + +sql insert into t1 values(now + 3000a,1,1,1); + +$loop_count = 0 +loop0: + +sleep 2000 + +$loop_count = $loop_count + 1 +if $loop_count == 20 then + return -1 +endi + +print select * from streamt1; +sql select * from streamt1; + +print $data00 $data01 $data02 + +if $rows == 0 then + goto loop0 +endi + +print select * from streamt2; +sql select * from streamt2; + +print $data00 $data01 $data02 + +if $rows == 0 then + goto loop0 +endi + +print end + +system sh/exec.sh -n dnode1 -s stop -x SIGINT diff --git a/tests/system-test/2-query/project_group.py b/tests/system-test/2-query/project_group.py index 19fe8b1cf0..a251854213 100644 --- a/tests/system-test/2-query/project_group.py +++ b/tests/system-test/2-query/project_group.py @@ -15,6 +15,30 @@ class TDTestCase: self.batchNum = 5 self.ts = 1537146000000 + def groupby_value(self): + tdSql.query('select 1 from stb group by now') + tdSql.checkRows(1) + tdSql.checkData(0, 0, 1) + tdSql.query('select 1 from stb group by "1"') + tdSql.checkRows(1) + tdSql.checkData(0, 0, 1) + tdSql.query('select count(*) from stb group by now') + tdSql.checkRows(1) + tdSql.checkData(0, 0, 12) + tdSql.query('select count(*) from stb group by now+1') + tdSql.checkRows(1) + tdSql.checkData(0, 0, 12) + tdSql.query('select 1, count(*) from stb group by now, "1"') + tdSql.checkRows(1) + tdSql.checkData(0, 0, 1) + tdSql.checkData(0, 1, 12) + tdSql.query('select count(*) as cc from sta1 as a join sta2 as b on a.ts = b.ts group by now') + tdSql.checkRows(1) + tdSql.checkData(0, 0, 3) + tdSql.query('select a.tbname, count(*) as cc from sta1 as a join sta2 as b on a.ts = b.ts group by a.tbname, "1"') + tdSql.checkRows(1) + tdSql.checkData(0, 1, 3) + def run(self): dbname = "db" tdSql.prepare() @@ -59,6 +83,9 @@ class TDTestCase: tdSql.checkRows(2) tdSql.query('select col1 > 0 and col2 > 0 from stb') tdSql.checkRows(12) + + 
self.groupby_value() + def stop(self): tdSql.close() tdLog.success("%s successfully executed" % __file__)