diff --git a/docs/zh/06-advanced/05-data-in/07-mqtt.md b/docs/zh/06-advanced/05-data-in/07-mqtt.md
index 84f31636b2..eb6753b7b0 100644
--- a/docs/zh/06-advanced/05-data-in/07-mqtt.md
+++ b/docs/zh/06-advanced/05-data-in/07-mqtt.md
@@ -55,7 +55,7 @@ TDengine 可以通过 MQTT 连接器从 MQTT 代理订阅数据并将其写入 T
在 **MQTT 协议** 下拉列表中选择 MQTT 协议版本。有三个选项:`3.1`、`3.1.1`、`5.0`。 默认值为 3.1。
-在 **Client ID** 中填写客户端 ID。建多个任务时,同个 MQTT 地址下,Client ID 必须是唯一的。
+在 **Client ID** 中填写客户端标识,填写后会生成带有 `taosx` 前缀的客户端 ID (例如,如果填写的标识为 `foo`,则生成的客户端 ID 为 `taosxfoo`)。如果打开末尾处的开关,则会把当前任务的任务 ID 拼接到 `taosx` 之后,输入的标识之前(生成的客户端 ID 形如 `taosx100foo`)。连接到同一个 MQTT 地址的所有客户端 ID 必须保证唯一。
在 **Keep Alive** 中输入保持活动间隔。如果代理在保持活动间隔内没有收到来自客户端的任何消息,它将假定客户端已断开连接,并关闭连接。
保持活动间隔是指客户端和代理之间协商的时间间隔,用于检测客户端是否活动。如果客户端在保持活动间隔内没有向代理发送消息,则代理将断开连接。
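
下面用一小段 C 代码示意上述客户端 ID 的拼接规则(`taosx` 前缀、可选的任务 ID、再加上用户填写的标识),该规则同样适用于后文 Kafka 数据源中的 Client ID 与消费者组 ID。示例中的函数名与参数均为假设,并非 taosX 的实际实现,仅用于说明最终生成的 ID 形式。

```c
#include <stdio.h>

/* 仅为示意:按上文规则拼接客户端 ID,函数名与参数为本示例假设,并非 taosX 实现 */
static void build_client_id(char *buf, size_t len, const char *ident, const char *task_id) {
  if (task_id != NULL) {                              /* 打开了拼接任务 ID 的开关 */
    snprintf(buf, len, "taosx%s%s", task_id, ident);  /* 例如 taosx100foo */
  } else {
    snprintf(buf, len, "taosx%s", ident);             /* 例如 taosxfoo */
  }
}

int main(void) {
  char id[64];
  build_client_id(id, sizeof(id), "foo", "100");
  printf("%s\n", id);  /* 输出 taosx100foo */
  return 0;
}
```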
diff --git a/docs/zh/06-advanced/05-data-in/08-kafka.md b/docs/zh/06-advanced/05-data-in/08-kafka.md
index dd5512d615..92bfc031ec 100644
--- a/docs/zh/06-advanced/05-data-in/08-kafka.md
+++ b/docs/zh/06-advanced/05-data-in/08-kafka.md
@@ -60,6 +60,10 @@ TDengine 可以高效地从 Kafka 读取数据并将其写入 TDengine,以实
在 **主题** 中填写要消费的 Topic 名称。可以配置多个 Topic , Topic 之间用逗号分隔。例如:`tp1,tp2`。
+在 **Client ID** 中填写客户端标识,填写后会生成带有 `taosx` 前缀的客户端 ID (例如,如果填写的标识为 `foo`,则生成的客户端 ID 为 `taosxfoo`)。如果打开末尾处的开关,则会把当前任务的任务 ID 拼接到 `taosx` 之后,输入的标识之前(生成的客户端 ID 形如 `taosx100foo`)。连接到同一个 Kafka 集群的所有客户端 ID 必须保证唯一。
+
+在 **消费者组 ID** 中填写消费者组标识,填写后会生成带有 `taosx` 前缀的消费者组 ID (例如,如果填写的标识为 `foo`,则生成的消费者组 ID 为 `taosxfoo`)。如果打开末尾处的开关,则会把当前任务的任务 ID 拼接到 `taosx` 之后,输入的标识之前(生成的消费者组 ID 形如 `taosx100foo`)。
+
在 **Offset** 的下拉列表中选择从哪个 Offset 开始消费数据。有三个选项:`Earliest`、`Latest`、`ByTime(ms)`。 默认值为Earliest。
* Earliest:用于请求最早的 offset。
diff --git a/docs/zh/06-advanced/05-data-in/kafka-06.png b/docs/zh/06-advanced/05-data-in/kafka-06.png
index 5e3d414025..43efe834a6 100644
Binary files a/docs/zh/06-advanced/05-data-in/kafka-06.png and b/docs/zh/06-advanced/05-data-in/kafka-06.png differ
diff --git a/docs/zh/06-advanced/05-data-in/mqtt-05.png b/docs/zh/06-advanced/05-data-in/mqtt-05.png
index e1f6bfd04d..d362e8be86 100644
Binary files a/docs/zh/06-advanced/05-data-in/mqtt-05.png and b/docs/zh/06-advanced/05-data-in/mqtt-05.png differ
diff --git a/docs/zh/08-develop/07-tmq.md b/docs/zh/08-develop/07-tmq.md
index 7eb8628e56..96f34c6d5d 100644
--- a/docs/zh/08-develop/07-tmq.md
+++ b/docs/zh/08-develop/07-tmq.md
@@ -34,13 +34,15 @@ TDengine 消费者的概念跟 Kafka 类似,消费者通过订阅主题来接
| `td.connect.user` | string | 用户名 | |
| `td.connect.pass` | string | 密码 | |
| `td.connect.port` | integer | 服务端的端口号 | |
-| `group.id`              | string   | 消费组 ID,同一消费组共享消费进度 | <br>**必填项**。最大长度:192。<br>每个topic最多可建立100个 consumer group |
-| `client.id`             | string   | 客户端 ID | 最大长度:192。 |
+| `group.id`              | string   | 消费组 ID,同一消费组共享消费进度 | <br>**必填项**。最大长度:192。<br>每个topic最多可建立 100 个 consumer group |
+| `client.id`             | string   | 客户端 ID | 最大长度:192 |
| `auto.offset.reset`      | enum     | 消费组订阅的初始位置 | <br>`earliest`: default(version < 3.2.0.0);从头开始订阅;<br>`latest`: default(version >= 3.2.0.0);仅从最新数据开始订阅;<br>`none`: 没有提交的 offset 无法订阅 |
| `enable.auto.commit` | boolean | 是否启用消费位点自动提交,true: 自动提交,客户端应用无需commit;false:客户端应用需要自行commit | 默认值为 true |
| `auto.commit.interval.ms` | integer | 消费记录自动提交消费位点时间间隔,单位为毫秒 | 默认值为 5000 |
| `msg.with.table.name` | boolean | 是否允许从消息中解析表名, 不适用于列订阅(列订阅时可将 tbname 作为列写入 subquery 语句)(从3.2.0.0版本该参数废弃,恒为true) | 默认关闭 |
| `enable.replay` | boolean | 是否开启数据回放功能 | 默认关闭 |
+| `session.timeout.ms` | integer | consumer 心跳丢失后超时时间,超时后会触发 rebalance 逻辑,成功后该 consumer 会被删除(从3.3.3.0版本开始支持) | 默认值为 12000,取值范围 [6000, 1800000] |
+| `max.poll.interval.ms`  | integer   | consumer poll 拉取数据间隔的最长时间,超过该时间,会认为该 consumer 离线,触发 rebalance 逻辑,成功后该 consumer 会被删除(从3.3.3.0版本开始支持) | 默认值为 300000,取值范围 [1000, INT32_MAX] |
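
以 C 语言连接器为例,下面的片段示意如何通过 `tmq_conf_set` 设置上表中的参数(包括 3.3.3.0 起新增的 `session.timeout.ms` 与 `max.poll.interval.ms`)并创建消费者;其中连接地址、消费组名等取值仅为示例,请按实际环境调整。

```c
#include <stdio.h>
#include "taos.h"

/* 以 C 连接器为例:设置上表中的部分参数后创建消费者(参数取值仅为示例) */
static tmq_t *build_consumer(void) {
  tmq_conf_t *conf = tmq_conf_new();
  if (conf == NULL) return NULL;

  tmq_conf_set(conf, "td.connect.ip", "localhost");
  tmq_conf_set(conf, "group.id", "group1");             /* 必填,最大长度 192 */
  tmq_conf_set(conf, "client.id", "client1");           /* 最大长度 192 */
  tmq_conf_set(conf, "auto.offset.reset", "earliest");
  tmq_conf_set(conf, "enable.auto.commit", "true");
  tmq_conf_set(conf, "session.timeout.ms", "12000");    /* 3.3.3.0 起支持 */
  tmq_conf_set(conf, "max.poll.interval.ms", "300000"); /* 3.3.3.0 起支持 */

  char errstr[256] = {0};
  tmq_t *tmq = tmq_consumer_new(conf, errstr, sizeof(errstr));
  if (tmq == NULL) {
    printf("failed to create consumer: %s\n", errstr);
  }
  tmq_conf_destroy(conf);
  return tmq;
}
```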
下面是各语言连接器创建参数:
diff --git a/docs/zh/14-reference/03-taos-sql/22-meta.md b/docs/zh/14-reference/03-taos-sql/22-meta.md
index 397b3d3362..b61e06f759 100644
--- a/docs/zh/14-reference/03-taos-sql/22-meta.md
+++ b/docs/zh/14-reference/03-taos-sql/22-meta.md
@@ -296,8 +296,10 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
| 2 | consumer_group | VARCHAR(193) | 订阅者的消费者组 |
| 3 | vgroup_id | INT | 消费者被分配的 vgroup id |
| 4 | consumer_id | BIGINT | 消费者的唯一 id |
-| 5 | offset | VARCHAR(64) | 消费者的消费进度 |
-| 6 | rows | BIGINT | 消费者的消费的数据条数 |
+| 5 | user | VARCHAR(24) | 消费者登录的用户名 |
+| 6 | fqdn | VARCHAR(128) | 消费者所在机器的 fqdn |
+| 7 | offset | VARCHAR(64) | 消费者的消费进度 |
+| 8 | rows | BIGINT | 消费者的消费的数据条数 |
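
新增的 `user` 与 `fqdn` 列可以像其他列一样直接查询,下面给出一个基于 C 连接器的查询示意(假设 `conn` 是已建立的连接):

```c
#include <stdio.h>
#include "taos.h"

/* 查询 INS_SUBSCRIPTIONS,结果集中现在包含新增的 user、fqdn 列 */
static void show_subscriptions(TAOS *conn) {
  TAOS_RES *res = taos_query(conn, "SELECT * FROM information_schema.ins_subscriptions");
  if (taos_errno(res) != 0) {
    printf("query failed: %s\n", taos_errstr(res));
  } else {
    TAOS_ROW row;
    while ((row = taos_fetch_row(res)) != NULL) {
      /* 按需处理每一行,字段顺序与上表一致 */
    }
  }
  taos_free_result(res);
}
```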
## INS_STREAMS
diff --git a/docs/zh/26-tdinternal/07-topic.md b/docs/zh/26-tdinternal/07-topic.md
index 3fe12c943d..4299b3d503 100644
--- a/docs/zh/26-tdinternal/07-topic.md
+++ b/docs/zh/26-tdinternal/07-topic.md
@@ -61,37 +61,37 @@ TDengine 会为 WAL 文件自动创建索引以支持快速随机访问。通过

-在客户端成功建立与服务器的连接之后,用户须首先指定消费组和主题,以创建相应的消费者实例。随后,客户端便会向服务器提交订阅请求。此刻,消费者的状态被标记为 modify,表示正处于配置阶段。消费者随后会定期向服务器发送请求,以检索并获取待消费的 vnode 列表,直至服务器为其分配相应的 vnode。一旦分配完成,消费者的状态便更新为 ready,标志着订阅流程的成功完成。此刻,客户端便可正式启动向 vnode 发送消费数据请求的过程。
+在客户端成功建立与服务器的连接之后,用户须首先指定消费组和主题,以创建相应的消费者实例。随后,客户端便会向服务器提交订阅请求。此刻,消费者的状态被标记为 rebalancing,表示正处于 rebalance 阶段。消费者随后会定期向服务器发送请求,以检索并获取待消费的 vnode 列表,直至服务器为其分配相应的 vnode。一旦分配完成,消费者的状态便更新为 ready,标志着订阅流程的成功完成。此刻,客户端便可正式启动向 vnode 发送消费数据请求的过程。
在消费数据的过程中,消费者会不断地向每个分配到的 vnode 发送请求,以尝试获取新的数据。一旦收到数据,消费者在完成消费后会继续向该 vnode 发送请求,以便持续消费。若在预设时间内未收到任何数据,消费者便会在 vnode 上注册一个消费 handle。这样一来,一旦 vnode 上有新数据产生,便会立即推送给消费者,从而确保数据消费的即时性,并有效减少消费者频繁主动拉取数据所导致的性能损耗。因此,可以看出,消费者从 vnode 获取数据的方式是一种拉取(pull)与推送(push)相结合的高效模式。
-消费者在收到数据时,会同时收到数据的版本号,并将其记录为当前在每个 vnode上的消费进度。这一进度仅在消费者内部以内存形式存储,确保仅对该消费者有效。这种设计保证了消费者在每次启动时能够从上次的消费中断处继续,避免了数据的重复处理。然而,若消费者需要退出并希望在之后恢复上次的消费进度,就必须在退出前将消费进度提交至服务器,执行所谓的 commit 操作。这一操作会将消费进度在服务器进行持久化存储,支持自动提交或手动提交两种方式。
+消费者在收到数据时,会同时收到数据的版本号,并将其记录为当前在每个 vnode 上的消费进度。这一进度仅在消费者内部以内存形式存储,确保仅对该消费者有效。这种设计保证了消费者在每次启动时能够从上次的消费中断处继续,避免了数据的重复处理。然而,若消费者需要退出并希望在之后恢复上次的消费进度,就必须在退出前将消费进度提交至服务器,执行所谓的 commit 操作。这一操作会将消费进度在服务器进行持久化存储,支持自动提交或手动提交两种方式。
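
以 C 语言连接器为例,手动提交可以在处理完一批数据后调用 `tmq_commit_sync` 完成,下面是一个简化示意(假设消费者 `tmq` 已创建并完成订阅):

```c
#include <stdio.h>
#include "taos.h"

/* 简化示意:拉取一批数据并手动提交消费进度 */
static void poll_and_commit(tmq_t *tmq) {
  TAOS_RES *msg = tmq_consumer_poll(tmq, 1000);  /* 最多等待 1000ms */
  if (msg != NULL) {
    /* ...消费消息... */
    int32_t code = tmq_commit_sync(tmq, msg);    /* 将消费进度持久化到服务器 */
    if (code != 0) {
      printf("commit failed: %s\n", tmq_err2str(code));
    }
    taos_free_result(msg);
  }
}
```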
-此外,为了维持消费者的活跃状态,客户端还实施了心跳保活机制。通过定期向服务器发送心跳信号,消费者能够向服务器证明自己仍然在线。若服务器在一定时间内未收到消费者的心跳,便会将其标记为 lost 状态,即认为消费者已离线。服务器依赖心跳机制来监控所有消费者的状态,进而有效地管理整个消费者群体。
+此外,为了维持消费者的活跃状态,客户端还实施了心跳保活机制。通过定期向服务器发送心跳信号,消费者能够向服务器证明自己仍然在线。若服务器在一定时间内未收到消费者的心跳,便会认为消费者已离线。对于超过一定时间(可通过参数控制)未拉取数据的 consumer,服务器同样会将其标记为离线,并从消费组中删除该消费者。服务器依赖心跳机制来监控所有消费者的状态,进而有效地管理整个消费者群体。
-mnode 主要负责处理订阅过程中的控制消息,包括创建和删除主题、订阅消息、查询 endpoint 消息以及心跳消息等。vnode 则专注于处理消费消息和 commit 消息。当mnode 收到消费者的订阅消息时,如果该消费者尚未订阅过,其状态将被设置为 modify。如果消费者已经订阅过,但订阅的主题发生变更,消费者的状态同样会被设置为 modify。等待再平衡的计时器到来时,mnode 会对 modify 状态的消费者执行再平衡操作,将心跳超过固定时间的消费者设置为 lost 状态,并对关闭或 lost 状态超过一定时间的消费者进行删除操作。
+mnode 主要负责处理订阅过程中的控制消息,包括创建和删除主题、订阅消息、查询 endpoint 消息以及心跳消息等。vnode 则专注于处理消费消息和 commit 消息。当 mnode 收到消费者的订阅消息时,如果该消费者尚未订阅过,其状态将被设置为 rebalancing。如果消费者已经订阅过,但订阅的主题发生变更,消费者的状态同样会被设置为 rebalancing。然后 mnode 会对 rebalancing 状态的消费者执行 rebalance 操作。心跳超过固定时间的消费者或主动关闭的消费者将被删除。
消费者会定期向 mnode 发送查询 endpoint 消息,以获取再平衡后的最新 vnode 分配结果。同时,消费者还会定期发送心跳消息,通知 mnode 自身处于活跃状态。此外,消费者的一些信息也会通过心跳消息上报至 mnode,用户可以查询 mnode 上的这些信息以监测各个消费者的状态。这种设计有助于实现对消费者的有效管理和监控。
-## 再平衡过程
+## rebalance 过程
-每个主题的数据可能分散在多个 vnode 上。服务器通过执行再平衡过程,将这些vnode 合理地分配给各个消费者,确保数据的均匀分布和高效消费。
+每个主题的数据可能分散在多个 vnode 上。服务器通过执行 rebalance 过程,将这些 vnode 合理地分配给各个消费者,确保数据的均匀分布和高效消费。
-如下图所示,c1 表示消费者 1,c2 表示消费者 2,g1 表示消费组 1。起初 g1 中只有 c1 消费数据,c1 发送订阅信息到 mnode,mnode 把数据所在的所有 4 个vnode 分配给 c1。当 c2 增加到 g1 后,c2 将订阅信息发送给 mnode,mnode 将在下一次再平衡计时器到来时检测到这个 g1 需要重新分配,就会启动再平衡过程,随后 c2 分得其中两个 vnode 消费。分配信息还会通过 mnode 发送给 vnode,同时 c1 和 c2 也会获取自己消费的 vnode 信息并开始消费。
+如下图所示,c1 表示消费者 1,c2 表示消费者 2,g1 表示消费组 1。起初 g1 中只有 c1 消费数据,c1 发送订阅信息到 mnode,mnode 把数据所在的所有 4 个 vnode 分配给 c1。当 c2 增加到 g1 后,c2 将订阅信息发送给 mnode,mnode 检测到这个 g1 需要重新分配,就会启动 rebalance 过程,随后 c2 分得其中两个 vnode 消费。分配信息还会通过 mnode 发送给 vnode,同时 c1 和 c2 也会获取自己消费的 vnode 信息并开始消费。
-
+
再平衡计时器每 2s 检测一次是否需要再平衡。在再平衡过程中,如果消费者获取的状态是 not ready,则不能进行消费。只有再平衡正常结束后,消费者获取分配 vnode 的offset 后才可正常消费,否则消费者会重试指定次数后报错。
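
rebalance 的目标是把主题涉及的 vnode 尽量均匀地分配给消费组内的各个消费者。下面是一个概念性的轮询分配示意,仅用于说明"均匀分配"的含义,并非 mnode 的实际实现:

```c
#include <stdio.h>

/* 概念示意:将 numVnodes 个 vnode 轮询分配给 numConsumers 个消费者,非 mnode 实际实现 */
static void assign_vnodes(int numVnodes, int numConsumers) {
  for (int v = 0; v < numVnodes; v++) {
    printf("vnode %d -> consumer %d\n", v, v % numConsumers);
  }
}

int main(void) {
  assign_vnodes(4, 2);  /* 对应上图:4 个 vnode 在两个消费者之间各分得 2 个 */
  return 0;
}
```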
## 消费者状态处理
-消费者的状态转换过程如下图所示。初始状态下,刚发起订阅的消费者处于modify 状态,此时客户端获取的消费者状态为 not ready,表明消费者尚未准备好进行数据消费。一旦 mnode 在再平衡计时器中检测到处于 modify 状态的消费者,便会启动再平衡过程。再平衡成功后,消费者的状态将转变为 ready,表示消费者已准备就绪。随后,当消费者通过定时的查询 endpoint 消息获取自身的 ready 状态以及分配的 vnode 列表后,便能正式开始消费数据。
+消费者的状态转换过程如下图所示。初始状态下,刚发起订阅的消费者处于 rebalancing 状态,表明消费者尚未准备好进行数据消费。一旦 mnode 检测到处于 rebalancing 状态的消费者,便会启动 rebalance 过程,成功后,消费者的状态将转变为 ready,表示消费者已准备就绪。随后,当消费者通过定时的查询 endpoint 消息获取自身的 ready 状态以及分配的 vnode 列表后,便能正式开始消费数据。

-若消费者的心跳丢失时间超过 12s,经过再平衡过程,其状态将被更新为 lost,表明消费者被认为已离线。如果心跳丢失时间超过 1 天,消费者的状态将进一步转变为 clear,此时消费者将被系统删除。然而,如果在上述过程中消费者重新发送心跳信号,其状态将恢复为 modify,并重新进入下一轮的再平衡过程。
+若消费者的心跳丢失时间超过 12s,经过 rebalance 过程,其状态将被更新为 clear,然后消费者将被系统删除。
-当消费者主动退出时,会发送 unsubscribe 消息。该消息将清除消费者订阅的所有主题,并将消费者的状态设置为 modify。随后,在再平衡过程中,一旦消费者的状态变为ready 且无订阅的主题,mnode 将在再平衡的计时器中检测到此状态变化,并据此删除该消费者。这一系列措施确保了消费者退出的有序性和系统的稳定性。
+当消费者主动退出时,会发送 unsubscribe 消息。该消息将清除消费者订阅的所有主题,并将消费者的状态设置为 rebalancing。随后,mnode 检测到处于 rebalancing 状态的消费者,便会启动 rebalance 过程;成功后,其状态将被更新为 clear,消费者随即被系统删除。这一系列措施确保了消费者退出的有序性和系统的稳定性。
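
上述状态流转可以抽象为一个很小的状态机,下面用 C 枚举做一个概念性示意;其中的状态名与事件名仅为说明,并非 mnode 源码:

```c
/* 概念示意:消费者状态流转(rebalancing -> ready -> clear),并非 mnode 源码 */
typedef enum { CONSUMER_REBALANCING, CONSUMER_READY, CONSUMER_CLEAR } EConsumerStatus;
typedef enum { EVT_SUBSCRIBE, EVT_REBALANCE_DONE, EVT_HEARTBEAT_LOST, EVT_UNSUBSCRIBE } EConsumerEvent;

static EConsumerStatus next_status(EConsumerStatus cur, EConsumerEvent evt) {
  switch (evt) {
    case EVT_SUBSCRIBE:      return CONSUMER_REBALANCING;  /* 发起订阅或订阅主题变更 */
    case EVT_REBALANCE_DONE: return cur == CONSUMER_REBALANCING ? CONSUMER_READY : cur;
    case EVT_HEARTBEAT_LOST: return CONSUMER_CLEAR;        /* 心跳丢失超时(默认 12s),随后被删除 */
    case EVT_UNSUBSCRIBE:    return CONSUMER_REBALANCING;  /* 清空订阅,经 rebalance 后转为 clear 并删除 */
    default:                 return cur;
  }
}
```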
## 消费数据
diff --git a/docs/zh/26-tdinternal/consumer.png b/docs/zh/26-tdinternal/consumer.png
index cc996a431a..0c9a061107 100644
Binary files a/docs/zh/26-tdinternal/consumer.png and b/docs/zh/26-tdinternal/consumer.png differ
diff --git a/docs/zh/26-tdinternal/offset.png b/docs/zh/26-tdinternal/offset.png
index d8146b8d50..0ddb005d66 100644
Binary files a/docs/zh/26-tdinternal/offset.png and b/docs/zh/26-tdinternal/offset.png differ
diff --git a/docs/zh/26-tdinternal/rebalance.png b/docs/zh/26-tdinternal/rebalance.png
index 216ba5ec3d..fbc2147c91 100644
Binary files a/docs/zh/26-tdinternal/rebalance.png and b/docs/zh/26-tdinternal/rebalance.png differ
diff --git a/docs/zh/26-tdinternal/statetransition.png b/docs/zh/26-tdinternal/statetransition.png
index 1ec8ae22f7..dd2b472890 100644
Binary files a/docs/zh/26-tdinternal/statetransition.png and b/docs/zh/26-tdinternal/statetransition.png differ
diff --git a/docs/zh/26-tdinternal/topic.png b/docs/zh/26-tdinternal/topic.png
index 82f3df6b73..d9306cdb74 100644
Binary files a/docs/zh/26-tdinternal/topic.png and b/docs/zh/26-tdinternal/topic.png differ
diff --git a/docs/zh/26-tdinternal/topicarch.png b/docs/zh/26-tdinternal/topicarch.png
index 2442158ebb..1bb4bb3014 100644
Binary files a/docs/zh/26-tdinternal/topicarch.png and b/docs/zh/26-tdinternal/topicarch.png differ
diff --git a/include/libs/audit/audit.h b/include/libs/audit/audit.h
index 4fa69f1b4f..2e786ab2b3 100644
--- a/include/libs/audit/audit.h
+++ b/include/libs/audit/audit.h
@@ -49,6 +49,7 @@ typedef struct {
} SAuditRecord;
int32_t auditInit(const SAuditCfg *pCfg);
+void auditSetDnodeId(int32_t dnodeId);
void auditCleanup();
int32_t auditSend(SJson *pJson);
void auditRecord(SRpcMsg *pReq, int64_t clusterId, char *operation, char *target1, char *target2,
diff --git a/include/libs/monitor/monitor.h b/include/libs/monitor/monitor.h
index 96ac53ef4e..636e7a3143 100644
--- a/include/libs/monitor/monitor.h
+++ b/include/libs/monitor/monitor.h
@@ -218,6 +218,7 @@ typedef struct {
} SDmNotifyHandle;
int32_t monInit(const SMonCfg *pCfg);
+void monSetDnodeId(int32_t dnodeId);
void monInitVnode();
void monCleanup();
void monRecordLog(int64_t ts, ELogLevel level, const char *content);
diff --git a/include/libs/transport/thttp.h b/include/libs/transport/thttp.h
index a2f6b5ac8b..9f635b8523 100644
--- a/include/libs/transport/thttp.h
+++ b/include/libs/transport/thttp.h
@@ -28,9 +28,11 @@ typedef enum { HTTP_GZIP, HTTP_FLAT } EHttpCompFlag;
int32_t taosSendHttpReport(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
EHttpCompFlag flag);
+int32_t taosSendHttpReportWithQID(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
+ EHttpCompFlag flag, const char* qid);
int64_t taosInitHttpChan();
int32_t taosSendHttpReportByChan(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
- EHttpCompFlag flag, int64_t chanId);
+ EHttpCompFlag flag, int64_t chanId, const char* qid);
void taosDestroyHttpChan(int64_t chanId);
#ifdef __cplusplus
diff --git a/include/util/tpagedbuf.h b/include/util/tpagedbuf.h
index 7f3bd29694..71cee62d2e 100644
--- a/include/util/tpagedbuf.h
+++ b/include/util/tpagedbuf.h
@@ -149,7 +149,7 @@ void setBufPageDirty(void* pPage, bool dirty);
* Set the compress/ no-compress flag for paged buffer, when flushing data in disk.
* @param pBuf
*/
-void setBufPageCompressOnDisk(SDiskbasedBuf* pBuf, bool comp);
+int32_t setBufPageCompressOnDisk(SDiskbasedBuf* pBuf, bool comp);
/**
* Set the pageId page buffer is not need
diff --git a/include/util/tuuid.h b/include/util/tuuid.h
index 315c2ad497..a4e1059371 100644
--- a/include/util/tuuid.h
+++ b/include/util/tuuid.h
@@ -37,3 +37,14 @@ int32_t tGenIdPI32(void);
* @return
*/
int64_t tGenIdPI64(void);
+
+/**
+ * Generate a qid
+ * +--------+--------+---------------+-------+
+ * | nodeid |   0    | serial number |   0   |
+ * +--------+--------+---------------+-------+
+ * | 8 bit  | 16 bit |    32 bit     | 8 bit |
+ * +--------+--------+---------------+-------+
+ * @return
+ */
+int64_t tGenQid64(int8_t dnodeId);
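
The comment above documents the bit layout of the generated qid. The sketch below shows how a 64-bit value with that layout could be packed; it is illustrative only and is not the actual tGenQid64() implementation (which also has to derive the serial number, e.g. atomically).

```c
#include <stdint.h>

/* Illustrative only: pack a 64-bit qid from an 8-bit dnode id and a 32-bit serial
 * number according to the layout documented above. Not the real tGenQid64(). */
static int64_t packQid(int8_t dnodeId, uint32_t serial) {
  uint64_t qid = 0;
  qid |= (uint64_t)(uint8_t)dnodeId << 56;  /* bits 63..56: node id (8 bit)        */
  /* bits 55..40 are left as zero (16 bit) */
  qid |= (uint64_t)serial << 8;             /* bits 39..8 : serial number (32 bit) */
  /* bits 7..0 are left as zero (8 bit)    */
  return (int64_t)qid;
}
```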
diff --git a/source/client/src/clientStmt2.c b/source/client/src/clientStmt2.c
index 51d3df5de8..a9738e4e9c 100644
--- a/source/client/src/clientStmt2.c
+++ b/source/client/src/clientStmt2.c
@@ -1001,7 +1001,7 @@ int stmtSetTbTags2(TAOS_STMT2* stmt, TAOS_STMT2_BIND* tags) {
STMT_ERR_RET(TSDB_CODE_APP_ERROR);
}
- if (pStmt->bInfo.inExecCache && (!pStmt->sql.autoCreateTbl || (*pDataBlock)->pData->pCreateTbReq)) {
+ if (pStmt->bInfo.inExecCache && !pStmt->sql.autoCreateTbl) {
return TSDB_CODE_SUCCESS;
}
diff --git a/source/common/src/tdatablock.c b/source/common/src/tdatablock.c
index 7e957357a9..d8a66f82bf 100644
--- a/source/common/src/tdatablock.c
+++ b/source/common/src/tdatablock.c
@@ -2620,11 +2620,15 @@ int32_t dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** pDataBuf
}
len += snprintf(dumpBuf + len, size - len, "%s |end\n", flag);
- *pDataBuf = dumpBuf;
- dumpBuf = NULL;
_exit:
- if (dumpBuf) {
- taosMemoryFree(dumpBuf);
+ if (code == TSDB_CODE_SUCCESS) {
+ *pDataBuf = dumpBuf;
+ dumpBuf = NULL;
+ } else {
+ uError("%s failed at line %d since %s", __func__, __LINE__, tstrerror(code));
+ if (dumpBuf) {
+ taosMemoryFree(dumpBuf);
+ }
}
return code;
}
diff --git a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c
index f9ae94c53f..ae12bf6c99 100644
--- a/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c
+++ b/source/dnode/mgmt/mgmt_dnode/src/dmHandle.c
@@ -14,7 +14,9 @@
*/
#define _DEFAULT_SOURCE
+#include "audit.h"
#include "dmInt.h"
+#include "monitor.h"
#include "systable.h"
#include "tchecksum.h"
@@ -27,6 +29,8 @@ static void dmUpdateDnodeCfg(SDnodeMgmt *pMgmt, SDnodeCfg *pCfg) {
(void)taosThreadRwlockWrlock(&pMgmt->pData->lock);
pMgmt->pData->dnodeId = pCfg->dnodeId;
pMgmt->pData->clusterId = pCfg->clusterId;
+ monSetDnodeId(pCfg->dnodeId);
+ auditSetDnodeId(pCfg->dnodeId);
code = dmWriteEps(pMgmt->pData);
if (code != 0) {
dInfo("failed to set local info, dnodeId:%d clusterId:%" PRId64 " reason:%s", pCfg->dnodeId, pCfg->clusterId,
diff --git a/source/dnode/mgmt/node_mgmt/src/dmNodes.c b/source/dnode/mgmt/node_mgmt/src/dmNodes.c
index f67901d6d5..3cb9030f60 100644
--- a/source/dnode/mgmt/node_mgmt/src/dmNodes.c
+++ b/source/dnode/mgmt/node_mgmt/src/dmNodes.c
@@ -15,6 +15,9 @@
#define _DEFAULT_SOURCE
#include "dmMgmt.h"
+#include "dmUtil.h"
+#include "monitor.h"
+#include "audit.h"
int32_t dmOpenNode(SMgmtWrapper *pWrapper) {
int32_t code = 0;
@@ -98,6 +101,9 @@ static int32_t dmOpenNodes(SDnode *pDnode) {
}
}
+ auditSetDnodeId(dmGetDnodeId(&pDnode->data));
+ monSetDnodeId(dmGetDnodeId(&pDnode->data));
+
dmSetStatus(pDnode, DND_STAT_RUNNING);
return 0;
}
diff --git a/source/dnode/mgmt/node_util/inc/dmUtil.h b/source/dnode/mgmt/node_util/inc/dmUtil.h
index 425f10392f..b5842acbad 100644
--- a/source/dnode/mgmt/node_util/inc/dmUtil.h
+++ b/source/dnode/mgmt/node_util/inc/dmUtil.h
@@ -217,6 +217,7 @@ int32_t dmInitDndInfo(SDnodeData *pData);
// dmEps.c
int32_t dmGetDnodeSize(SDnodeData *pData);
+int32_t dmGetDnodeId(SDnodeData *pData);
int32_t dmReadEps(SDnodeData *pData);
int32_t dmWriteEps(SDnodeData *pData);
void dmUpdateEps(SDnodeData *pData, SArray *pDnodeEps);
diff --git a/source/dnode/mgmt/node_util/src/dmUtil.c b/source/dnode/mgmt/node_util/src/dmUtil.c
index f73f041f4a..b50c746c92 100644
--- a/source/dnode/mgmt/node_util/src/dmUtil.c
+++ b/source/dnode/mgmt/node_util/src/dmUtil.c
@@ -88,3 +88,5 @@ void dmGetMonitorSystemInfo(SMonSysInfo *pInfo) {
}
return;
}
+
+int32_t dmGetDnodeId(SDnodeData *pData) { return pData->dnodeId; }
\ No newline at end of file
diff --git a/source/dnode/mnode/impl/src/mndAcct.c b/source/dnode/mnode/impl/src/mndAcct.c
index 425624d717..6fecd8d089 100644
--- a/source/dnode/mnode/impl/src/mndAcct.c
+++ b/source/dnode/mnode/impl/src/mndAcct.c
@@ -81,7 +81,7 @@ static int32_t mndCreateDefaultAcct(SMnode *pMnode) {
code = terrno;
TAOS_RETURN(code);
}
- (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+ TAOS_CHECK_RETURN(sdbSetRawStatus(pRaw, SDB_STATUS_READY));
mInfo("acct:%s, will be created when deploying, raw:%p", acctObj.acct, pRaw);
diff --git a/source/dnode/mnode/impl/src/mndArbGroup.c b/source/dnode/mnode/impl/src/mndArbGroup.c
index 1c9dc4c77c..61dd13eb45 100644
--- a/source/dnode/mnode/impl/src/mndArbGroup.c
+++ b/source/dnode/mnode/impl/src/mndArbGroup.c
@@ -256,7 +256,9 @@ static int32_t mndArbGroupActionUpdate(SSdb *pSdb, SArbGroup *pOld, SArbGroup *p
_OVER:
(void)taosThreadMutexUnlock(&pOld->mutex);
- (void)taosHashRemove(arbUpdateHash, &pOld->vgId, sizeof(int32_t));
+ if (taosHashRemove(arbUpdateHash, &pOld->vgId, sizeof(int32_t)) != 0) {
+ mError("arbgroup:%d, failed to remove from arbUpdateHash", pOld->vgId);
+ }
return 0;
}
@@ -451,7 +453,7 @@ static int32_t mndProcessArbHbTimer(SRpcMsg *pReq) {
int64_t mndTerm = mndGetTerm(pMnode);
if (mndIsDnodeOnline(pDnode, nowMs)) {
- (void)mndSendArbHeartBeatReq(pDnode, arbToken, mndTerm, hbMembers);
+ TAOS_CHECK_RETURN(mndSendArbHeartBeatReq(pDnode, arbToken, mndTerm, hbMembers));
}
mndReleaseDnode(pMnode, pDnode);
@@ -684,7 +686,7 @@ static int32_t mndProcessArbCheckSyncTimer(SRpcMsg *pReq) {
sdbRelease(pSdb, pArbGroup);
}
- (void)mndPullupArbUpdateGroupBatch(pMnode, pUpdateArray);
+ TAOS_CHECK_RETURN(mndPullupArbUpdateGroupBatch(pMnode, pUpdateArray));
taosArrayDestroy(pUpdateArray);
return 0;
@@ -795,7 +797,9 @@ _OVER:
if (ret != 0) {
for (size_t i = 0; i < sz; i++) {
SArbGroup *pNewGroup = taosArrayGet(newGroupArray, i);
- (void)taosHashRemove(arbUpdateHash, &pNewGroup->vgId, sizeof(pNewGroup->vgId));
+ if (taosHashRemove(arbUpdateHash, &pNewGroup->vgId, sizeof(pNewGroup->vgId)) != 0) {
+ mError("failed to remove vgId:%d from arbUpdateHash", pNewGroup->vgId);
+ }
}
}
@@ -839,7 +843,9 @@ static int32_t mndProcessArbUpdateGroupBatchReq(SRpcMsg *pReq) {
SArbGroup *pOldGroup = sdbAcquire(pMnode->pSdb, SDB_ARBGROUP, &newGroup.vgId);
if (!pOldGroup) {
mInfo("vgId:%d, arb skip to update arbgroup, since no obj found", newGroup.vgId);
- (void)taosHashRemove(arbUpdateHash, &newGroup.vgId, sizeof(int32_t));
+ if (taosHashRemove(arbUpdateHash, &newGroup.vgId, sizeof(int32_t)) != 0) {
+ mError("failed to remove vgId:%d from arbUpdateHash", newGroup.vgId);
+ }
continue;
}
@@ -869,7 +875,9 @@ _OVER:
// failed to update arbgroup
for (size_t i = 0; i < sz; i++) {
SMArbUpdateGroup *pUpdateGroup = taosArrayGet(req.updateArray, i);
- (void)taosHashRemove(arbUpdateHash, &pUpdateGroup->vgId, sizeof(int32_t));
+ if (taosHashRemove(arbUpdateHash, &pUpdateGroup->vgId, sizeof(int32_t)) != 0) {
+ mError("failed to remove vgId:%d from arbUpdateHash", pUpdateGroup->vgId);
+ }
}
}
@@ -1010,7 +1018,7 @@ static int32_t mndUpdateArbHeartBeat(SMnode *pMnode, int32_t dnodeId, SArray *me
sdbRelease(pMnode->pSdb, pGroup);
}
- (void)mndPullupArbUpdateGroupBatch(pMnode, pUpdateArray);
+ TAOS_CHECK_RETURN(mndPullupArbUpdateGroupBatch(pMnode, pUpdateArray));
taosArrayDestroy(pUpdateArray);
return 0;
@@ -1102,7 +1110,7 @@ static int32_t mndProcessArbHbRsp(SRpcMsg *pRsp) {
goto _OVER;
}
- (void)mndUpdateArbHeartBeat(pMnode, arbHbRsp.dnodeId, arbHbRsp.hbMembers);
+ TAOS_CHECK_GOTO(mndUpdateArbHeartBeat(pMnode, arbHbRsp.dnodeId, arbHbRsp.hbMembers), NULL, _OVER);
code = 0;
_OVER:
@@ -1249,6 +1257,8 @@ static int32_t mndRetrieveArbGroups(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock
int32_t numOfRows = 0;
int32_t cols = 0;
SArbGroup *pGroup = NULL;
+ int32_t code = 0;
+ int32_t lino = 0;
while (numOfRows < rows) {
pShow->pIter = sdbFetch(pSdb, SDB_ARBGROUP, pShow->pIter, (void **)&pGroup);
@@ -1264,33 +1274,40 @@ static int32_t mndRetrieveArbGroups(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock
sdbRelease(pSdb, pGroup);
continue;
}
+ char dbNameInGroup[TSDB_DB_FNAME_LEN];
+ strncpy(dbNameInGroup, pVgObj->dbName, TSDB_DB_FNAME_LEN);
+ sdbRelease(pSdb, pVgObj);
+
char dbname[TSDB_DB_NAME_LEN + VARSTR_HEADER_SIZE] = {0};
- STR_WITH_MAXSIZE_TO_VARSTR(dbname, mndGetDbStr(pVgObj->dbName), TSDB_ARB_TOKEN_SIZE + VARSTR_HEADER_SIZE);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)dbname, false);
+ STR_WITH_MAXSIZE_TO_VARSTR(dbname, mndGetDbStr(dbNameInGroup), TSDB_ARB_TOKEN_SIZE + VARSTR_HEADER_SIZE);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)dbname, false), pGroup, &lino, _OVER);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->vgId, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->vgId, false), pGroup, &lino, _OVER);
for (int i = 0; i < TSDB_ARB_GROUP_MEMBER_NUM; i++) {
SArbGroupMember *pMember = &pGroup->members[i];
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pMember->info.dnodeId, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pMember->info.dnodeId, false), pGroup,
+ &lino, _OVER);
}
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->isSync, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->isSync, false), pGroup, &lino, _OVER);
if (pGroup->assignedLeader.dnodeId != 0) {
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->assignedLeader.dnodeId, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->assignedLeader.dnodeId, false),
+ pGroup, &lino, _OVER);
char token[TSDB_ARB_TOKEN_SIZE + VARSTR_HEADER_SIZE] = {0};
STR_WITH_MAXSIZE_TO_VARSTR(token, pGroup->assignedLeader.token, TSDB_ARB_TOKEN_SIZE + VARSTR_HEADER_SIZE);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)token, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)token, false), pGroup, &lino, _OVER);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->assignedLeader.acked, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pGroup->assignedLeader.acked, false),
+ pGroup, &lino, _OVER);
} else {
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetNULL(pColInfo, numOfRows);
@@ -1305,10 +1322,11 @@ static int32_t mndRetrieveArbGroups(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock
(void)taosThreadMutexUnlock(&pGroup->mutex);
numOfRows++;
- sdbRelease(pSdb, pVgObj);
sdbRelease(pSdb, pGroup);
}
+_OVER:
+  if (code != 0) mError("failed to retrieve arb group at line:%d, since %s", lino, tstrerror(code));
pShow->numOfRows += numOfRows;
return numOfRows;
diff --git a/source/dnode/mnode/impl/src/mndDnode.c b/source/dnode/mnode/impl/src/mndDnode.c
index b73d4b61a9..a94a471e4b 100644
--- a/source/dnode/mnode/impl/src/mndDnode.c
+++ b/source/dnode/mnode/impl/src/mndDnode.c
@@ -1694,6 +1694,8 @@ static int32_t mndRetrieveConfigs(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *p
char cfgVals[TSDB_CONFIG_NUMBER][TSDB_CONFIG_VALUE_LEN + 1] = {0};
char *pWrite = NULL;
int32_t cols = 0;
+ int32_t code = 0;
+ int32_t lino = 0;
cfgOpts[totalRows] = "statusInterval";
(void)snprintf(cfgVals[totalRows], TSDB_CONFIG_VALUE_LEN, "%d", tsStatusInterval);
@@ -1741,15 +1743,17 @@ static int32_t mndRetrieveConfigs(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *p
STR_WITH_MAXSIZE_TO_VARSTR(buf, cfgOpts[i], TSDB_CONFIG_OPTION_LEN);
SColumnInfoData *pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)buf, false);
+ TAOS_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)buf, false), &lino, _OVER);
STR_WITH_MAXSIZE_TO_VARSTR(bufVal, cfgVals[i], TSDB_CONFIG_VALUE_LEN);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)bufVal, false);
+ TAOS_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)bufVal, false), &lino, _OVER);
numOfRows++;
}
+_OVER:
+ if (code != 0) mError("failed to retrieve configs at line:%d since %s", lino, tstrerror(code));
pShow->numOfRows += numOfRows;
return numOfRows;
}
@@ -1765,6 +1769,8 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
SDnodeObj *pDnode = NULL;
int64_t curMs = taosGetTimestampMs();
char buf[TSDB_EP_LEN + VARSTR_HEADER_SIZE];
+ int32_t code = 0;
+ int32_t lino = 0;
while (numOfRows < rows) {
pShow->pIter = sdbFetchAll(pSdb, SDB_DNODE, pShow->pIter, (void **)&pDnode, &objStatus, true);
@@ -1774,19 +1780,20 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
cols = 0;
SColumnInfoData *pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->id, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->id, false), pDnode, &lino, _OVER);
STR_WITH_MAXSIZE_TO_VARSTR(buf, pDnode->ep, pShow->pMeta->pSchemas[cols].bytes);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, buf, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, buf, false), pDnode, &lino, _OVER);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
int16_t id = mndGetVnodesNum(pMnode, pDnode->id);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&id, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&id, false), pDnode, &lino, _OVER);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->numOfSupportVnodes, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->numOfSupportVnodes, false), pDnode,
+ &lino, _OVER);
const char *status = "ready";
if (objStatus == SDB_STATUS_CREATING) status = "creating";
@@ -1802,31 +1809,36 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
STR_TO_VARSTR(buf, status);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, buf, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, buf, false), pDnode, &lino, _OVER);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->createdTime, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->createdTime, false), pDnode, &lino,
+ _OVER);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->rebootTime, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->rebootTime, false), pDnode, &lino,
+ _OVER);
char *b = taosMemoryCalloc(VARSTR_HEADER_SIZE + strlen(offlineReason[pDnode->offlineReason]) + 1, 1);
STR_TO_VARSTR(b, online ? "" : offlineReason[pDnode->offlineReason]);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, b, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, b, false), pDnode, &lino, _OVER);
taosMemoryFreeClear(b);
#ifdef TD_ENTERPRISE
STR_TO_VARSTR(buf, pDnode->machineId);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
- (void)colDataSetVal(pColInfo, numOfRows, buf, false);
+ RETRIEVE_CHECK_GOTO(colDataSetVal(pColInfo, numOfRows, buf, false), pDnode, &lino, _OVER);
#endif
numOfRows++;
sdbRelease(pSdb, pDnode);
}
+_OVER:
+ if (code != 0) mError("failed to retrieve dnodes at line:%d since %s", lino, tstrerror(code));
+
pShow->numOfRows += numOfRows;
return numOfRows;
}
diff --git a/source/dnode/mnode/impl/src/mndMnode.c b/source/dnode/mnode/impl/src/mndMnode.c
index 5315416226..0af0312b62 100644
--- a/source/dnode/mnode/impl/src/mndMnode.c
+++ b/source/dnode/mnode/impl/src/mndMnode.c
@@ -95,7 +95,7 @@ static int32_t mndCreateDefaultMnode(SMnode *pMnode) {
if (terrno != 0) code = terrno;
return -1;
}
- (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY);
+ TAOS_CHECK_RETURN(sdbSetRawStatus(pRaw, SDB_STATUS_READY));
mInfo("mnode:%d, will be created when deploying, raw:%p", mnodeObj.id, pRaw);
diff --git a/source/dnode/mnode/impl/src/mndStream.c b/source/dnode/mnode/impl/src/mndStream.c
index 1fb398d070..511cc8f984 100644
--- a/source/dnode/mnode/impl/src/mndStream.c
+++ b/source/dnode/mnode/impl/src/mndStream.c
@@ -2195,8 +2195,9 @@ static int32_t refreshNodeListFromExistedStreams(SMnode *pMnode, SArray *pNodeLi
SNodeEntry entry = {.hbTimestamp = -1, .nodeId = pTask->info.nodeId, .lastHbMsgId = -1};
epsetAssign(&entry.epset, &pTask->info.epSet);
- if (taosHashPut(pHash, &entry.nodeId, sizeof(entry.nodeId), &entry, sizeof(entry)) != 0) {
- mError("failed to put entry into hash map, nodeId:%d", entry.nodeId);
+ int32_t ret = taosHashPut(pHash, &entry.nodeId, sizeof(entry.nodeId), &entry, sizeof(entry));
+ if (ret != 0 && ret != TSDB_CODE_DUP_KEY) {
+      mError("failed to put entry into hash map, nodeId:%d, code:%s", entry.nodeId, tstrerror(ret));
}
}
diff --git a/source/dnode/mnode/impl/src/mndStreamUtil.c b/source/dnode/mnode/impl/src/mndStreamUtil.c
index bad44a8687..5d8ba02781 100644
--- a/source/dnode/mnode/impl/src/mndStreamUtil.c
+++ b/source/dnode/mnode/impl/src/mndStreamUtil.c
@@ -628,6 +628,7 @@ static int32_t doBuildStreamTaskUpdateMsg(void **pBuf, int32_t *pLen, SVgroupCha
code = tEncodeStreamTaskUpdateMsg(&encoder, &req);
if (code == -1) {
tEncoderClear(&encoder);
+ taosMemoryFree(buf);
taosArrayDestroy(req.pNodeList);
return code;
}
@@ -648,21 +649,25 @@ static int32_t doBuildStreamTaskUpdateMsg(void **pBuf, int32_t *pLen, SVgroupCha
static int32_t doSetUpdateTaskAction(SMnode *pMnode, STrans *pTrans, SStreamTask *pTask, SVgroupChangeInfo *pInfo) {
void *pBuf = NULL;
int32_t len = 0;
+ SEpSet epset = {0};
+ bool hasEpset = false;
+
bool unusedRet = streamTaskUpdateEpsetInfo(pTask, pInfo->pUpdateNodeList);
int32_t code = doBuildStreamTaskUpdateMsg(&pBuf, &len, pInfo, pTask->info.nodeId, &pTask->id, pTrans->id);
if (code) {
+ mError("failed to build stream task epset update msg, code:%s", tstrerror(code));
return code;
}
- SEpSet epset = {0};
- bool hasEpset = false;
code = extractNodeEpset(pMnode, &epset, &hasEpset, pTask->id.taskId, pTask->info.nodeId);
if (code != TSDB_CODE_SUCCESS || !hasEpset) {
+ mError("failed to extract epset during create update epset, code:%s", tstrerror(code));
return code;
}
code = setTransAction(pTrans, pBuf, len, TDMT_VND_STREAM_TASK_UPDATE, &epset, 0, TSDB_CODE_VND_INVALID_VGROUP_ID);
if (code != TSDB_CODE_SUCCESS) {
+ mError("failed to create update task epset trans, code:%s", tstrerror(code));
taosMemoryFree(pBuf);
}
diff --git a/source/dnode/mnode/impl/src/mndTelem.c b/source/dnode/mnode/impl/src/mndTelem.c
index f766c00f7d..0022aee619 100644
--- a/source/dnode/mnode/impl/src/mndTelem.c
+++ b/source/dnode/mnode/impl/src/mndTelem.c
@@ -68,54 +68,64 @@ static void mndGetStat(SMnode* pMnode, SMnodeStat* pStat) {
static void mndBuildRuntimeInfo(SMnode* pMnode, SJson* pJson) {
SMnodeStat mstat = {0};
+ int32_t code = 0;
+ int32_t lino = 0;
mndGetStat(pMnode, &mstat);
- (void)tjsonAddDoubleToObject(pJson, "numOfDnode", mstat.numOfDnode);
- (void)tjsonAddDoubleToObject(pJson, "numOfMnode", mstat.numOfMnode);
- (void)tjsonAddDoubleToObject(pJson, "numOfVgroup", mstat.numOfVgroup);
- (void)tjsonAddDoubleToObject(pJson, "numOfDatabase", mstat.numOfDatabase);
- (void)tjsonAddDoubleToObject(pJson, "numOfSuperTable", mstat.numOfSuperTable);
- (void)tjsonAddDoubleToObject(pJson, "numOfChildTable", mstat.numOfChildTable);
- (void)tjsonAddDoubleToObject(pJson, "numOfColumn", mstat.numOfColumn);
- (void)tjsonAddDoubleToObject(pJson, "numOfPoint", mstat.totalPoints);
- (void)tjsonAddDoubleToObject(pJson, "totalStorage", mstat.totalStorage);
- (void)tjsonAddDoubleToObject(pJson, "compStorage", mstat.compStorage);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfDnode", mstat.numOfDnode), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfMnode", mstat.numOfMnode), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfVgroup", mstat.numOfVgroup), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfDatabase", mstat.numOfDatabase), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfSuperTable", mstat.numOfSuperTable), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfChildTable", mstat.numOfChildTable), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfColumn", mstat.numOfColumn), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfPoint", mstat.totalPoints), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "totalStorage", mstat.totalStorage), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "compStorage", mstat.compStorage), &lino, _OVER);
+_OVER:
+ if (code != 0) mError("failed to mndBuildRuntimeInfo at line:%d since %s", lino, tstrerror(code));
}
static char* mndBuildTelemetryReport(SMnode* pMnode) {
char tmp[4096] = {0};
STelemMgmt* pMgmt = &pMnode->telemMgmt;
+ int32_t code = 0;
+ int32_t lino = 0;
SJson* pJson = tjsonCreateObject();
if (pJson == NULL) return NULL;
char clusterName[64] = {0};
if ((terrno = mndGetClusterName(pMnode, clusterName, sizeof(clusterName))) != 0) return NULL;
- (void)tjsonAddStringToObject(pJson, "instanceId", clusterName);
- (void)tjsonAddDoubleToObject(pJson, "reportVersion", 1);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "instanceId", clusterName), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "reportVersion", 1), &lino, _OVER);
if (taosGetOsReleaseName(tmp, NULL, NULL, sizeof(tmp)) == 0) {
- (void)tjsonAddStringToObject(pJson, "os", tmp);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "os", tmp), &lino, _OVER);
}
float numOfCores = 0;
if (taosGetCpuInfo(tmp, sizeof(tmp), &numOfCores) == 0) {
- (void)tjsonAddStringToObject(pJson, "cpuModel", tmp);
- (void)tjsonAddDoubleToObject(pJson, "numOfCpu", numOfCores);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "cpuModel", tmp), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfCpu", numOfCores), &lino, _OVER);
} else {
- (void)tjsonAddDoubleToObject(pJson, "numOfCpu", tsNumOfCores);
+ TAOS_CHECK_GOTO(tjsonAddDoubleToObject(pJson, "numOfCpu", tsNumOfCores), &lino, _OVER);
}
snprintf(tmp, sizeof(tmp), "%" PRId64 " kB", tsTotalMemoryKB);
- (void)tjsonAddStringToObject(pJson, "memory", tmp);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "memory", tmp), &lino, _OVER);
- (void)tjsonAddStringToObject(pJson, "version", version);
- (void)tjsonAddStringToObject(pJson, "buildInfo", buildinfo);
- (void)tjsonAddStringToObject(pJson, "gitInfo", gitinfo);
- (void)tjsonAddStringToObject(pJson, "email", pMgmt->email);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "version", version), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "buildInfo", buildinfo), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "gitInfo", gitinfo), &lino, _OVER);
+ TAOS_CHECK_GOTO(tjsonAddStringToObject(pJson, "email", pMgmt->email), &lino, _OVER);
mndBuildRuntimeInfo(pMnode, pJson);
+_OVER:
+ if (code != 0) {
+    mError("failed to build telemetry report at line:%d, since %s", lino, tstrerror(code));
+ }
char* pCont = tjsonToString(pJson);
tjsonDelete(pJson);
return pCont;
diff --git a/source/dnode/vnode/src/meta/metaCache.c b/source/dnode/vnode/src/meta/metaCache.c
index ef20799283..06576c0671 100644
--- a/source/dnode/vnode/src/meta/metaCache.c
+++ b/source/dnode/vnode/src/meta/metaCache.c
@@ -286,7 +286,7 @@ int32_t metaCacheUpsert(SMeta* pMeta, SMetaInfo* pInfo) {
SMetaCacheEntry* pEntryNew = (SMetaCacheEntry*)taosMemoryMalloc(sizeof(*pEntryNew));
if (pEntryNew == NULL) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
goto _exit;
}
@@ -411,7 +411,7 @@ int32_t metaStatsCacheUpsert(SMeta* pMeta, SMetaStbStats* pInfo) {
SMetaStbStatsEntry* pEntryNew = (SMetaStbStatsEntry*)taosMemoryMalloc(sizeof(*pEntryNew));
if (pEntryNew == NULL) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
goto _exit;
}
@@ -492,7 +492,7 @@ static int checkAllEntriesInCache(const STagFilterResEntry* pEntry, SArray* pInv
LRUHandle* pRes = taosLRUCacheLookup(pCache, buf, len);
if (pRes == NULL) { // remove the item in the linked list
if (taosArrayPush(pInvalidRes, &pNode) == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
} else {
(void)taosLRUCacheRelease(pCache, pRes, false);
@@ -612,7 +612,7 @@ static void freeUidCachePayload(const void* key, size_t keyLen, void* value, voi
static int32_t addNewEntry(SHashObj* pTableEntry, const void* pKey, int32_t keyLen, uint64_t suid) {
STagFilterResEntry* p = taosMemoryMalloc(sizeof(STagFilterResEntry));
if (p == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
p->hitTimes = 0;
diff --git a/source/dnode/vnode/src/meta/metaEntry.c b/source/dnode/vnode/src/meta/metaEntry.c
index 6e94cca390..3b6eaf45d3 100644
--- a/source/dnode/vnode/src/meta/metaEntry.c
+++ b/source/dnode/vnode/src/meta/metaEntry.c
@@ -39,7 +39,7 @@ int meteDecodeColCmprEntry(SDecoder *pDecoder, SMetaEntry *pME) {
uDebug("dencode cols:%d", pWrapper->nCols);
pWrapper->pColCmpr = (SColCmpr *)tDecoderMalloc(pDecoder, pWrapper->nCols * sizeof(SColCmpr));
if (pWrapper->pColCmpr == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
for (int i = 0; i < pWrapper->nCols; i++) {
@@ -53,7 +53,7 @@ static FORCE_INLINE int32_t metatInitDefaultSColCmprWrapper(SDecoder *pDecoder,
SSchemaWrapper *pSchema) {
pCmpr->nCols = pSchema->nCols;
if ((pCmpr->pColCmpr = (SColCmpr *)tDecoderMalloc(pDecoder, pCmpr->nCols * sizeof(SColCmpr))) == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
for (int32_t i = 0; i < pCmpr->nCols; i++) {
@@ -149,7 +149,7 @@ int metaDecodeEntry(SDecoder *pCoder, SMetaEntry *pME) {
} else if (pME->type == TSDB_TSMA_TABLE) {
pME->smaEntry.tsma = tDecoderMalloc(pCoder, sizeof(STSma));
if (!pME->smaEntry.tsma) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
TAOS_CHECK_RETURN(tDecodeTSma(pCoder, pME->smaEntry.tsma, true));
} else {
diff --git a/source/dnode/vnode/src/meta/metaOpen.c b/source/dnode/vnode/src/meta/metaOpen.c
index 1917c09526..591c40332a 100644
--- a/source/dnode/vnode/src/meta/metaOpen.c
+++ b/source/dnode/vnode/src/meta/metaOpen.c
@@ -27,15 +27,15 @@ static int taskIdxKeyCmpr(const void *pKey1, int kLen1, const void *pKey2, int k
static int btimeIdxCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2);
static int ncolIdxCmpr(const void *pKey1, int kLen1, const void *pKey2, int kLen2);
-static int32_t metaInitLock(SMeta *pMeta) {
+static void metaInitLock(SMeta *pMeta) {
TdThreadRwlockAttr attr;
(void)taosThreadRwlockAttrInit(&attr);
(void)taosThreadRwlockAttrSetKindNP(&attr, PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
(void)taosThreadRwlockInit(&pMeta->lock, &attr);
(void)taosThreadRwlockAttrDestroy(&attr);
- return 0;
+ return;
}
-static int32_t metaDestroyLock(SMeta *pMeta) { return taosThreadRwlockDestroy(&pMeta->lock); }
+static void metaDestroyLock(SMeta *pMeta) { (void)taosThreadRwlockDestroy(&pMeta->lock); }
static void metaCleanup(SMeta **ppMeta);
@@ -56,7 +56,7 @@ int32_t metaOpen(SVnode *pVnode, SMeta **ppMeta, int8_t rollback) {
TSDB_CHECK_CODE(code = terrno, lino, _exit);
}
- (void)metaInitLock(pMeta);
+ metaInitLock(pMeta);
pMeta->path = (char *)&pMeta[1];
strcpy(pMeta->path, path);
@@ -188,17 +188,23 @@ int metaAlterCache(SMeta *pMeta, int32_t nPage) {
void metaRLock(SMeta *pMeta) {
metaTrace("meta rlock %p", &pMeta->lock);
- (void)taosThreadRwlockRdlock(&pMeta->lock);
+ if (taosThreadRwlockRdlock(&pMeta->lock) != 0) {
+ metaError("vgId:%d failed to lock %p", TD_VID(pMeta->pVnode), &pMeta->lock);
+ }
}
void metaWLock(SMeta *pMeta) {
metaTrace("meta wlock %p", &pMeta->lock);
- (void)taosThreadRwlockWrlock(&pMeta->lock);
+ if (taosThreadRwlockWrlock(&pMeta->lock) != 0) {
+ metaError("vgId:%d failed to lock %p", TD_VID(pMeta->pVnode), &pMeta->lock);
+ }
}
void metaULock(SMeta *pMeta) {
metaTrace("meta ulock %p", &pMeta->lock);
- (void)taosThreadRwlockUnlock(&pMeta->lock);
+ if (taosThreadRwlockUnlock(&pMeta->lock) != 0) {
+ metaError("vgId:%d failed to unlock %p", TD_VID(pMeta->pVnode), &pMeta->lock);
+ }
}
static void metaCleanup(SMeta **ppMeta) {
@@ -223,7 +229,7 @@ static void metaCleanup(SMeta **ppMeta) {
if (pMeta->pSkmDb) tdbTbClose(pMeta->pSkmDb);
if (pMeta->pTbDb) tdbTbClose(pMeta->pTbDb);
if (pMeta->pEnv) tdbClose(pMeta->pEnv);
- (void)metaDestroyLock(pMeta);
+ metaDestroyLock(pMeta);
taosMemoryFreeClear(*ppMeta);
}
diff --git a/source/dnode/vnode/src/meta/metaQuery.c b/source/dnode/vnode/src/meta/metaQuery.c
index 6123dc7642..3f6d17a5a7 100644
--- a/source/dnode/vnode/src/meta/metaQuery.c
+++ b/source/dnode/vnode/src/meta/metaQuery.c
@@ -353,7 +353,11 @@ int32_t metaTbCursorPrev(SMTbCursor *pTbCur, ETableType jumpTableType) {
tDecoderClear(&pTbCur->mr.coder);
- (void)metaGetTableEntryByVersion(&pTbCur->mr, ((SUidIdxVal *)pTbCur->pVal)[0].version, *(tb_uid_t *)pTbCur->pKey);
+ ret = metaGetTableEntryByVersion(&pTbCur->mr, ((SUidIdxVal *)pTbCur->pVal)[0].version, *(tb_uid_t *)pTbCur->pKey);
+ if (ret < 0) {
+ return ret;
+ }
+
if (pTbCur->mr.me.type == jumpTableType) {
continue;
}
@@ -387,7 +391,10 @@ _query:
SMetaEntry me = {0};
tDecoderInit(&dc, pData, nData);
- (void)metaDecodeEntry(&dc, &me);
+ int32_t code = metaDecodeEntry(&dc, &me);
+ if (code) {
+ goto _err;
+ }
if (me.type == TSDB_SUPER_TABLE) {
if (sver == -1 || sver == me.stbEntry.schemaRow.version) {
pSchema = tCloneSSchemaWrapper(&me.stbEntry.schemaRow);
@@ -502,17 +509,17 @@ int32_t metaResumeCtbCursor(SMCtbCursor *pCtbCur, int8_t first) {
ctbIdxKey.suid = pCtbCur->suid;
ctbIdxKey.uid = INT64_MIN;
int c = 0;
- (void)tdbTbcMoveTo(pCtbCur->pCur, &ctbIdxKey, sizeof(ctbIdxKey), &c);
+ ret = tdbTbcMoveTo(pCtbCur->pCur, &ctbIdxKey, sizeof(ctbIdxKey), &c);
if (c > 0) {
- (void)tdbTbcMoveToNext(pCtbCur->pCur);
+ ret = tdbTbcMoveToNext(pCtbCur->pCur);
}
} else {
int c = 0;
ret = tdbTbcMoveTo(pCtbCur->pCur, pCtbCur->pKey, pCtbCur->kLen, &c);
if (c < 0) {
- (void)tdbTbcMoveToPrev(pCtbCur->pCur);
+ ret = tdbTbcMoveToPrev(pCtbCur->pCur);
} else {
- (void)tdbTbcMoveToNext(pCtbCur->pCur);
+ ret = tdbTbcMoveToNext(pCtbCur->pCur);
}
}
}
@@ -570,9 +577,9 @@ SMStbCursor *metaOpenStbCursor(SMeta *pMeta, tb_uid_t suid) {
}
// move to the suid
- (void)tdbTbcMoveTo(pStbCur->pCur, &suid, sizeof(suid), &c);
+ ret = tdbTbcMoveTo(pStbCur->pCur, &suid, sizeof(suid), &c);
if (c > 0) {
- (void)tdbTbcMoveToNext(pStbCur->pCur);
+ ret = tdbTbcMoveToNext(pStbCur->pCur);
}
return pStbCur;
@@ -670,12 +677,12 @@ int32_t metaGetTbTSchemaEx(SMeta *pMeta, tb_uid_t suid, tb_uid_t uid, int32_t sv
}
if (c < 0) {
- (void)tdbTbcMoveToPrev(pSkmDbC);
+ int32_t ret = tdbTbcMoveToPrev(pSkmDbC);
}
const void *pKey = NULL;
int32_t nKey = 0;
- (void)tdbTbcGet(pSkmDbC, &pKey, &nKey, NULL, NULL);
+ int32_t ret = tdbTbcGet(pSkmDbC, &pKey, &nKey, NULL, NULL);
if (((SSkmDbKey *)pKey)->uid != skmDbKey.uid) {
metaULock(pMeta);
@@ -805,9 +812,9 @@ SMSmaCursor *metaOpenSmaCursor(SMeta *pMeta, tb_uid_t uid) {
// move to the suid
smaIdxKey.uid = uid;
smaIdxKey.smaUid = INT64_MIN;
- (void)tdbTbcMoveTo(pSmaCur->pCur, &smaIdxKey, sizeof(smaIdxKey), &c);
+ ret = tdbTbcMoveTo(pSmaCur->pCur, &smaIdxKey, sizeof(smaIdxKey), &c);
if (c > 0) {
- (void)tdbTbcMoveToNext(pSmaCur->pCur);
+ ret = tdbTbcMoveToNext(pSmaCur->pCur);
}
return pSmaCur;
@@ -916,7 +923,7 @@ STSmaWrapper *metaGetSmaInfoByTable(SMeta *pMeta, tb_uid_t uid, bool deepCopy) {
_err:
metaReaderClear(&mr);
taosArrayDestroy(pSmaIds);
- (void)tFreeTSmaWrapper(pSW, deepCopy);
+ pSW = tFreeTSmaWrapper(pSW, deepCopy);
return NULL;
}
@@ -1453,7 +1460,7 @@ int32_t metaGetTableTagsByUids(void *pVnode, int64_t suid, SArray *uidList) {
if (!p->pTagVal) {
if (isLock) metaULock(pMeta);
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
memcpy(p->pTagVal, val, len);
tdbFree(val);
@@ -1504,13 +1511,13 @@ int32_t metaGetTableTags(void *pVnode, uint64_t suid, SArray *pUidTagInfo) {
if (!info.pTagVal) {
metaCloseCtbCursor(pCur);
taosHashCleanup(pSepecifiedUidMap);
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
memcpy(info.pTagVal, pCur->pVal, pCur->vLen);
if (taosArrayPush(pUidTagInfo, &info) == NULL) {
metaCloseCtbCursor(pCur);
taosHashCleanup(pSepecifiedUidMap);
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
}
} else { // only the specified tables need to be added
@@ -1531,7 +1538,7 @@ int32_t metaGetTableTags(void *pVnode, uint64_t suid, SArray *pUidTagInfo) {
if (!pTagInfo->pTagVal) {
metaCloseCtbCursor(pCur);
taosHashCleanup(pSepecifiedUidMap);
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
memcpy(pTagInfo->pTagVal, pCur->pVal, pCur->vLen);
}
@@ -1583,7 +1590,10 @@ int32_t metaGetInfo(SMeta *pMeta, int64_t uid, SMetaInfo *pInfo, SMetaReader *pR
}
// upsert the cache
metaWLock(pMeta);
- (void)metaCacheUpsert(pMeta, pInfo);
+ int32_t ret = metaCacheUpsert(pMeta, pInfo);
+ if (ret != 0) {
+ metaError("vgId:%d, failed to upsert cache, uid:%" PRId64, TD_VID(pMeta->pVnode), uid);
+ }
metaULock(pMeta);
if (lock) {
@@ -1633,7 +1643,12 @@ int32_t metaGetStbStats(void *pVnode, int64_t uid, int64_t *numOfTables, int32_t
// upsert the cache
metaWLock(pVnodeObj->pMeta);
- (void)metaStatsCacheUpsert(pVnodeObj->pMeta, &state);
+
+ int32_t ret = metaStatsCacheUpsert(pVnodeObj->pMeta, &state);
+ if (ret) {
+ metaError("failed to upsert stats, uid:%" PRId64 ", ctbNum:%" PRId64 ", colNum:%d", uid, ctbNum, colNum);
+ }
+
metaULock(pVnodeObj->pMeta);
_exit:
@@ -1646,6 +1661,10 @@ void metaUpdateStbStats(SMeta *pMeta, int64_t uid, int64_t deltaCtb, int32_t del
if (metaStatsCacheGet(pMeta, uid, &stats) == TSDB_CODE_SUCCESS) {
stats.ctbNum += deltaCtb;
stats.colNum += deltaCol;
- (void)metaStatsCacheUpsert(pMeta, &stats);
+ int32_t code = metaStatsCacheUpsert(pMeta, &stats);
+ if (code) {
+ metaError("vgId:%d, failed to update stats, uid:%" PRId64 ", ctbNum:%" PRId64 ", colNum:%d",
+ TD_VID(pMeta->pVnode), uid, deltaCtb, deltaCol);
+ }
}
}
diff --git a/source/dnode/vnode/src/meta/metaSma.c b/source/dnode/vnode/src/meta/metaSma.c
index 371cc6ff21..493518462e 100644
--- a/source/dnode/vnode/src/meta/metaSma.c
+++ b/source/dnode/vnode/src/meta/metaSma.c
@@ -106,7 +106,7 @@ static int metaSaveSmaToDB(SMeta *pMeta, const SMetaEntry *pME) {
pVal = taosMemoryMalloc(vLen);
if (pVal == NULL) {
- terrno = TSDB_CODE_OUT_OF_MEMORY;
goto _err;
}
diff --git a/source/dnode/vnode/src/meta/metaSnapshot.c b/source/dnode/vnode/src/meta/metaSnapshot.c
index 0f77ff5e01..12ef5088b8 100644
--- a/source/dnode/vnode/src/meta/metaSnapshot.c
+++ b/source/dnode/vnode/src/meta/metaSnapshot.c
@@ -98,7 +98,7 @@ int32_t metaSnapRead(SMetaSnapReader* pReader, uint8_t** ppData) {
*ppData = taosMemoryMalloc(sizeof(SSnapDataHdr) + nData);
if (*ppData == NULL) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
goto _exit;
}
@@ -275,7 +275,7 @@ static int32_t saveSuperTableInfoForChildTable(SMetaEntry* me, SHashObj* suidInf
STableInfoForChildTable dataTmp = {0};
dataTmp.tableName = taosStrdup(me->name);
if (dataTmp.tableName == NULL) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
goto END;
}
dataTmp.schemaRow = tCloneSSchemaWrapper(&me->stbEntry.schemaRow);
diff --git a/source/dnode/vnode/src/meta/metaTable.c b/source/dnode/vnode/src/meta/metaTable.c
index 5bbb41a7b0..0fb8ca3fb1 100644
--- a/source/dnode/vnode/src/meta/metaTable.c
+++ b/source/dnode/vnode/src/meta/metaTable.c
@@ -151,7 +151,7 @@ static int metaSaveJsonVarToIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const
if (pTagVal->nData > 0) {
char *val = taosMemoryCalloc(1, pTagVal->nData + VARSTR_HEADER_SIZE);
if (val == NULL) {
- TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, NULL, _exception);
+ TAOS_CHECK_GOTO(terrno, NULL, _exception);
}
int32_t len = taosUcs4ToMbs((TdUcs4 *)pTagVal->pData, pTagVal->nData, val + VARSTR_HEADER_SIZE);
if (len < 0) {
@@ -175,7 +175,11 @@ static int metaSaveJsonVarToIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const
}
if (term != NULL) {
- (void)indexMultiTermAdd(terms, term);
+ int32_t ret = indexMultiTermAdd(terms, term);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to add term to multi term, uid: %" PRId64 ", key: %s, type: %d, ret: %d",
+ TD_VID(pMeta->pVnode), tuid, key, type, ret);
+ }
} else {
code = terrno;
goto _exception;
@@ -213,7 +217,7 @@ int metaDelJsonVarFromIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const SSche
SIndexMultiTerm *terms = indexMultiTermCreate();
if (terms == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
int16_t nCols = taosArrayGetSize(pTagVals);
@@ -231,7 +235,7 @@ int metaDelJsonVarFromIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const SSche
if (pTagVal->nData > 0) {
char *val = taosMemoryCalloc(1, pTagVal->nData + VARSTR_HEADER_SIZE);
if (val == NULL) {
- TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, NULL, _exception);
+ TAOS_CHECK_GOTO(terrno, NULL, _exception);
}
int32_t len = taosUcs4ToMbs((TdUcs4 *)pTagVal->pData, pTagVal->nData, val + VARSTR_HEADER_SIZE);
if (len < 0) {
@@ -254,7 +258,11 @@ int metaDelJsonVarFromIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const SSche
term = indexTermCreate(suid, DEL_VALUE, TSDB_DATA_TYPE_BOOL, key, nKey, (const char *)&val, len);
}
if (term != NULL) {
- (void)indexMultiTermAdd(terms, term);
+ int32_t ret = indexMultiTermAdd(terms, term);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to add term to multi term, uid: %" PRId64 ", key: %s, type: %d, ret: %d",
+ TD_VID(pMeta->pVnode), tuid, key, type, ret);
+ }
} else {
code = terrno;
goto _exception;
@@ -351,6 +359,7 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb
int c = 0;
int rc = 0;
int32_t lino;
+ int32_t ret;
// check if super table exists
rc = tdbTbGet(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, &pData, &nData);
@@ -393,25 +402,56 @@ int metaDropSTable(SMeta *pMeta, int64_t verison, SVDropStbReq *pReq, SArray *tb
tdbTbcClose(pCtbIdxc);
- (void)tsdbCacheDropSubTables(pMeta->pVnode->pTsdb, tbUidList, pReq->suid);
+ ret = tsdbCacheDropSubTables(pMeta->pVnode->pTsdb, tbUidList, pReq->suid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name, pReq->suid,
+ tstrerror(terrno));
+ }
metaWLock(pMeta);
for (int32_t iChild = 0; iChild < taosArrayGetSize(tbUidList); iChild++) {
tb_uid_t uid = *(tb_uid_t *)taosArrayGet(tbUidList, iChild);
- (void)metaDropTableByUid(pMeta, uid, NULL, NULL, NULL);
+ ret = metaDropTableByUid(pMeta, uid, NULL, NULL, NULL);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(terrno));
+ }
}
// drop super table
_drop_super_table:
tdbTbGet(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), &pData, &nData);
- (void)tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = ((SUidIdxVal *)pData)[0].version, .uid = pReq->suid},
+ ret = tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = ((SUidIdxVal *)pData)[0].version, .uid = pReq->suid},
sizeof(STbDbKey), pMeta->txn);
- (void)tdbTbDelete(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, pMeta->txn);
- (void)tdbTbDelete(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), pMeta->txn);
- (void)tdbTbDelete(pMeta->pSuidIdx, &pReq->suid, sizeof(tb_uid_t), pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name, pReq->suid,
+ tstrerror(terrno));
+ }
- (void)metaStatsCacheDrop(pMeta, pReq->suid);
+ ret = tdbTbDelete(pMeta->pNameIdx, pReq->name, strlen(pReq->name) + 1, pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name, pReq->suid,
+ tstrerror(terrno));
+ }
+
+ ret = tdbTbDelete(pMeta->pUidIdx, &pReq->suid, sizeof(tb_uid_t), pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name, pReq->suid,
+ tstrerror(terrno));
+ }
+
+ ret = tdbTbDelete(pMeta->pSuidIdx, &pReq->suid, sizeof(tb_uid_t), pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name, pReq->suid,
+ tstrerror(terrno));
+ }
+
+ ret = metaStatsCacheDrop(pMeta, pReq->suid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name, pReq->suid,
+ tstrerror(terrno));
+ }
metaULock(pMeta);
@@ -519,7 +559,14 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
}
memcpy(oStbEntry.pBuf, pData, nData);
tDecoderInit(&dc, oStbEntry.pBuf, nData);
- (void)metaDecodeEntry(&dc, &oStbEntry);
+ ret = metaDecodeEntry(&dc, &oStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to decode stb:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ tdbTbcClose(pTbDbc);
+ tdbTbcClose(pUidIdxc);
+ return terrno;
+ }
nStbEntry.version = version;
nStbEntry.type = TSDB_SUPER_TABLE;
@@ -550,7 +597,7 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
int8_t col_type = pReq->schemaRow.pSchema[nCols - 1].type;
TAOS_CHECK_RETURN(metaGetSubtables(pMeta, pReq->suid, uids));
- (void)tsdbCacheNewSTableColumn(pTsdb, uids, cid, col_type);
+ TAOS_CHECK_RETURN(tsdbCacheNewSTableColumn(pTsdb, uids, cid, col_type));
} else if (deltaCol == -1) {
int16_t cid = -1;
bool hasPrimaryKey = false;
@@ -566,7 +613,7 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
if (cid != -1) {
TAOS_CHECK_RETURN(metaGetSubtables(pMeta, pReq->suid, uids));
- (void)tsdbCacheDropSTableColumn(pTsdb, uids, cid, hasPrimaryKey);
+ TAOS_CHECK_RETURN(tsdbCacheDropSTableColumn(pTsdb, uids, cid, hasPrimaryKey));
}
}
if (uids) taosArrayDestroy(uids);
@@ -575,25 +622,41 @@ int metaAlterSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
metaWLock(pMeta);
// compare two entry
if (oStbEntry.stbEntry.schemaRow.version != pReq->schemaRow.version) {
- (void)metaSaveToSkmDb(pMeta, &nStbEntry);
+ ret = metaSaveToSkmDb(pMeta, &nStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to save skm db:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ }
}
// update table.db
- (void)metaSaveToTbDb(pMeta, &nStbEntry);
+ ret = metaSaveToTbDb(pMeta, &nStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to save tb db:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ }
// update uid index
- (void)metaUpdateUidIdx(pMeta, &nStbEntry);
+ ret = metaUpdateUidIdx(pMeta, &nStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to update uid idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ }
// metaStatsCacheDrop(pMeta, nStbEntry.uid);
if (updStat) {
- (void)metaUpdateStbStats(pMeta, pReq->suid, 0, deltaCol);
+ metaUpdateStbStats(pMeta, pReq->suid, 0, deltaCol);
}
metaULock(pMeta);
if (updStat) {
int64_t ctbNum;
- (void)metaGetStbStats(pMeta->pVnode, pReq->suid, &ctbNum, NULL);
+ ret = metaGetStbStats(pMeta->pVnode, pReq->suid, &ctbNum, NULL);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to get stb stats:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ }
pMeta->pVnode->config.vndStats.numOfTimeSeries += (ctbNum * deltaCol);
metaTimeSeriesNotifyCheck(pMeta);
}
@@ -745,7 +808,11 @@ int metaAddIndexToSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
}
metaWLock(pMeta);
- (void)tdbTbUpsert(pMeta->pTagIdx, pTagIdxKey, nTagIdxKey, NULL, 0, pMeta->txn);
+ ret = tdbTbUpsert(pMeta->pTagIdx, pTagIdxKey, nTagIdxKey, NULL, 0, pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to upsert tag idx key:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ }
metaULock(pMeta);
metaDestroyTagIdxKey(pTagIdxKey);
@@ -762,9 +829,17 @@ int metaAddIndexToSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
metaWLock(pMeta);
// update table.db
- (void)metaSaveToTbDb(pMeta, &nStbEntry);
+ ret = metaSaveToTbDb(pMeta, &nStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to save tb db:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ }
// update uid index
- (void)metaUpdateUidIdx(pMeta, &nStbEntry);
+ ret = metaUpdateUidIdx(pMeta, &nStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to update uid idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->suid, tstrerror(ret));
+ }
metaULock(pMeta);
if (oStbEntry.pBuf) taosMemoryFree(oStbEntry.pBuf);
@@ -896,7 +971,11 @@ int metaDropIndexFromSTable(SMeta *pMeta, int64_t version, SDropIndexReq *pReq)
}
metaWLock(pMeta);
- (void)tdbTbDelete(pMeta->pTagIdx, pTagIdxKey, nTagIdxKey, pMeta->txn);
+ ret = tdbTbDelete(pMeta->pTagIdx, pTagIdxKey, nTagIdxKey, pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete tag idx key:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->stb,
+ pReq->stbUid, tstrerror(ret));
+ }
metaULock(pMeta);
metaDestroyTagIdxKey(pTagIdxKey);
pTagIdxKey = NULL;
@@ -932,9 +1011,17 @@ int metaDropIndexFromSTable(SMeta *pMeta, int64_t version, SDropIndexReq *pReq)
metaWLock(pMeta);
// update table.db
- (void)metaSaveToTbDb(pMeta, &nStbEntry);
+ ret = metaSaveToTbDb(pMeta, &nStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to save tb db:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->stb,
+ pReq->stbUid, tstrerror(ret));
+ }
// update uid index
- (void)metaUpdateUidIdx(pMeta, &nStbEntry);
+ ret = metaUpdateUidIdx(pMeta, &nStbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to update uid idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->stb,
+ pReq->stbUid, tstrerror(ret));
+ }
metaULock(pMeta);
tDeleteSchemaWrapper(tag);
@@ -958,6 +1045,7 @@ _err:
int metaCreateTable(SMeta *pMeta, int64_t ver, SVCreateTbReq *pReq, STableMetaRsp **pMetaRsp) {
SMetaEntry me = {0};
SMetaReader mr = {0};
+ int32_t ret;
// validate message
if (pReq->type != TSDB_CHILD_TABLE && pReq->type != TSDB_NORMAL_TABLE) {
@@ -1033,18 +1121,34 @@ int metaCreateTable(SMeta *pMeta, int64_t ver, SVCreateTbReq *pReq, STableMetaRs
if (!sysTbl) {
int32_t nCols = 0;
- (void)metaGetStbStats(pMeta->pVnode, me.ctbEntry.suid, 0, &nCols);
+ ret = metaGetStbStats(pMeta->pVnode, me.ctbEntry.suid, 0, &nCols);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to get stb stats:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->ctb.suid, tstrerror(ret));
+ }
pStats->numOfTimeSeries += nCols - 1;
}
metaWLock(pMeta);
- (void)metaUpdateStbStats(pMeta, me.ctbEntry.suid, 1, 0);
- (void)metaUidCacheClear(pMeta, me.ctbEntry.suid);
- (void)metaTbGroupCacheClear(pMeta, me.ctbEntry.suid);
+ metaUpdateStbStats(pMeta, me.ctbEntry.suid, 1, 0);
+ ret = metaUidCacheClear(pMeta, me.ctbEntry.suid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to clear uid cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->ctb.suid, tstrerror(ret));
+ }
+ ret = metaTbGroupCacheClear(pMeta, me.ctbEntry.suid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to clear group cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name,
+ pReq->ctb.suid, tstrerror(ret));
+ }
metaULock(pMeta);
if (!TSDB_CACHE_NO(pMeta->pVnode->config)) {
- (void)tsdbCacheNewTable(pMeta->pVnode->pTsdb, me.uid, me.ctbEntry.suid, NULL);
+ ret = tsdbCacheNewTable(pMeta->pVnode->pTsdb, me.uid, me.ctbEntry.suid, NULL);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to create table:%s since %s", TD_VID(pMeta->pVnode), pReq->name, tstrerror(ret));
+ goto _err;
+ }
}
} else {
me.ntbEntry.btime = pReq->btime;
@@ -1060,7 +1164,11 @@ int metaCreateTable(SMeta *pMeta, int64_t ver, SVCreateTbReq *pReq, STableMetaRs
pStats->numOfNTimeSeries += me.ntbEntry.schemaRow.nCols - 1;
if (!TSDB_CACHE_NO(pMeta->pVnode->config)) {
- (void)tsdbCacheNewTable(pMeta->pVnode->pTsdb, me.uid, -1, &me.ntbEntry.schemaRow);
+ ret = tsdbCacheNewTable(pMeta->pVnode->pTsdb, me.uid, -1, &me.ntbEntry.schemaRow);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to create table:%s since %s", TD_VID(pMeta->pVnode), pReq->name, tstrerror(ret));
+ goto _err;
+ }
}
}
@@ -1078,7 +1186,11 @@ int metaCreateTable(SMeta *pMeta, int64_t ver, SVCreateTbReq *pReq, STableMetaRs
(*pMetaRsp)->suid = pReq->ctb.suid;
strcpy((*pMetaRsp)->tbName, pReq->name);
} else {
- (void)metaUpdateMetaRsp(pReq->uid, pReq->name, &pReq->ntb.schemaRow, *pMetaRsp);
+ ret = metaUpdateMetaRsp(pReq->uid, pReq->name, &pReq->ntb.schemaRow, *pMetaRsp);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to update meta rsp:%s since %s", TD_VID(pMeta->pVnode), pReq->name,
+ tstrerror(ret));
+ }
for (int32_t i = 0; i < pReq->colCmpr.nCols; i++) {
SColCmpr *p = &pReq->colCmpr.pColCmpr[i];
(*pMetaRsp)->pSchemaExt[i].colId = p->id;
@@ -1135,7 +1247,11 @@ int metaDropTable(SMeta *pMeta, int64_t version, SVDropTbReq *pReq, SArray *tbUi
}
if (!TSDB_CACHE_NO(pMeta->pVnode->config)) {
- (void)tsdbCacheDropTable(pMeta->pVnode->pTsdb, uid, suid, NULL);
+ int32_t ret = tsdbCacheDropTable(pMeta->pVnode->pTsdb, uid, suid, NULL);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop table:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), pReq->name, uid,
+ tstrerror(ret));
+ }
}
}
@@ -1233,7 +1349,11 @@ static int32_t metaFilterTableByHash(SMeta *pMeta, SArray *uidList) {
SMetaEntry me = {0};
SDecoder dc = {0};
tDecoderInit(&dc, pData, nData);
- (void)metaDecodeEntry(&dc, &me);
+ code = metaDecodeEntry(&dc, &me);
+ if (code < 0) {
+ tDecoderClear(&dc);
+ return code;
+ }
if (me.type != TSDB_SUPER_TABLE) {
char tbFName[TSDB_TABLE_FNAME_LEN + 1];
@@ -1340,6 +1460,7 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
int rc = 0;
SMetaEntry e = {0};
SDecoder dc = {0};
+ int32_t ret = 0;
rc = tdbTbGet(pMeta->pUidIdx, &uid, sizeof(uid), &pData, &nData);
if (rc < 0) {
@@ -1347,7 +1468,11 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
}
int64_t version = ((SUidIdxVal *)pData)[0].version;
- (void)tdbTbGet(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), &pData, &nData);
+ rc = tdbTbGet(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), &pData, &nData);
+ if (rc < 0) {
+ tdbFree(pData);
+ return rc;
+ }
tDecoderInit(&dc, pData, nData);
rc = metaDecodeEntry(&dc, &e);
@@ -1370,7 +1495,12 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
SMetaEntry stbEntry = {0};
tDecoderInit(&tdc, tData, tLen);
- (void)metaDecodeEntry(&tdc, &stbEntry);
+ int32_t ret = metaDecodeEntry(&tdc, &stbEntry);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to decode child table:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name,
+ e.ctbEntry.suid, tstrerror(ret));
+ return ret;
+ }
if (pSysTbl) *pSysTbl = metaTbInFilterCache(pMeta, stbEntry.name, 1) ? 1 : 0;
@@ -1378,7 +1508,11 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
SSchemaWrapper *pTagSchema = &stbEntry.stbEntry.schemaTag;
if (pTagSchema->nCols == 1 && pTagSchema->pSchema[0].type == TSDB_DATA_TYPE_JSON) {
pTagColumn = &stbEntry.stbEntry.schemaTag.pSchema[0];
- (void)metaDelJsonVarFromIdx(pMeta, &e, pTagColumn);
+ ret = metaDelJsonVarFromIdx(pMeta, &e, pTagColumn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete json var from idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode),
+ e.name, e.uid, tstrerror(ret));
+ }
} else {
for (int i = 0; i < pTagSchema->nCols; i++) {
pTagColumn = &stbEntry.stbEntry.schemaTag.pSchema[i];
@@ -1406,7 +1540,11 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
if (metaCreateTagIdxKey(e.ctbEntry.suid, pTagColumn->colId, pTagData, nTagData, pTagColumn->type, uid,
&pTagIdxKey, &nTagIdxKey) == 0) {
- (void)tdbTbDelete(pMeta->pTagIdx, pTagIdxKey, nTagIdxKey, pMeta->txn);
+ ret = tdbTbDelete(pMeta->pTagIdx, pTagIdxKey, nTagIdxKey, pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete tag idx key:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode),
+ e.name, e.uid, tstrerror(ret));
+ }
}
metaDestroyTagIdxKey(pTagIdxKey);
pTagIdxKey = NULL;
@@ -1418,9 +1556,21 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
}
}
- (void)tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), pMeta->txn);
- (void)tdbTbDelete(pMeta->pNameIdx, e.name, strlen(e.name) + 1, pMeta->txn);
- (void)tdbTbDelete(pMeta->pUidIdx, &uid, sizeof(uid), pMeta->txn);
+ ret = tdbTbDelete(pMeta->pTbDb, &(STbDbKey){.version = version, .uid = uid}, sizeof(STbDbKey), pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete table:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
+ ret = tdbTbDelete(pMeta->pNameIdx, e.name, strlen(e.name) + 1, pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete name idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
+ ret = tdbTbDelete(pMeta->pUidIdx, &uid, sizeof(uid), pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete uid idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
if (e.type == TSDB_CHILD_TABLE || e.type == TSDB_NORMAL_TABLE) metaDeleteBtimeIdx(pMeta, &e);
if (e.type == TSDB_NORMAL_TABLE) metaDeleteNcolIdx(pMeta, &e);
@@ -1428,13 +1578,25 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
if (e.type != TSDB_SUPER_TABLE) metaDeleteTtl(pMeta, &e);
if (e.type == TSDB_CHILD_TABLE) {
- (void)tdbTbDelete(pMeta->pCtbIdx, &(SCtbIdxKey){.suid = e.ctbEntry.suid, .uid = uid}, sizeof(SCtbIdxKey),
- pMeta->txn);
+ ret =
+ tdbTbDelete(pMeta->pCtbIdx, &(SCtbIdxKey){.suid = e.ctbEntry.suid, .uid = uid}, sizeof(SCtbIdxKey), pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete ctb idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
--pMeta->pVnode->config.vndStats.numOfCTables;
- (void)metaUpdateStbStats(pMeta, e.ctbEntry.suid, -1, 0);
- (void)metaUidCacheClear(pMeta, e.ctbEntry.suid);
- (void)metaTbGroupCacheClear(pMeta, e.ctbEntry.suid);
+ metaUpdateStbStats(pMeta, e.ctbEntry.suid, -1, 0);
+ ret = metaUidCacheClear(pMeta, e.ctbEntry.suid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to clear uid cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name,
+ e.ctbEntry.suid, tstrerror(ret));
+ }
+ ret = metaTbGroupCacheClear(pMeta, e.ctbEntry.suid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to clear group cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name,
+ e.ctbEntry.suid, tstrerror(ret));
+ }
/*
if (!TSDB_CACHE_NO(pMeta->pVnode->config)) {
tsdbCacheDropTable(pMeta->pVnode->pTsdb, e.uid, e.ctbEntry.suid, NULL);
@@ -1452,16 +1614,36 @@ static int metaDropTableByUid(SMeta *pMeta, tb_uid_t uid, int *type, tb_uid_t *p
}
*/
} else if (e.type == TSDB_SUPER_TABLE) {
- (void)tdbTbDelete(pMeta->pSuidIdx, &e.uid, sizeof(tb_uid_t), pMeta->txn);
+ ret = tdbTbDelete(pMeta->pSuidIdx, &e.uid, sizeof(tb_uid_t), pMeta->txn);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to delete suid idx:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
// drop schema.db (todo)
- (void)metaStatsCacheDrop(pMeta, uid);
- (void)metaUidCacheClear(pMeta, uid);
- (void)metaTbGroupCacheClear(pMeta, uid);
+ ret = metaStatsCacheDrop(pMeta, uid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop stats cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
+ ret = metaUidCacheClear(pMeta, uid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to clear uid cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
+ ret = metaTbGroupCacheClear(pMeta, uid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to clear group cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name,
+ e.uid, tstrerror(ret));
+ }
--pMeta->pVnode->config.vndStats.numOfSTables;
}
- (void)metaCacheDrop(pMeta, uid);
+ ret = metaCacheDrop(pMeta, uid);
+ if (ret < 0) {
+ metaError("vgId:%d, failed to drop cache:%s uid:%" PRId64 " since %s", TD_VID(pMeta->pVnode), e.name, e.uid,
+ tstrerror(ret));
+ }
tDecoderClear(&dc);
tdbFree(pData);
@@ -1535,21 +1717,21 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
TBC *pUidIdxc = NULL;
TAOS_CHECK_RETURN(tdbTbcOpen(pMeta->pUidIdx, &pUidIdxc, NULL));
- (void)tdbTbcMoveTo(pUidIdxc, &uid, sizeof(uid), &c);
+ ret = tdbTbcMoveTo(pUidIdxc, &uid, sizeof(uid), &c);
if (c != 0) {
tdbTbcClose(pUidIdxc);
metaError("meta/table: invalide c: %" PRId32 " alt tb column failed.", c);
return TSDB_CODE_FAILED;
}
- (void)tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData);
+ ret = tdbTbcGet(pUidIdxc, NULL, NULL, &pData, &nData);
oversion = ((SUidIdxVal *)pData)[0].version;
// search table.db
TBC *pTbDbc = NULL;
TAOS_CHECK_RETURN(tdbTbcOpen(pMeta->pTbDb, &pTbDbc, NULL));
- (void)tdbTbcMoveTo(pTbDbc, &((STbDbKey){.uid = uid, .version = oversion}), sizeof(STbDbKey), &c);
+ ret = tdbTbcMoveTo(pTbDbc, &((STbDbKey){.uid = uid, .version = oversion}), sizeof(STbDbKey), &c);
if (c != 0) {
tdbTbcClose(pUidIdxc);
tdbTbcClose(pTbDbc);
@@ -1557,13 +1739,13 @@ static int metaAlterTableColumn(SMeta *pMeta, int64_t version, SVAlterTbReq *pAl
return TSDB_CODE_FAILED;
}
- (void)tdbTbcGet(pTbDbc, NULL, NULL, &pData, &nData);
+ ret = tdbTbcGet(pTbDbc, NULL, NULL, &pData, &nData);
// get table entry
SDecoder dc = {0};
if ((entry.pBuf = taosMemoryMalloc(nData)) == NULL) {
- (void)tdbTbcClose(pUidIdxc);
- (void)tdbTbcClose(pTbDbc);
+ tdbTbcClose(pUidIdxc);
+ tdbTbcClose(pTbDbc);
return terrno;
}
memcpy(entry.pBuf, pData, nData);
@@ -2578,7 +2760,7 @@ int metaCreateTagIdxKey(tb_uid_t suid, int32_t cid, const void *pTagData, int32_
*ppTagIdxKey = (STagIdxKey *)taosMemoryMalloc(*nTagIdxKey);
if (*ppTagIdxKey == NULL) {
- return terrno = TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
(*ppTagIdxKey)->suid = suid;
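Taken together, the metaTable.c hunks above apply one idiom: a call whose result used to be discarded with a (void) cast is now assigned to a local return code and, on failure, logged with the vgroup id, the object name and the decoded error string; only failures that leave the entry unusable abort the operation. A minimal sketch of that idiom, reusing the patch's own logging helpers but with doStep and objName as hypothetical placeholders rather than functions from this patch:

  int32_t ret = doStep(pMeta, objName);
  if (ret < 0) {
    // non-fatal in most hunks above: record the failure and continue
    metaError("vgId:%d, failed to do step:%s since %s", TD_VID(pMeta->pVnode), objName, tstrerror(ret));
  }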
diff --git a/source/dnode/vnode/src/meta/metaTtl.c b/source/dnode/vnode/src/meta/metaTtl.c
index 0b5b9280df..e3d6e2cf9b 100644
--- a/source/dnode/vnode/src/meta/metaTtl.c
+++ b/source/dnode/vnode/src/meta/metaTtl.c
@@ -53,12 +53,12 @@ int32_t ttlMgrOpen(STtlManger **ppTtlMgr, TDB *pEnv, int8_t rollback, const char
*ppTtlMgr = NULL;
STtlManger *pTtlMgr = (STtlManger *)tdbOsCalloc(1, sizeof(*pTtlMgr));
- if (pTtlMgr == NULL) TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ if (pTtlMgr == NULL) TAOS_RETURN(terrno);
char *logBuffer = (char *)tdbOsCalloc(1, strlen(logPrefix) + 1);
if (logBuffer == NULL) {
tdbOsFree(pTtlMgr);
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
(void)strcpy(logBuffer, logPrefix);
pTtlMgr->logPrefix = logBuffer;
diff --git a/source/dnode/vnode/src/sma/smaEnv.c b/source/dnode/vnode/src/sma/smaEnv.c
index c26708bd68..721c0130cf 100644
--- a/source/dnode/vnode/src/sma/smaEnv.c
+++ b/source/dnode/vnode/src/sma/smaEnv.c
@@ -213,11 +213,11 @@ static int32_t tdInitSmaStat(SSmaStat **pSmaStat, int8_t smaType, const SSma *pS
atomic_store_8(RSMA_TRIGGER_STAT(pRSmaStat), TASK_TRIGGER_STAT_INIT);
(void)tsem_init(&pRSmaStat->notEmpty, 0, 0);
if (!(pRSmaStat->blocks = taosArrayInit(1, sizeof(SSDataBlock)))) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TAOS_CHECK_GOTO(code, &lino, _exit);
}
SSDataBlock datablock = {.info.type = STREAM_CHECKPOINT};
- TSDB_CHECK_NULL(taosArrayPush(pRSmaStat->blocks, &datablock), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
+ TSDB_CHECK_NULL(taosArrayPush(pRSmaStat->blocks, &datablock), code, lino, _exit, terrno);
// init smaMgmt
TAOS_CHECK_GOTO(smaInit(), &lino, _exit);
diff --git a/source/dnode/vnode/src/sma/smaRollup.c b/source/dnode/vnode/src/sma/smaRollup.c
index 69819c87dc..14e79200aa 100644
--- a/source/dnode/vnode/src/sma/smaRollup.c
+++ b/source/dnode/vnode/src/sma/smaRollup.c
@@ -273,7 +273,7 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaStat
if (!taosCheckExistFile(taskInfDir)) {
char *s = taosStrdup(taskInfDir);
if (!s) {
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
if (taosMulMkDir(s) != 0) {
code = TAOS_SYSTEM_ERROR(errno);
@@ -294,7 +294,7 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaStat
pStreamTask->pMeta = pVnode->pTq->pStreamMeta;
pStreamTask->exec.qmsg = taosMemoryMalloc(strlen(RSMA_EXEC_TASK_FLAG) + 1);
if (!pStreamTask->exec.qmsg) {
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
(void)sprintf(pStreamTask->exec.qmsg, "%s", RSMA_EXEC_TASK_FLAG);
pStreamTask->chkInfo.checkpointId = streamMetaGetLatestCheckpointId(pStreamTask->pMeta);
@@ -321,7 +321,7 @@ static int32_t tdSetRSmaInfoItemParams(SSma *pSma, SRSmaParam *param, SRSmaStat
}
if (!(pItem->pResList = taosArrayInit(1, POINTER_BYTES))) {
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
if (pItem->fetchResultVer < pItem->submitReqVer) {
@@ -504,11 +504,11 @@ static int32_t tdUidStorePut(STbUidStore *pStore, tb_uid_t suid, tb_uid_t *uid)
if (uid) {
if (!pStore->tbUids) {
if (!(pStore->tbUids = taosArrayInit(1, sizeof(tb_uid_t)))) {
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
}
if (!taosArrayPush(pStore->tbUids, uid)) {
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
}
} else {
@@ -516,7 +516,7 @@ static int32_t tdUidStorePut(STbUidStore *pStore, tb_uid_t suid, tb_uid_t *uid)
if (!pStore->uidHash) {
pStore->uidHash = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_ENTRY_LOCK);
if (!pStore->uidHash) {
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
}
if (uid) {
@@ -524,16 +524,16 @@ static int32_t tdUidStorePut(STbUidStore *pStore, tb_uid_t suid, tb_uid_t *uid)
if (uidArray && ((uidArray = *(SArray **)uidArray))) {
if (!taosArrayPush(uidArray, uid)) {
taosArrayDestroy(uidArray);
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
} else {
SArray *pUidArray = taosArrayInit(1, sizeof(tb_uid_t));
if (!pUidArray) {
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
if (!taosArrayPush(pUidArray, uid)) {
taosArrayDestroy(pUidArray);
- TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_RETURN(terrno);
}
TAOS_CHECK_RETURN(taosHashPut(pStore->uidHash, &suid, sizeof(suid), &pUidArray, sizeof(pUidArray)));
}
@@ -634,7 +634,7 @@ static int32_t tdRSmaProcessDelReq(SSma *pSma, int64_t suid, int8_t level, SBatc
void *pBuf = rpcMallocCont(len + sizeof(SMsgHead));
if (!pBuf) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TSDB_CHECK_CODE(code, lino, _exit);
}
@@ -696,7 +696,7 @@ static int32_t tdRSmaExecAndSubmitResult(SSma *pSma, qTaskInfo_t taskInfo, SRSma
SBatchDeleteReq deleteReq = {.suid = suid, .level = pItem->level};
deleteReq.deleteReqs = taosArrayInit(0, sizeof(SSingleDeleteReq));
if (!deleteReq.deleteReqs) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TSDB_CHECK_CODE(code, lino, _exit);
}
code = tqBuildDeleteReq(pSma->pVnode->pTq, NULL, output, &deleteReq, "", true);
@@ -1065,7 +1065,7 @@ static int32_t tdRSmaRestoreQTaskInfoInit(SSma *pSma, int64_t *nTables) {
tb_uid_t suid = 0;
if (!(suidList = taosArrayInit(1, sizeof(tb_uid_t)))) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TSDB_CHECK_CODE(code, lino, _exit);
}
@@ -1085,7 +1085,7 @@ static int32_t tdRSmaRestoreQTaskInfoInit(SSma *pSma, int64_t *nTables) {
int64_t nRsmaTables = 0;
metaReaderDoInit(&mr, SMA_META(pSma), META_READER_LOCK);
if (!(uidStore.tbUids = taosArrayInit(1024, sizeof(tb_uid_t)))) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TSDB_CHECK_CODE(code, lino, _exit);
}
@@ -1518,7 +1518,7 @@ static int32_t tdRSmaBatchExec(SSma *pSma, SRSmaInfo *pInfo, STaosQall *qall, SA
version = packData.ver;
if (!taosArrayPush(pSubmitArr, &packData)) {
taosFreeQitem(msg);
- TAOS_CHECK_EXIT(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_CHECK_EXIT(terrno);
}
++nSubmit;
} else if (inputType == STREAM_INPUT__REF_DATA_BLOCK) {
@@ -1608,7 +1608,7 @@ int32_t tdRSmaProcessExecImpl(SSma *pSma, ERsmaExecType type) {
if (!(pSubmitArr =
taosArrayInit(TMIN(RSMA_EXEC_BATCH_SIZE, atomic_load_64(&pRSmaStat->nBufItems)), sizeof(SPackedData)))) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TSDB_CHECK_CODE(code, lino, _exit);
}
diff --git a/source/dnode/vnode/src/sma/smaTimeRange.c b/source/dnode/vnode/src/sma/smaTimeRange.c
index f66282ac25..58dafa4f8b 100644
--- a/source/dnode/vnode/src/sma/smaTimeRange.c
+++ b/source/dnode/vnode/src/sma/smaTimeRange.c
@@ -170,12 +170,12 @@ int32_t smaBlockToSubmit(SVnode *pVnode, const SArray *pBlocks, const STSchema *
pReq->aSubmitTbData = taosArrayInit(1, sizeof(SSubmitTbData));
if (pReq->aSubmitTbData == NULL) {
- TAOS_CHECK_EXIT(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_CHECK_EXIT(terrno);
}
pTableIndexMap = taosHashInit(numOfBlocks, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
if (pTableIndexMap == NULL) {
- TAOS_CHECK_EXIT(TSDB_CODE_OUT_OF_MEMORY);
+ TAOS_CHECK_EXIT(terrno);
}
// SSubmitTbData req
@@ -205,7 +205,7 @@ int32_t smaBlockToSubmit(SVnode *pVnode, const SArray *pBlocks, const STSchema *
}
if( taosArrayPush(pReq->aSubmitTbData, &tbData) == NULL) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
continue;
}
@@ -232,7 +232,7 @@ int32_t smaBlockToSubmit(SVnode *pVnode, const SArray *pBlocks, const STSchema *
SEncoder encoder;
len += sizeof(SSubmitReq2Msg);
if (!(pBuf = rpcMallocCont(len))) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TSDB_CHECK_CODE(code, lino, _exit);
}
@@ -277,7 +277,7 @@ static int32_t tsmaProcessDelReq(SSma *pSma, int64_t indexUid, SBatchDeleteReq *
void *pBuf = rpcMallocCont(len + sizeof(SMsgHead));
if (!pBuf) {
- code = TSDB_CODE_OUT_OF_MEMORY;
+ code = terrno;
TSDB_CHECK_CODE(code, lino, _exit);
}
diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c
index a2c088de68..e5aefaedd2 100644
--- a/source/dnode/vnode/src/tq/tq.c
+++ b/source/dnode/vnode/src/tq/tq.c
@@ -74,7 +74,7 @@ int32_t tqOpen(const char* path, SVnode* pVnode) {
pTq->pHandle = taosHashInit(64, MurmurHash3_32, true, HASH_ENTRY_LOCK);
if (pTq->pHandle == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
taosHashSetFreeFp(pTq->pHandle, tqDestroyTqHandle);
@@ -82,18 +82,18 @@ int32_t tqOpen(const char* path, SVnode* pVnode) {
pTq->pPushMgr = taosHashInit(64, MurmurHash3_32, false, HASH_NO_LOCK);
if (pTq->pPushMgr == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
pTq->pCheckInfo = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
if (pTq->pCheckInfo == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
taosHashSetFreeFp(pTq->pCheckInfo, (FDelete)tDeleteSTqCheckInfo);
pTq->pOffset = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_VARCHAR), true, HASH_ENTRY_LOCK);
if (pTq->pOffset == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
taosHashSetFreeFp(pTq->pOffset, (FDelete)tDeleteSTqOffset);
@@ -464,7 +464,6 @@ int32_t tqProcessVgCommittedInfoReq(STQ* pTq, SRpcMsg* pMsg) {
void* buf = rpcMallocCont(len);
if (buf == NULL) {
- terrno = TSDB_CODE_OUT_OF_MEMORY;
return terrno;
}
SEncoder encoder = {0};
diff --git a/source/dnode/vnode/src/tq/tqMeta.c b/source/dnode/vnode/src/tq/tqMeta.c
index 460ed89a4b..b9cf26ee54 100644
--- a/source/dnode/vnode/src/tq/tqMeta.c
+++ b/source/dnode/vnode/src/tq/tqMeta.c
@@ -198,7 +198,7 @@ int32_t tqMetaGetOffset(STQ* pTq, const char* subkey, STqOffset** pOffset){
if (taosHashPut(pTq->pOffset, subkey, strlen(subkey), &offset, sizeof(STqOffset)) != 0) {
tDeleteSTqOffset(&offset);
tdbFree(data);
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
tdbFree(data);
@@ -346,7 +346,7 @@ int32_t tqMetaCreateHandle(STQ* pTq, SMqRebVgReq* req, STqHandle* handle) {
handle->execHandle.execDb.pFilterOutTbUid =
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_ENTRY_LOCK);
if(handle->execHandle.execDb.pFilterOutTbUid == NULL){
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
}else if(req->subType == TOPIC_SUB_TYPE__TABLE){
handle->execHandle.execTb.suid = req->suid;
diff --git a/source/dnode/vnode/src/tq/tqRead.c b/source/dnode/vnode/src/tq/tqRead.c
index eb5d6aafe7..6dc5453d50 100644
--- a/source/dnode/vnode/src/tq/tqRead.c
+++ b/source/dnode/vnode/src/tq/tqRead.c
@@ -577,7 +577,7 @@ static int32_t buildResSDataBlock(SSDataBlock* pBlock, SSchemaWrapper* pSchema,
int32_t code = blockDataAppendColInfo(pBlock, &colInfo);
if (code != TSDB_CODE_SUCCESS) {
blockDataFreeRes(pBlock);
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
}
} else {
diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c
index a7a705fa78..9a601c4c75 100644
--- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c
+++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c
@@ -309,6 +309,10 @@ int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList,
p->rowKey.pks[0].type = pPkCol->type;
if (IS_VAR_DATA_TYPE(pPkCol->type)) {
p->rowKey.pks[0].pData = taosMemoryCalloc(1, pPkCol->bytes);
+ if (p->rowKey.pks[0].pData == NULL) {
+ taosMemoryFree(p);
+ return terrno;
+ }
}
p->pkColumn = *pPkCol;
@@ -345,6 +349,10 @@ int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList,
}
p->idstr = taosStrdup(idstr);
+ if (idstr != NULL && p->idstr == NULL) {
+ tsdbCacherowsReaderClose(p);
+ return terrno;
+ }
code = taosThreadMutexInit(&p->readerMutex, NULL);
if (code) {
tsdbCacherowsReaderClose(p);
diff --git a/source/dnode/vnode/src/tsdb/tsdbMergeTree.c b/source/dnode/vnode/src/tsdb/tsdbMergeTree.c
index 3191ddb447..85ed223c6c 100644
--- a/source/dnode/vnode/src/tsdb/tsdbMergeTree.c
+++ b/source/dnode/vnode/src/tsdb/tsdbMergeTree.c
@@ -1051,7 +1051,9 @@ _end:
return code;
}
-void tMergeTreeAddIter(SMergeTree *pMTree, SLDataIter *pIter) { (void)tRBTreePut(&pMTree->rbt, (SRBTreeNode *)pIter); }
+void tMergeTreeAddIter(SMergeTree *pMTree, SLDataIter *pIter) {
+ SRBTreeNode *node = tRBTreePut(&pMTree->rbt, (SRBTreeNode *)pIter);
+}
bool tMergeTreeIgnoreEarlierTs(SMergeTree *pMTree) { return pMTree->ignoreEarlierTs; }
diff --git a/source/dnode/vnode/src/tsdb/tsdbRead2.c b/source/dnode/vnode/src/tsdb/tsdbRead2.c
index 731b733b52..991c7863ac 100644
--- a/source/dnode/vnode/src/tsdb/tsdbRead2.c
+++ b/source/dnode/vnode/src/tsdb/tsdbRead2.c
@@ -828,7 +828,11 @@ static int32_t loadFileBlockBrinInfo(STsdbReader* pReader, SArray* pIndexList, S
}
SFileDataBlockInfo blockInfo = {.tbBlockIdx = TARRAY_SIZE(pScanInfo->pBlockList)};
- recordToBlockInfo(&blockInfo, pRecord);
+ code = recordToBlockInfo(&blockInfo, pRecord);
+ if (code != TSDB_CODE_SUCCESS) {
+ clearBrinBlockIter(&iter);
+ return code;
+ }
void* p1 = taosArrayPush(pScanInfo->pBlockList, &blockInfo);
if (p1 == NULL) {
clearBrinBlockIter(&iter);
diff --git a/source/dnode/vnode/src/tsdb/tsdbReadUtil.c b/source/dnode/vnode/src/tsdb/tsdbReadUtil.c
index db987d07d6..24a0dca5c8 100644
--- a/source/dnode/vnode/src/tsdb/tsdbReadUtil.c
+++ b/source/dnode/vnode/src/tsdb/tsdbReadUtil.c
@@ -559,7 +559,7 @@ static int32_t fileDataBlockOrderCompar(const void* pLeft, const void* pRight, v
return pLeftBlock->offset > pRightBlock->offset ? 1 : -1;
}
-void recordToBlockInfo(SFileDataBlockInfo* pBlockInfo, SBrinRecord* record) {
+int32_t recordToBlockInfo(SFileDataBlockInfo* pBlockInfo, SBrinRecord* record) {
pBlockInfo->uid = record->uid;
pBlockInfo->firstKey = record->firstKey.key.ts;
pBlockInfo->lastKey = record->lastKey.key.ts;
@@ -580,17 +580,24 @@ void recordToBlockInfo(SFileDataBlockInfo* pBlockInfo, SBrinRecord* record) {
pBlockInfo->lastPk.val = record->lastKey.key.pks[0].val;
} else {
char* p = taosMemoryCalloc(1, pFirstKey->pks[0].nData + VARSTR_HEADER_SIZE);
+ if (p == NULL) {
+ return terrno;
+ }
memcpy(varDataVal(p), pFirstKey->pks[0].pData, pFirstKey->pks[0].nData);
varDataSetLen(p, pFirstKey->pks[0].nData);
pBlockInfo->firstPk.pData = (uint8_t*)p;
int32_t keyLen = record->lastKey.key.pks[0].nData;
p = taosMemoryCalloc(1, keyLen + VARSTR_HEADER_SIZE);
+ if (p == NULL) {
+ return terrno;
+ }
memcpy(varDataVal(p), record->lastKey.key.pks[0].pData, keyLen);
varDataSetLen(p, keyLen);
pBlockInfo->lastPk.pData = (uint8_t*)p;
}
}
+ return TSDB_CODE_SUCCESS;
}
static void freePkItem(void* pItem) {
diff --git a/source/dnode/vnode/src/tsdb/tsdbReadUtil.h b/source/dnode/vnode/src/tsdb/tsdbReadUtil.h
index cc37f20cf6..7c7bee8260 100644
--- a/source/dnode/vnode/src/tsdb/tsdbReadUtil.h
+++ b/source/dnode/vnode/src/tsdb/tsdbReadUtil.h
@@ -344,7 +344,7 @@ int32_t loadSttTombDataForAll(STsdbReader* pReader, SSttFileReader* pSttFileRead
int32_t getNumOfRowsInSttBlock(SSttFileReader* pSttFileReader, SSttBlockLoadInfo* pBlockLoadInfo,
TStatisBlkArray* pStatisBlkArray, uint64_t suid, const uint64_t* pUidList,
int32_t numOfTables, int32_t* pNumOfRows);
-void recordToBlockInfo(SFileDataBlockInfo* pBlockInfo, SBrinRecord* record);
+int32_t recordToBlockInfo(SFileDataBlockInfo* pBlockInfo, SBrinRecord* record);
void destroyLDataIter(SLDataIter* pIter);
int32_t adjustSttDataIters(SArray* pSttFileBlockIterArray, STFileSet* pFileSet);
diff --git a/source/libs/audit/inc/auditInt.h b/source/libs/audit/inc/auditInt.h
index e5fed2e473..7af48e3d41 100644
--- a/source/libs/audit/inc/auditInt.h
+++ b/source/libs/audit/inc/auditInt.h
@@ -23,6 +23,7 @@ typedef struct {
SAuditCfg cfg;
SArray *records;
TdThreadMutex lock;
+ int32_t dnodeId;
} SAudit;
#endif /*_TD_AUDIT_INT_H_*/
diff --git a/source/libs/audit/src/auditMain.c b/source/libs/audit/src/auditMain.c
index cdc4a964c7..52a4c1c523 100644
--- a/source/libs/audit/src/auditMain.c
+++ b/source/libs/audit/src/auditMain.c
@@ -36,6 +36,8 @@ int32_t auditInit(const SAuditCfg *pCfg) {
return taosThreadMutexInit(&tsAudit.lock, NULL);
}
+void auditSetDnodeId(int32_t dnodeId) { tsAudit.dnodeId = dnodeId; }
+
static FORCE_INLINE void auditDeleteRecord(SAuditRecord * record) {
if (record) {
taosMemoryFree(record->detail);
diff --git a/source/libs/executor/src/executil.c b/source/libs/executor/src/executil.c
index 05ed5a9d1e..01104dfe21 100644
--- a/source/libs/executor/src/executil.c
+++ b/source/libs/executor/src/executil.c
@@ -2833,11 +2833,13 @@ void printDataBlock(SSDataBlock* pBlock, const char* flag, const char* taskIdStr
qDebug("%s===stream===%s: Block is Empty. block type %d", taskIdStr, flag, pBlock->info.type);
return;
}
- char* pBuf = NULL;
- int32_t code = dumpBlockData(pBlock, flag, &pBuf, taskIdStr);
- if (code == 0) {
- qDebug("%s", pBuf);
- taosMemoryFree(pBuf);
+ if (qDebugFlag & DEBUG_DEBUG) {
+ char* pBuf = NULL;
+ int32_t code = dumpBlockData(pBlock, flag, &pBuf, taskIdStr);
+ if (code == 0) {
+ qDebug("%s", pBuf);
+ taosMemoryFree(pBuf);
+ }
}
}
diff --git a/source/libs/executor/src/executorInt.c b/source/libs/executor/src/executorInt.c
index 1804f0ce26..4fef157984 100644
--- a/source/libs/executor/src/executorInt.c
+++ b/source/libs/executor/src/executorInt.c
@@ -314,6 +314,8 @@ static int32_t doCreateConstantValColumnInfo(SInputColumnInfoData* pInput, SFunc
}
} else if (type == TSDB_DATA_TYPE_VARCHAR || type == TSDB_DATA_TYPE_GEOMETRY) {
char* tmp = taosMemoryMalloc(pFuncParam->param.nLen + VARSTR_HEADER_SIZE);
+ QUERY_CHECK_NULL(tmp, code, lino, _end, terrno);
+
STR_WITH_SIZE_TO_VARSTR(tmp, pFuncParam->param.pz, pFuncParam->param.nLen);
for (int32_t i = 0; i < numOfRows; ++i) {
code = colDataSetVal(pColInfo, i, tmp, false);
diff --git a/source/libs/executor/src/tlinearhash.c b/source/libs/executor/src/tlinearhash.c
index 69ce50e150..9654f74ab1 100644
--- a/source/libs/executor/src/tlinearhash.c
+++ b/source/libs/executor/src/tlinearhash.c
@@ -238,11 +238,14 @@ static int32_t doAddNewBucket(SLHashObj* pHashObj) {
}
SLHashBucket* pBucket = taosMemoryCalloc(1, sizeof(SLHashBucket));
+ if (pBucket == NULL) {
+ return terrno;
+ }
pHashObj->pBucket[pHashObj->numOfBuckets] = pBucket;
pBucket->pPageIdList = taosArrayInit(2, sizeof(int32_t));
if (pBucket->pPageIdList == NULL) {
- return TSDB_CODE_OUT_OF_MEMORY;
+ return terrno;
}
int32_t pageId = -1;
@@ -281,13 +284,14 @@ SLHashObj* tHashInit(int32_t inMemPages, int32_t pageSize, _hash_fn_t fn, int32_
int32_t code = createDiskbasedBuf(&pHashObj->pBuf, pageSize, inMemPages * pageSize, "", tsTempDir);
if (code != 0) {
- taosMemoryFree(pHashObj);
- terrno = code;
- return NULL;
+ goto _error;
}
// disable compress when flushing to disk
- setBufPageCompressOnDisk(pHashObj->pBuf, false);
+ code = setBufPageCompressOnDisk(pHashObj->pBuf, false);
+ if (code != 0) {
+ goto _error;
+ }
/**
* The number of bits in the hash value, which is used to decide the exact bucket where the object should be located
@@ -299,16 +303,32 @@ SLHashObj* tHashInit(int32_t inMemPages, int32_t pageSize, _hash_fn_t fn, int32_
pHashObj->numOfAlloc = 4; // initial allocated array list
pHashObj->pBucket = taosMemoryCalloc(pHashObj->numOfAlloc, POINTER_BYTES);
+ if (pHashObj->pBucket == NULL) {
+ code = terrno;
+ goto _error;
+ }
code = doAddNewBucket(pHashObj);
if (code != TSDB_CODE_SUCCESS) {
- destroyDiskbasedBuf(pHashObj->pBuf);
- taosMemoryFreeClear(pHashObj);
- terrno = code;
- return NULL;
+ goto _error;
}
return pHashObj;
+
+_error:
+ if (pHashObj->pBuf) {
+ destroyDiskbasedBuf(pHashObj->pBuf);
+ }
+ if (pHashObj->pBucket) {
+ for (int32_t i = 0; i < pHashObj->numOfBuckets; ++i) {
+ taosArrayDestroy(pHashObj->pBucket[i]->pPageIdList);
+ taosMemoryFree(pHashObj->pBucket[i]);
+ }
+ taosMemoryFree(pHashObj->pBucket);
+ }
+ taosMemoryFree(pHashObj);
+ terrno = code;
+ return NULL;
}
void* tHashCleanup(SLHashObj* pHashObj) {
diff --git a/source/libs/function/src/tudf.c b/source/libs/function/src/tudf.c
index c2cd8b8c3d..6145d9f03f 100644
--- a/source/libs/function/src/tudf.c
+++ b/source/libs/function/src/tudf.c
@@ -1091,7 +1091,7 @@ int32_t acquireUdfFuncHandle(char *udfName, UdfcFuncHandle *pHandle) {
taosArrayRemove(gUdfcProxy.udfStubs, stubIndex);
}
} else {
- fnInfo("udf handle expired for %s, will setup udf. move it to expired list", udfName);
+ fnDebug("udf handle expired for %s, will setup udf. move it to expired list", udfName);
if (taosArrayPush(gUdfcProxy.expiredUdfStubs, foundStub) == NULL) {
fnError("acquireUdfFuncHandle: failed to push udf stub to array");
} else {
@@ -1718,7 +1718,7 @@ int32_t udfcQueueUvTask(SClientUvTaskNode *uvTask) {
}
uv_sem_wait(&uvTask->taskSem);
- fnInfo("udfc uvTask finished. uvTask:%" PRId64 "-%d-%p", uvTask->seqNum, uvTask->type, uvTask);
+ fnDebug("udfc uvTask finished. uvTask:%" PRId64 "-%d-%p", uvTask->seqNum, uvTask->type, uvTask);
uv_sem_destroy(&uvTask->taskSem);
return 0;
diff --git a/source/libs/function/src/udfd.c b/source/libs/function/src/udfd.c
index cd54c03a68..72eaae9451 100644
--- a/source/libs/function/src/udfd.c
+++ b/source/libs/function/src/udfd.c
@@ -918,7 +918,8 @@ void udfdProcessTeardownRequest(SUvUdfWork *uvUdf, SUdfRequest *request) {
unloadUdf = true;
code = taosHashRemove(global.udfsHash, udf->name, strlen(udf->name));
if (code != 0) {
- fnError("udf name %s remove from hash failed", udf->name);
+ fnError("udf name %s remove from hash failed, err:%0x %s", udf->name, code, tstrerror(code));
+ uv_mutex_unlock(&global.udfsMutex);
goto _send;
}
}
diff --git a/source/libs/monitor/inc/monInt.h b/source/libs/monitor/inc/monInt.h
index 7fc718393b..e7d2552e57 100644
--- a/source/libs/monitor/inc/monInt.h
+++ b/source/libs/monitor/inc/monInt.h
@@ -47,6 +47,7 @@ typedef struct {
SMonQmInfo qmInfo;
SMonBmInfo bmInfo;
SHashObj *metrics;
+ int32_t dnodeId;
} SMonitor;
void monGenClusterInfoTable(SMonInfo *pMonitor);
diff --git a/source/libs/monitor/src/monFramework.c b/source/libs/monitor/src/monFramework.c
index c3b63787a6..76473ccbb1 100644
--- a/source/libs/monitor/src/monFramework.c
+++ b/source/libs/monitor/src/monFramework.c
@@ -767,9 +767,13 @@ void monSendPromReport() {
}
if (pCont != NULL) {
EHttpCompFlag flag = tsMonitor.cfg.comp ? HTTP_GZIP : HTTP_FLAT;
- if (taosSendHttpReport(tsMonitor.cfg.server, tsMonFwUri, tsMonitor.cfg.port, pCont, strlen(pCont), flag) != 0) {
+ char tmp[100] = {0};
+ (void)snprintf(tmp, sizeof(tmp), "0x%" PRIxLEAST64, tGenQid64(tsMonitor.dnodeId));
+ uDebug("report cont with QID:%s", tmp);
+ if (taosSendHttpReportWithQID(tsMonitor.cfg.server, tsMonFwUri, tsMonitor.cfg.port, pCont, strlen(pCont), flag,
+ tmp) != 0) {
uError("failed to send monitor msg");
- }else{
+ } else {
(void)taos_collector_registry_clear_batch(TAOS_COLLECTOR_REGISTRY_DEFAULT);
}
taosMemoryFreeClear(pCont);
diff --git a/source/libs/monitor/src/monMain.c b/source/libs/monitor/src/monMain.c
index 14d62fbf0e..4808ae0fdf 100644
--- a/source/libs/monitor/src/monMain.c
+++ b/source/libs/monitor/src/monMain.c
@@ -131,6 +131,8 @@ int32_t monInit(const SMonCfg *pCfg) {
return 0;
}
+void monSetDnodeId(int32_t dnodeId) { tsMonitor.dnodeId = dnodeId; }
+
void monInitVnode() {
if (!tsEnableMonitor || tsMonitorFqdn[0] == 0 || tsMonitorPort == 0) return;
if (tsInsertCounter == NULL) {
@@ -599,7 +601,11 @@ void monSendReport(SMonInfo *pMonitor) {
}
if (pCont != NULL) {
EHttpCompFlag flag = tsMonitor.cfg.comp ? HTTP_GZIP : HTTP_FLAT;
- if (taosSendHttpReport(tsMonitor.cfg.server, tsMonUri, tsMonitor.cfg.port, pCont, strlen(pCont), flag) != 0) {
+ char tmp[100] = {0};
+ (void)snprintf(tmp, 100, "0x%" PRIxLEAST64, tGenQid64(tsMonitor.dnodeId));
+ uDebug("report cont with QID:%s", tmp);
+ if (taosSendHttpReportWithQID(tsMonitor.cfg.server, tsMonUri, tsMonitor.cfg.port, pCont, strlen(pCont), flag,
+ tmp) != 0) {
uError("failed to send monitor msg");
}
taosMemoryFree(pCont);
@@ -617,8 +623,11 @@ void monSendReportBasic(SMonInfo *pMonitor) {
}
if (pCont != NULL) {
EHttpCompFlag flag = tsMonitor.cfg.comp ? HTTP_GZIP : HTTP_FLAT;
- if (taosSendHttpReport(tsMonitor.cfg.server, tsMonFwBasicUri, tsMonitor.cfg.port, pCont, strlen(pCont), flag) !=
- 0) {
+ char tmp[100] = {0};
+ (void)snprintf(tmp, sizeof(tmp), "0x%" PRIxLEAST64, tGenQid64(tsMonitor.dnodeId));
+ uDebug("report cont basic with QID:%s", tmp);
+ if (taosSendHttpReportWithQID(tsMonitor.cfg.server, tsMonFwBasicUri, tsMonitor.cfg.port, pCont, strlen(pCont), flag,
+ tmp) != 0) {
uError("failed to send monitor msg");
}
taosMemoryFree(pCont);
@@ -669,8 +678,12 @@ void monSendContent(char *pCont, const char *uri) {
}
}
if (pCont != NULL) {
+ char tmp[100] = {0};
+ (void)snprintf(tmp, sizeof(tmp), "0x%" PRIxLEAST64, tGenQid64(tsMonitor.dnodeId));
+ uInfoL("report client cont with QID:%s", tmp);
EHttpCompFlag flag = tsMonitor.cfg.comp ? HTTP_GZIP : HTTP_FLAT;
- if (taosSendHttpReport(tsMonitor.cfg.server, uri, tsMonitor.cfg.port, pCont, strlen(pCont), flag) != 0) {
+ if (taosSendHttpReportWithQID(tsMonitor.cfg.server, uri, tsMonitor.cfg.port, pCont, strlen(pCont), flag, tmp) !=
+ 0) {
uError("failed to send monitor msg");
}
}
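The monitor hunks above generate a per-report QID via tGenQid64(tsMonitor.dnodeId), log it, and hand it to taosSendHttpReportWithQID; the thttp.c hunks later in this patch carry that string through SHttpMsg and emit it as an extra X-QID request header. Assuming an uncompressed (HTTP_FLAT) report, the request head built by the patched taosBuildHttpHeader would look roughly like the following, where the host, URI and length are illustrative values rather than output captured from this patch:

  POST /monitor HTTP/1.1
  Host: 192.168.1.10
  X-QID: 0x1a2b3c4d5e6f7081
  Content-Type: application/json
  Content-Length: 256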
diff --git a/source/libs/stream/inc/streamBackendRocksdb.h b/source/libs/stream/inc/streamBackendRocksdb.h
index 3a5d72576b..567d9de949 100644
--- a/source/libs/stream/inc/streamBackendRocksdb.h
+++ b/source/libs/stream/inc/streamBackendRocksdb.h
@@ -135,7 +135,7 @@ typedef struct {
#define META_ON_S3_FORMATE "%s_%" PRId64 "\n%s_%" PRId64 "\n%s_%" PRId64 ""
bool streamBackendDataIsExist(const char* path, int64_t chkpId);
-void* streamBackendInit(const char* path, int64_t chkpId, int32_t vgId);
+int32_t streamBackendInit(const char* path, int64_t chkpId, int32_t vgId, SBackendWrapper** pBackend);
void streamBackendCleanup(void* arg);
void streamBackendHandleCleanup(void* arg);
int32_t streamBackendLoadCheckpointInfo(void* pMeta);
diff --git a/source/libs/stream/src/streamBackendRocksdb.c b/source/libs/stream/src/streamBackendRocksdb.c
index d9b6671d9b..57f75743ac 100644
--- a/source/libs/stream/src/streamBackendRocksdb.c
+++ b/source/libs/stream/src/streamBackendRocksdb.c
@@ -812,29 +812,32 @@ bool streamBackendDataIsExist(const char* path, int64_t chkpId) {
return exist;
}
-void* streamBackendInit(const char* streamPath, int64_t chkpId, int32_t vgId) {
+int32_t streamBackendInit(const char* streamPath, int64_t chkpId, int32_t vgId, SBackendWrapper** pBackend) {
char* backendPath = NULL;
- int32_t code = rebuildDirFromCheckpoint(streamPath, chkpId, &backendPath);
+ int32_t code = 0;
+ int32_t lino = 0;
+ char* err = NULL;
+ size_t nCf = 0;
+
+ *pBackend = NULL;
+
+ code = rebuildDirFromCheckpoint(streamPath, chkpId, &backendPath);
+ TSDB_CHECK_CODE(code, lino, _EXIT);
stDebug("start to init stream backend:%s, checkpointId:%" PRId64 " vgId:%d", backendPath, chkpId, vgId);
uint32_t dbMemLimit = nextPow2(tsMaxStreamBackendCache) << 20;
SBackendWrapper* pHandle = taosMemoryCalloc(1, sizeof(SBackendWrapper));
- if (pHandle == NULL) {
- goto _EXIT;
- }
+ TSDB_CHECK_NULL(pHandle, code, lino, _EXIT, terrno);
pHandle->list = tdListNew(sizeof(SCfComparator));
- if (pHandle->list == NULL) {
- goto _EXIT;
- }
+ TSDB_CHECK_NULL(pHandle->list, code, lino, _EXIT, terrno);
(void)taosThreadMutexInit(&pHandle->mutex, NULL);
(void)taosThreadMutexInit(&pHandle->cfMutex, NULL);
+
pHandle->cfInst = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_NO_LOCK);
- if (pHandle->cfInst == NULL) {
- goto _EXIT;
- }
+ TSDB_CHECK_NULL(pHandle->cfInst, code, lino, _EXIT, terrno);
rocksdb_env_t* env = rocksdb_create_default_env(); // rocksdb_envoptions_create();
@@ -863,9 +866,6 @@ void* streamBackendInit(const char* streamPath, int64_t chkpId, int32_t vgId) {
NULL, destroyCompactFilteFactory, compactFilteFactoryCreateFilter, compactFilteFactoryName);
rocksdb_options_set_compaction_filter_factory(pHandle->dbOpt, pHandle->filterFactory);
- char* err = NULL;
- size_t nCf = 0;
-
char** cfs = rocksdb_list_column_families(opts, backendPath, &nCf, &err);
if (nCf == 0 || nCf == 1 || err != NULL) {
taosMemoryFreeClear(err);
@@ -894,7 +894,9 @@ void* streamBackendInit(const char* streamPath, int64_t chkpId, int32_t vgId) {
stDebug("init stream backend at %s, backend:%p, vgId:%d", backendPath, pHandle, vgId);
taosMemoryFreeClear(backendPath);
- return (void*)pHandle;
+ *pBackend = pHandle;
+ return code;
+
_EXIT:
rocksdb_options_destroy(opts);
rocksdb_cache_destroy(cache);
@@ -904,9 +906,9 @@ _EXIT:
taosHashCleanup(pHandle->cfInst);
pHandle->list = tdListFree(pHandle->list);
taosMemoryFree(pHandle);
- stDebug("failed to init stream backend at %s", backendPath);
+ stDebug("failed to init stream backend at %s, vgId:%d line:%d code:%s", backendPath, vgId, lino, tstrerror(code));
taosMemoryFree(backendPath);
- return NULL;
+ return code;
}
void streamBackendCleanup(void* arg) {
SBackendWrapper* pHandle = (SBackendWrapper*)arg;
diff --git a/source/libs/stream/src/streamMeta.c b/source/libs/stream/src/streamMeta.c
index 67fc5af013..cf519c09a5 100644
--- a/source/libs/stream/src/streamMeta.c
+++ b/source/libs/stream/src/streamMeta.c
@@ -224,8 +224,10 @@ int32_t streamMetaCheckBackendCompatible(SStreamMeta* pMeta) {
}
int32_t streamMetaCvtDbFormat(SStreamMeta* pMeta) {
- int32_t code = 0;
- int64_t chkpId = streamMetaGetLatestCheckpointId(pMeta);
+ int32_t code = 0;
+ SBackendWrapper* pBackend = NULL;
+ int64_t chkpId = streamMetaGetLatestCheckpointId(pMeta);
+
terrno = 0;
bool exist = streamBackendDataIsExist(pMeta->path, chkpId);
if (exist == false) {
@@ -233,7 +235,10 @@ int32_t streamMetaCvtDbFormat(SStreamMeta* pMeta) {
return code;
}
- SBackendWrapper* pBackend = streamBackendInit(pMeta->path, chkpId, pMeta->vgId);
+ code = streamBackendInit(pMeta->path, chkpId, pMeta->vgId, &pBackend);
+ if (code) {
+ return code;
+ }
void* pIter = taosHashIterate(pBackend->cfInst, NULL);
while (pIter) {
@@ -252,9 +257,13 @@ _EXIT:
if (code == 0) {
char* state = taosMemoryCalloc(1, strlen(pMeta->path) + 32);
- sprintf(state, "%s%s%s", pMeta->path, TD_DIRSEP, "state");
- taosRemoveDir(state);
- taosMemoryFree(state);
+ if (state != NULL) {
+ sprintf(state, "%s%s%s", pMeta->path, TD_DIRSEP, "state");
+ taosRemoveDir(state);
+ taosMemoryFree(state);
+ } else {
+ stError("vgId:%d, failed to remove file dir:%s, since:%s", pMeta->vgId, pMeta->path, tstrerror(code));
+ }
}
return code;
@@ -444,6 +453,8 @@ int32_t streamMetaOpen(const char* path, void* ahandle, FTaskBuild buildTaskFn,
TSDB_CHECK_CODE(code, lino, _err);
int64_t* pRid = taosMemoryMalloc(sizeof(int64_t));
+ TSDB_CHECK_NULL(pRid, code, lino, _err, terrno);
+
memcpy(pRid, &pMeta->rid, sizeof(pMeta->rid));
code = metaRefMgtAdd(pMeta->vgId, pRid);
TSDB_CHECK_CODE(code, lino, _err);
diff --git a/source/libs/stream/src/streamSessionState.c b/source/libs/stream/src/streamSessionState.c
index 5eb55467b1..a192f03947 100644
--- a/source/libs/stream/src/streamSessionState.c
+++ b/source/libs/stream/src/streamSessionState.c
@@ -81,6 +81,9 @@ bool inSessionWindow(SSessionKey* pKey, TSKEY ts, int64_t gap) {
SStreamStateCur* createSessionStateCursor(SStreamFileState* pFileState) {
SStreamStateCur* pCur = createStreamStateCursor();
+ if (pCur == NULL) {
+ return NULL;
+ }
pCur->pStreamFileState = pFileState;
return pCur;
}
@@ -533,6 +536,9 @@ static SStreamStateCur* seekKeyCurrentPrev_buff(SStreamFileState* pFileState, co
if (index >= 0) {
pCur = createSessionStateCursor(pFileState);
+ if (pCur == NULL) {
+ return NULL;
+ }
pCur->buffIndex = index;
if (pIndex) {
*pIndex = index;
@@ -634,6 +640,9 @@ SStreamStateCur* countWinStateSeekKeyPrev(SStreamFileState* pFileState, const SS
pBuffCur->buffIndex = 0;
} else if (taosArrayGetSize(pWinStates) > 0) {
pBuffCur = createSessionStateCursor(pFileState);
+ if (pBuffCur == NULL) {
+ return NULL;
+ }
pBuffCur->buffIndex = 0;
}
diff --git a/source/libs/stream/src/streamSnapshot.c b/source/libs/stream/src/streamSnapshot.c
index 511445858a..2742798a04 100644
--- a/source/libs/stream/src/streamSnapshot.c
+++ b/source/libs/stream/src/streamSnapshot.c
@@ -130,6 +130,11 @@ int32_t streamGetFileSize(char* path, char* name, int64_t* sz) {
int32_t ret = 0;
char* fullname = taosMemoryCalloc(1, strlen(path) + 32);
+ if (fullname == NULL) {
+ stError("failed to get file:%s size, code: out of memory", name);
+ return terrno;
+ }
+
sprintf(fullname, "%s%s%s", path, TD_DIRSEP, name);
ret = taosStatFile(fullname, sz, NULL, NULL);
@@ -555,6 +560,11 @@ _NEXT:
(int32_t)taosArrayGetSize(pHandle->pDbSnapSet), pHandle->currIdx);
uint8_t* buf = taosMemoryCalloc(1, sizeof(SStreamSnapBlockHdr) + kBlockSize);
+ if (buf == NULL) {
+ stError("%s failed to prepare the block header, code:Out of memory", item->name);
+ return terrno;
+ }
+
int64_t nread = taosPReadFile(pSnapFile->fd, buf + sizeof(SStreamSnapBlockHdr), kBlockSize, pSnapFile->offset);
if (nread < 0) {
taosMemoryFree(buf);
@@ -764,6 +774,11 @@ int32_t streamSnapWrite(SStreamSnapWriter* pWriter, uint8_t* pData, uint32_t nDa
sprintf(idstr, "0x%" PRIx64 "-0x%x", snapInfo.streamId, (int32_t)(snapInfo.taskId));
char* path = taosMemoryCalloc(1, strlen(pHandle->metaPath) + 256);
+ if (path == NULL) {
+ stError("s-task:0x%x failed to prepare meta header buffer, code:Out of memory", (int32_t) snapInfo.taskId);
+ return terrno;
+ }
+
sprintf(path, "%s%s%s%s%s%s%s%" PRId64 "", pHandle->metaPath, TD_DIRSEP, idstr, TD_DIRSEP, "checkpoints", TD_DIRSEP,
"checkpoint", snapInfo.chkpId);
if (!taosIsDir(path)) {
@@ -778,11 +793,19 @@ int32_t streamSnapWrite(SStreamSnapWriter* pWriter, uint8_t* pData, uint32_t nDa
pDbSnapFile->path = path;
pDbSnapFile->snapInfo = snapInfo;
pDbSnapFile->pFileList = taosArrayInit(64, sizeof(SBackendFileItem));
+ if (pDbSnapFile->pFileList == NULL) {
+ return terrno;
+ }
+
pDbSnapFile->currFileIdx = 0;
pDbSnapFile->offset = 0;
SBackendFileItem item = {0};
item.name = taosStrdup((char*)ROCKSDB_CURRENT);
+ if (item.name == NULL) {
+ return terrno;
+ }
+
item.type = ROCKSDB_CURRENT_TYPE;
void* p = taosArrayPush(pDbSnapFile->pFileList, &item);
diff --git a/source/libs/stream/src/streamState.c b/source/libs/stream/src/streamState.c
index 0e2d31cc8f..e1edebc861 100644
--- a/source/libs/stream/src/streamState.c
+++ b/source/libs/stream/src/streamState.c
@@ -530,6 +530,9 @@ void streamStateCopyBackend(SStreamState* src, SStreamState* dst) {
}
SStreamStateCur* createStreamStateCursor() {
SStreamStateCur* pCur = taosMemoryCalloc(1, sizeof(SStreamStateCur));
+ if (pCur == NULL) {
+ return NULL;
+ }
pCur->buffIndex = -1;
return pCur;
}
diff --git a/source/libs/stream/src/streamTask.c b/source/libs/stream/src/streamTask.c
index 365710f8a7..3319c7c74f 100644
--- a/source/libs/stream/src/streamTask.c
+++ b/source/libs/stream/src/streamTask.c
@@ -133,6 +133,11 @@ int32_t tNewStreamTask(int64_t streamId, int8_t taskLevel, SEpSet* pEpset, bool
sprintf(buf, "0x%" PRIx64 "-0x%x", pTask->id.streamId, pTask->id.taskId);
pTask->id.idStr = taosStrdup(buf);
+ if (pTask->id.idStr == NULL) {
+ stError("s-task:0x%x failed to build task id, code: out of memory", pTask->id.taskId);
+ return terrno;
+ }
+
pTask->status.schedStatus = TASK_SCHED_STATUS__INACTIVE;
pTask->status.taskStatus = fillHistory ? TASK_STATUS__SCAN_HISTORY : TASK_STATUS__READY;
pTask->inputq.status = TASK_INPUT_STATUS__NORMAL;
diff --git a/source/libs/stream/src/streamUpdate.c b/source/libs/stream/src/streamUpdate.c
index cc80e27467..d5adbcee77 100644
--- a/source/libs/stream/src/streamUpdate.c
+++ b/source/libs/stream/src/streamUpdate.c
@@ -631,7 +631,11 @@ int32_t updateInfoDeserialize(void* buf, int32_t bufLen, SUpdateInfo* pInfo) {
if (tDecodeI8(&decoder, &pInfo->pkColType) < 0) return -1;
pInfo->pKeyBuff = taosMemoryCalloc(1, sizeof(TSKEY) + sizeof(int64_t) + pInfo->pkColLen);
+ QUERY_CHECK_NULL(pInfo->pKeyBuff, code, lino, _error, terrno);
+
pInfo->pValueBuff = taosMemoryCalloc(1, sizeof(TSKEY) + pInfo->pkColLen);
+ QUERY_CHECK_NULL(pInfo->pValueBuff, code, lino, _error, terrno);
+
if (pInfo->pkColLen != 0) {
pInfo->comparePkRowFn = compareKeyTsAndPk;
pInfo->comparePkCol = getKeyComparFunc(pInfo->pkColType, TSDB_ORDER_ASC);
diff --git a/source/libs/stream/src/tstreamFileState.c b/source/libs/stream/src/tstreamFileState.c
index abb796b0b7..5626aa29da 100644
--- a/source/libs/stream/src/tstreamFileState.c
+++ b/source/libs/stream/src/tstreamFileState.c
@@ -96,6 +96,10 @@ int32_t intervalFileGetFn(SStreamFileState* pFileState, void* pKey, void* data,
void* intervalCreateStateKey(SRowBuffPos* pPos, int64_t num) {
SStateKey* pStateKey = taosMemoryCalloc(1, sizeof(SStateKey));
+ if (pStateKey == NULL) {
+ qError("%s failed at line %d since %s", __func__, __LINE__, tstrerror(terrno));
+ return NULL;
+ }
SWinKey* pWinKey = pPos->pKey;
pStateKey->key = *pWinKey;
pStateKey->opNum = num;
@@ -112,6 +116,10 @@ int32_t sessionFileGetFn(SStreamFileState* pFileState, void* pKey, void* data, i
void* sessionCreateStateKey(SRowBuffPos* pPos, int64_t num) {
SStateSessionKey* pStateKey = taosMemoryCalloc(1, sizeof(SStateSessionKey));
+ if (pStateKey == NULL) {
+ qError("%s failed at line %d since %s", __func__, __LINE__, tstrerror(terrno));
+ return NULL;
+ }
SSessionKey* pWinKey = pPos->pKey;
pStateKey->key = *pWinKey;
pStateKey->opNum = num;
@@ -120,11 +128,16 @@ void* sessionCreateStateKey(SRowBuffPos* pPos, int64_t num) {
static void streamFileStateDecode(TSKEY* pKey, void* pBuff, int32_t len) { pBuff = taosDecodeFixedI64(pBuff, pKey); }
-static void streamFileStateEncode(TSKEY* pKey, void** pVal, int32_t* pLen) {
+static int32_t streamFileStateEncode(TSKEY* pKey, void** pVal, int32_t* pLen) {
*pLen = sizeof(TSKEY);
(*pVal) = taosMemoryCalloc(1, *pLen);
+ if ((*pVal) == NULL) {
+ qError("%s failed at line %d since %s", __func__, __LINE__, tstrerror(terrno));
+ return terrno;
+ }
void* buff = *pVal;
int32_t tmp = taosEncodeFixedI64(&buff, *pKey);
+ return TSDB_CODE_SUCCESS;
}
int32_t streamFileStateInit(int64_t memSize, uint32_t keySize, uint32_t rowSize, uint32_t selectRowSize, GetTsFun fp,
@@ -177,6 +190,7 @@ int32_t streamFileStateInit(int64_t memSize, uint32_t keySize, uint32_t rowSize,
pFileState->stateFunctionGetFn = getSessionRowBuff;
}
QUERY_CHECK_NULL(pFileState->rowStateBuff, code, lino, _error, terrno);
+ QUERY_CHECK_NULL(pFileState->cfName, code, lino, _error, terrno);
pFileState->keyLen = keySize;
pFileState->rowSize = rowSize;
@@ -480,11 +494,10 @@ SRowBuffPos* getNewRowPos(SStreamFileState* pFileState) {
if (pFileState->curRowCount < pFileState->maxRowCount) {
pBuff = taosMemoryCalloc(1, pFileState->rowSize);
- if (pBuff) {
- pPos->pRowBuff = pBuff;
- pFileState->curRowCount++;
- goto _end;
- }
+ QUERY_CHECK_NULL(pBuff, code, lino, _error, terrno);
+ pPos->pRowBuff = pBuff;
+ pFileState->curRowCount++;
+ goto _end;
}
code = clearRowBuff(pFileState);
@@ -712,6 +725,8 @@ void flushSnapshot(SStreamFileState* pFileState, SStreamSnapshot* pSnapshot, boo
}
void* pSKey = pFileState->stateBuffCreateStateKeyFn(pPos, ((SStreamState*)pFileState->pFileStore)->number);
+ QUERY_CHECK_NULL(pSKey, code, lino, _end, terrno);
+
code = streamStatePutBatchOptimize(pFileState->pFileStore, idx, batch, pSKey, pPos->pRowBuff, pFileState->rowSize,
0, buf);
taosMemoryFreeClear(pSKey);
@@ -738,7 +753,9 @@ void flushSnapshot(SStreamFileState* pFileState, SStreamSnapshot* pSnapshot, boo
if (flushState) {
void* valBuf = NULL;
int32_t len = 0;
- streamFileStateEncode(&pFileState->flushMark, &valBuf, &len);
+ code = streamFileStateEncode(&pFileState->flushMark, &valBuf, &len);
+ QUERY_CHECK_CODE(code, lino, _end);
+
qDebug("===stream===flushMark write:%" PRId64, pFileState->flushMark);
code = streamStatePutBatch(pFileState->pFileStore, "default", batch, STREAM_STATE_INFO_NAME, valBuf, len, 0);
taosMemoryFree(valBuf);
diff --git a/source/libs/stream/test/backendTest.cpp b/source/libs/stream/test/backendTest.cpp
index 2b21510e45..e7e7149882 100644
--- a/source/libs/stream/test/backendTest.cpp
+++ b/source/libs/stream/test/backendTest.cpp
@@ -478,11 +478,13 @@ TEST_F(BackendEnv, oldBackendInit) {
ASSERT(code == 0);
{
- SBackendWrapper *p = (SBackendWrapper *)streamBackendInit(path, 10, 10);
+ SBackendWrapper *p = NULL;
+ int32_t code = streamBackendInit(path, 10, 10, &p);
streamBackendCleanup((void *)p);
}
{
- SBackendWrapper *p = (SBackendWrapper *)streamBackendInit(path, 10, 10);
+ SBackendWrapper *p = NULL;
+ int32_t code = streamBackendInit(path, 10, 10, &p);
streamBackendCleanup((void *)p);
}
diff --git a/source/libs/stream/test/tstreamUpdateTest.cpp b/source/libs/stream/test/tstreamUpdateTest.cpp
index 4360fc7d54..f3992aaa13 100644
--- a/source/libs/stream/test/tstreamUpdateTest.cpp
+++ b/source/libs/stream/test/tstreamUpdateTest.cpp
@@ -12,7 +12,7 @@ class StreamStateEnv : public ::testing::Test {
protected:
virtual void SetUp() {
streamMetaInit();
- backend = streamBackendInit(path, 0, 0);
+ int32_t code = streamBackendInit(path, 0, 0, (SBackendWrapper**)&backend);
}
virtual void TearDown() { streamMetaCleanup(); }
diff --git a/source/libs/transport/src/thttp.c b/source/libs/transport/src/thttp.c
index f62d728511..d4c973926e 100644
--- a/source/libs/transport/src/thttp.c
+++ b/source/libs/transport/src/thttp.c
@@ -53,6 +53,7 @@ typedef struct SHttpMsg {
int8_t quit;
int64_t chanId;
int64_t seq;
+ char* qid;
} SHttpMsg;
typedef struct SHttpClient {
@@ -81,7 +82,7 @@ static void httpHandleQuit(SHttpMsg* msg);
static int32_t httpSendQuit(SHttpModule* http, int64_t chanId);
static int32_t httpCreateMsg(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
- EHttpCompFlag flag, int64_t chanId, SHttpMsg** httpMsg);
+ EHttpCompFlag flag, int64_t chanId, const char* qid, SHttpMsg** httpMsg);
static void httpDestroyMsg(SHttpMsg* msg);
static bool httpFailFastShoudIgnoreMsg(SHashObj* pTable, char* server, int16_t port);
@@ -91,31 +92,53 @@ static int32_t taosSendHttpReportImpl(const char* server, const char* uri, uint1
static void httpModuleDestroy(SHttpModule* http);
static int32_t taosSendHttpReportImplByChan(const char* server, const char* uri, uint16_t port, char* pCont,
- int32_t contLen, EHttpCompFlag flag, int64_t chanId);
+ int32_t contLen, EHttpCompFlag flag, int64_t chanId, const char* qid);
-static int32_t taosBuildHttpHeader(const char* server, const char* uri, int32_t contLen, char* pHead, int32_t headLen,
+static int32_t taosBuildHttpHeader(const char* server, const char* uri, int32_t contLen, const char* qid, char* pHead,
+ int32_t headLen,
EHttpCompFlag flag) {
int32_t code = 0;
int32_t len = 0;
if (flag == HTTP_FLAT) {
- len = snprintf(pHead, headLen,
- "POST %s HTTP/1.1\n"
- "Host: %s\n"
- "Content-Type: application/json\n"
- "Content-Length: %d\n\n",
- uri, server, contLen);
+ if (qid == NULL) {
+ len = snprintf(pHead, headLen,
+ "POST %s HTTP/1.1\n"
+ "Host: %s\n"
+ "Content-Type: application/json\n"
+ "Content-Length: %d\n\n",
+ uri, server, contLen);
+ } else {
+ len = snprintf(pHead, headLen,
+ "POST %s HTTP/1.1\n"
+ "Host: %s\n"
+ "X-QID: %s\n"
+ "Content-Type: application/json\n"
+ "Content-Length: %d\n\n",
+ uri, server, qid, contLen);
+ }
if (len < 0 || len >= headLen) {
code = TSDB_CODE_OUT_OF_RANGE;
}
} else if (flag == HTTP_GZIP) {
- len = snprintf(pHead, headLen,
- "POST %s HTTP/1.1\n"
- "Host: %s\n"
- "Content-Type: application/json\n"
- "Content-Encoding: gzip\n"
- "Content-Length: %d\n\n",
- uri, server, contLen);
+ if (qid == NULL) {
+ len = snprintf(pHead, headLen,
+ "POST %s HTTP/1.1\n"
+ "Host: %s\n"
+ "Content-Type: application/json\n"
+ "Content-Encoding: gzip\n"
+ "Content-Length: %d\n\n",
+ uri, server, contLen);
+ } else {
+ len = snprintf(pHead, headLen,
+ "POST %s HTTP/1.1\n"
+ "Host: %s\n"
+ "X-QID: %s\n"
+ "Content-Type: application/json\n"
+ "Content-Encoding: gzip\n"
+ "Content-Length: %d\n\n",
+ uri, server, qid, contLen);
+ }
if (len < 0 || len >= headLen) {
code = TSDB_CODE_OUT_OF_RANGE;
}
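For reference, the new `qid` branch only adds an `X-QID` line to the request head; nothing else changes. A standalone sketch that mirrors the HTTP_FLAT format string above (the URI, host, qid value, and content length are illustrative, not taken from the patch):

```c
#include <stdio.h>

int main(void) {
  char head[256];
  // Same format string as the qid != NULL branch of taosBuildHttpHeader().
  int  len = snprintf(head, sizeof(head),
                      "POST %s HTTP/1.1\n"
                      "Host: %s\n"
                      "X-QID: %s\n"
                      "Content-Type: application/json\n"
                      "Content-Length: %d\n\n",
                      "/report", "127.0.0.1", "0x0300000001234500", 128);
  if (len > 0 && len < (int)sizeof(head)) fputs(head, stdout);
  return 0;
}
```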
@@ -218,7 +241,7 @@ static void* httpThread(void* arg) {
}
static int32_t httpCreateMsg(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
- EHttpCompFlag flag, int64_t chanId, SHttpMsg** httpMsg) {
+ EHttpCompFlag flag, int64_t chanId, const char* qid, SHttpMsg** httpMsg) {
int64_t seqNum = atomic_fetch_add_64(&httpSeqNum, 1);
if (server == NULL || uri == NULL) {
tError("http-report failed to report to invalid addr, chanId:%" PRId64 ", seq:%" PRId64 "", chanId, seqNum);
@@ -243,6 +266,10 @@ static int32_t httpCreateMsg(const char* server, const char* uri, uint16_t port,
msg->server = taosStrdup(server);
msg->uri = taosStrdup(uri);
msg->cont = taosMemoryMalloc(contLen);
+ if (qid != NULL)
+ msg->qid = taosStrdup(qid);
+ else
+ msg->qid = NULL;
if (msg->server == NULL || msg->uri == NULL || msg->cont == NULL) {
httpDestroyMsg(msg);
*httpMsg = NULL;
@@ -263,6 +290,7 @@ static void httpDestroyMsg(SHttpMsg* msg) {
taosMemoryFree(msg->server);
taosMemoryFree(msg->uri);
taosMemoryFree(msg->cont);
+ if (msg->qid != NULL) taosMemoryFree(msg->qid);
taosMemoryFree(msg);
}
static void httpDestroyMsgWrapper(void* cont, void* param) {
@@ -561,7 +589,7 @@ static void httpHandleReq(SHttpMsg* msg) {
goto END;
}
- int32_t headLen = taosBuildHttpHeader(msg->server, msg->uri, msg->len, header, cap, msg->flag);
+ int32_t headLen = taosBuildHttpHeader(msg->server, msg->uri, msg->len, msg->qid, header, cap, msg->flag);
if (headLen < 0) {
code = headLen;
goto END;
@@ -590,6 +618,7 @@ static void httpHandleReq(SHttpMsg* msg) {
cli->chanId = chanId;
cli->addr = msg->server;
cli->port = msg->port;
+ if (msg->qid != NULL) taosMemoryFree(msg->qid);
taosMemoryFree(msg->uri);
taosMemoryFree(msg);
@@ -675,10 +704,10 @@ void httpModuleDestroy2(SHttpModule* http) {
}
static int32_t taosSendHttpReportImplByChan(const char* server, const char* uri, uint16_t port, char* pCont,
- int32_t contLen, EHttpCompFlag flag, int64_t chanId) {
+ int32_t contLen, EHttpCompFlag flag, int64_t chanId, const char* qid) {
SHttpModule* load = NULL;
SHttpMsg* msg = NULL;
- int32_t code = httpCreateMsg(server, uri, port, pCont, contLen, flag, chanId, &msg);
+ int32_t code = httpCreateMsg(server, uri, port, pCont, contLen, flag, chanId, qid, &msg);
if (code != 0) {
goto _ERROR;
}
@@ -714,14 +743,19 @@ _ERROR:
}
int32_t taosSendHttpReportByChan(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
- EHttpCompFlag flag, int64_t chanId) {
- return taosSendHttpReportImplByChan(server, uri, port, pCont, contLen, flag, chanId);
+ EHttpCompFlag flag, int64_t chanId, const char* qid) {
+ return taosSendHttpReportImplByChan(server, uri, port, pCont, contLen, flag, chanId, qid);
}
int32_t taosSendHttpReport(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
EHttpCompFlag flag) {
+ return taosSendHttpReportWithQID(server, uri, port, pCont, contLen, flag, NULL);
+}
+
+int32_t taosSendHttpReportWithQID(const char* server, const char* uri, uint16_t port, char* pCont, int32_t contLen,
+ EHttpCompFlag flag, const char* qid) {
(void)taosThreadOnce(&transHttpInit, transHttpEnvInit);
- return taosSendHttpReportImplByChan(server, uri, port, pCont, contLen, flag, httpDefaultChanId);
+ return taosSendHttpReportImplByChan(server, uri, port, pCont, contLen, flag, httpDefaultChanId, qid);
}
static void transHttpDestroyHandle(void* handle) { taosMemoryFree(handle); }
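Usage note: existing callers of `taosSendHttpReport` are unaffected, since it now simply forwards a `NULL` qid; callers that want the report tagged switch to `taosSendHttpReportWithQID`. A hypothetical call, where the function signature and `HTTP_FLAT` come from this patch and all argument values are placeholders:

```c
// Hypothetical caller sketch; server address, port, payload and qid are made up.
char    payload[] = "{\"msg\":\"hello\"}";
int32_t code = taosSendHttpReportWithQID("127.0.0.1", "/report", 80, payload,
                                         (int32_t)strlen(payload), HTTP_FLAT,
                                         "0x0300000001234500");
```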
diff --git a/source/util/src/tbloomfilter.c b/source/util/src/tbloomfilter.c
index 841657e628..0889017cde 100644
--- a/source/util/src/tbloomfilter.c
+++ b/source/util/src/tbloomfilter.c
@@ -177,6 +177,8 @@ int32_t tBloomFilterDecode(SDecoder* pDecoder, SBloomFilter** ppBF) {
QUERY_CHECK_CODE(code, lino, _error);
}
pBF->buffer = taosMemoryCalloc(pBF->numUnits, sizeof(uint64_t));
+ QUERY_CHECK_NULL(pBF->buffer, code, lino, _error, terrno);
+
for (int32_t i = 0; i < pBF->numUnits; i++) {
uint64_t* pUnits = (uint64_t*)pBF->buffer;
if (tDecodeU64(pDecoder, pUnits + i) < 0) {
diff --git a/source/util/src/tpagedbuf.c b/source/util/src/tpagedbuf.c
index ffb11b156d..46f2fdc647 100644
--- a/source/util/src/tpagedbuf.c
+++ b/source/util/src/tpagedbuf.c
@@ -362,6 +362,9 @@ int32_t createDiskbasedBuf(SDiskbasedBuf** pBuf, int32_t pagesize, int32_t inMem
pPBuf->allocateId = -1;
pPBuf->pFile = NULL;
pPBuf->id = taosStrdup(id);
+ if (id != NULL && pPBuf->id == NULL) {
+ goto _error;
+ }
pPBuf->fileSize = 0;
pPBuf->pFree = taosArrayInit(4, sizeof(SFreeListItem));
pPBuf->freePgList = tdListNew(POINTER_BYTES);
@@ -689,11 +692,15 @@ void setBufPageDirty(void* pPage, bool dirty) {
ppi->dirty = dirty;
}
-void setBufPageCompressOnDisk(SDiskbasedBuf* pBuf, bool comp) {
+int32_t setBufPageCompressOnDisk(SDiskbasedBuf* pBuf, bool comp) {
pBuf->comp = comp;
if (comp && (pBuf->assistBuf == NULL)) {
pBuf->assistBuf = taosMemoryMalloc(pBuf->pageSize + 2); // EXTRA BYTES
+    if (pBuf->assistBuf == NULL) {
+      return terrno;
+    }
}
+ return TSDB_CODE_SUCCESS;
}
int32_t dBufSetBufPageRecycled(SDiskbasedBuf* pBuf, void* pPage) {
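Since `setBufPageCompressOnDisk` now returns an error code instead of `void`, call sites have to start checking it. A minimal caller-side sketch (the error handling shown is illustrative):

```c
// Sketch: handle assist-buffer allocation failure instead of ignoring it.
int32_t code = setBufPageCompressOnDisk(pBuf, true);
if (code != TSDB_CODE_SUCCESS) {
  // propagate the error, e.g. jump to the caller's _error label
}
```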
diff --git a/source/util/src/tuuid.c b/source/util/src/tuuid.c
index 9d749cc002..4472189d37 100644
--- a/source/util/src/tuuid.c
+++ b/source/util/src/tuuid.c
@@ -65,3 +65,18 @@ int64_t tGenIdPI64(void) {
return id;
}
+
+int64_t tGenQid64(int8_t dnodeId) {
+ int64_t id = dnodeId;
+
+ while (true) {
+ int32_t val = atomic_add_fetch_32(&tUUIDSerialNo, 1);
+
+ id = (id << 56) | (val & 0xFFFFF) << 8;
+ if (id) {
+ break;
+ }
+ }
+
+ return id;
+}
\ No newline at end of file
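Layout note on `tGenQid64`: the dnode id occupies the top 8 bits, a 20-bit slice of the serial counter sits in bits 8..27, and the low 8 bits remain zero. A standalone sketch that reproduces and decodes the expression above (the dnode id and counter value are made up):

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  int8_t  dnodeId = 3;
  int32_t val = 0x12345;  // stand-in for the atomic serial counter

  // Same bit arithmetic as tGenQid64(), written with explicit 64-bit casts.
  int64_t id = ((int64_t)dnodeId << 56) | ((int64_t)(val & 0xFFFFF) << 8);

  printf("qid     : 0x%016" PRIx64 "\n", id);
  printf("dnodeId : %d\n", (int32_t)((uint64_t)id >> 56));
  printf("serial  : 0x%" PRIx32 "\n", (uint32_t)((id >> 8) & 0xFFFFF));
  return 0;
}
```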
diff --git a/tests/army/query/sys/test_no_coredump.py b/tests/army/query/sys/test_no_coredump.py
new file mode 100644
index 0000000000..57633dfb37
--- /dev/null
+++ b/tests/army/query/sys/test_no_coredump.py
@@ -0,0 +1,22 @@
+# -*- coding: utf-8 -*-
+
+from frame.log import *
+from frame.cases import *
+from frame.sql import *
+from frame.caseBase import *
+from frame import *
+from frame.autogen import *
+
+"""
+ TS-5437: https://jira.taosdata.com:18080/browse/TS-5437
+ No coredump occurs when querying metadata.
+"""
+
+class TDTestCase(TBase):
+ def run(self):
+ tdSql.prepare("test")
+ tdSql.execute("create table t1(ts timestamp, a int, b int, c int, d double);")
+ tdSql.query("select * from information_schema.ins_tables where create_time < now;")
+
+tdCases.addLinux(__file__, TDTestCase())
+tdCases.addWindows(__file__, TDTestCase())