Merge branch '3.0' of https://github.com/taosdata/TDengine into fix/TD-31270
commit 56a14915d7
@@ -4,9 +4,9 @@ sidebar_label: 文档首页
 slug: /
 ---
 
-TDengine is an [open-source](https://www.taosdata.com/tdengine/open_source_time-series_database), [high-performance](https://www.taosdata.com/fast), [cloud-native](https://www.taosdata.com/tdengine/cloud_native_time-series_database) <a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">time-series database</a> (<a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>) designed and optimized for IoT, connected vehicles, Industrial IoT, finance, and IT operations scenarios. It also comes with built-in system features such as caching, stream processing, and data subscription, which greatly reduce system design complexity and lower development and operating costs, making it a streamlined time-series data processing platform. This document is the TDengine user manual. It mainly introduces TDengine's basic concepts, installation, usage, features, development interfaces, operations and maintenance, and kernel design, and is primarily intended for architects, development engineers, and system administrators.
+TDengine is an [open-source](https://www.taosdata.com/tdengine/open_source_time-series_database), [high-performance](https://www.taosdata.com/fast), [cloud-native](https://www.taosdata.com/tdengine/cloud_native_time-series_database) <a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">time-series database</a> (<a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>) designed and optimized for IoT, connected vehicles, Industrial IoT, finance, and IT operations scenarios. It also comes with built-in system features such as caching, stream processing, and data subscription, which greatly reduce system design complexity and lower development and operating costs, making it a streamlined time-series data processing platform. This document is the TDengine user manual. It mainly introduces TDengine's basic concepts, installation, usage, features, development interfaces, operations and maintenance, and kernel design, and is primarily intended for architects, development engineers, and system administrators. If you are not yet familiar with the basic concepts of time-series data, its value, and the business value it can deliver, please refer to [Time-Series Data Basics](./concept).
 
-TDengine takes full advantage of the characteristics of time-series data, proposes the concepts of "one table per data collection point" and "supertable", and designs an innovative storage engine, greatly improving the efficiency of data writing, querying, and storage. To understand and use TDengine correctly, please make sure to read the [Basic Concepts](./concept) chapter.
+TDengine takes full advantage of the characteristics of time-series data, proposes the concepts of "one table per data collection point" and "supertable", and designs an innovative storage engine, greatly improving the efficiency of data writing, querying, and storage. To understand and use TDengine correctly, please make sure to read the [Getting Started](./basic) chapter.
 
 If you are a development engineer, please make sure to read the [Developer Guide](./develop) chapter, which covers database connections, data modeling, data ingestion, queries, stream processing, caching, data subscription, user-defined functions, and more in detail, with sample code in a variety of programming languages. In most cases, you can simply copy and paste the sample code, adapt it slightly for your application, and get it running.
@@ -40,7 +40,7 @@ TDengine 经过特别优化,以适应时间序列数据的独特需求,引
 9. Programming connectors: TDengine provides connectors for different languages, including C/C++, Java, Go, Node.js, Rust, Python, C#, R, PHP, and more. TDengine also supports a REST interface, so applications can operate the database directly with SQL statements carried in the body of an HTTP POST request.
 
-10. Secure data sharing: TDengine ensures secure data access through its database-view feature and permission management. Combined with the data subscription feature, this enables flexible and fine-grained data distribution control, protecting data security and privacy.
+10. Data security: TDengine provides rich user management and privilege management features to control different users' access to databases and tables, and an IP allowlist feature to restrict accounts to connecting only from specific servers. TDengine lets system administrators encrypt individual databases as needed; once data is encrypted, reads and writes remain fully transparent with minimal performance impact. An audit log feature records sensitive operations in the system.
 
 11. Programming connectors: TDengine provides rich programming-language connectors, including C/C++, Java, Go, Node.js, Rust, Python, C#, R, PHP, and more, and supports a RESTful interface, making it easy for applications to operate the database via HTTP POST requests.
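Item 9 above notes that the REST interface lets an application operate the database by carrying SQL in the body of an HTTP POST. As a hedged sketch (the `/rest/sql` path, port 6041, and the `root:taosdata` default credentials are assumptions based on TDengine's documented defaults, not taken from this diff), the HTTP Basic `Authorization` header such a request needs can be built like this in C:

```c
#include <stdio.h>
#include <string.h>

/* Encode a "user:password" pair as base64 for an HTTP Basic Authorization
 * header, as used when POSTing SQL to a REST endpoint such as
 * http://<host>:6041/rest/sql (port and path are assumed defaults). */
static void base64_encode(const unsigned char *in, size_t len, char *out) {
  static const char tbl[] =
      "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
  size_t i, o = 0;
  for (i = 0; i + 2 < len; i += 3) {          /* whole 3-byte groups */
    out[o++] = tbl[in[i] >> 2];
    out[o++] = tbl[((in[i] & 0x03) << 4) | (in[i + 1] >> 4)];
    out[o++] = tbl[((in[i + 1] & 0x0F) << 2) | (in[i + 2] >> 6)];
    out[o++] = tbl[in[i + 2] & 0x3F];
  }
  if (i < len) {                              /* 1 or 2 trailing bytes */
    out[o++] = tbl[in[i] >> 2];
    if (i + 1 < len) {
      out[o++] = tbl[((in[i] & 0x03) << 4) | (in[i + 1] >> 4)];
      out[o++] = tbl[(in[i + 1] & 0x0F) << 2];
    } else {
      out[o++] = tbl[(in[i] & 0x03) << 4];
      out[o++] = '=';
    }
    out[o++] = '=';
  }
  out[o] = '\0';
}

/* Build the full "Authorization: Basic <b64>" header line. */
void make_basic_auth(const char *userpass, char *hdr, size_t hdrcap) {
  char b64[256];
  base64_encode((const unsigned char *)userpass, strlen(userpass), b64);
  snprintf(hdr, hdrcap, "Authorization: Basic %s", b64);
}
```

With this header set, an HTTP client can POST a body such as `show databases;` to the endpoint and receive the result as JSON.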
@@ -1,13 +1,13 @@
 ---
 title: Getting Started
-toc_max_heading_level: 4
+description: 'TDengine basic features'
 ---
 
-This chapter mainly introduces TDengine's data model and its basic write and query features.
+This chapter mainly introduces TDengine's data model and its write and query features.
 
 ```mdx-code-block
 import DocCardList from '@theme/DocCardList';
 import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
 
 <DocCardList items={useCurrentSidebarCategory().items}/>
 ```
@@ -1,17 +1,13 @@
 ---
-title: TDengine Advanced Features
-toc_max_heading_level: 4
+title: Advanced Features
+description: 'TDengine advanced features'
 ---
 
-TDengine is not only a high-performance, distributed core time-series database product; it also integrates a suite of features tailored to time-series data, including data subscription, caching, stream processing, and ETL. Together these form a complete time-series data processing solution, so when you choose TDengine, your application does not need to additionally integrate third-party tools such as Kafka, Redis, Spark, or Flink, greatly simplifying application design and significantly reducing operating costs. The figure below illustrates the similarities and differences between a traditional big-data platform architecture and the TDengine architecture, highlighting TDengine's unique advantages in time-series data processing.
-
-
-
-This chapter mainly introduces some of TDengine's advanced features, such as data subscription, caching, stream processing, edge-cloud synergy, and data ingestion.
+This chapter mainly introduces TDengine's advanced features, such as data subscription, caching, stream processing, edge-cloud synergy, and data ingestion.
 
 ```mdx-code-block
 import DocCardList from '@theme/DocCardList';
 import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
 
 <DocCardList items={useCurrentSidebarCategory().items}/>
 ```
@@ -8,13 +8,15 @@ toc_max_heading_level: 4
 
 ## Manual Deployment
 
-Follow the steps below to set up a TDengine cluster manually.
+### Deploying taosd
 
-### Clear Old Data
+taosd is the most important service component of a TDengine cluster. This section describes the steps to manually deploy a taosd cluster.
+
+#### 1. Clear Old Data
 
 If the physical nodes used to build the cluster contain earlier test data, or have had another version of TDengine (such as 1.x/2.x) installed, remove it first and clear all data.
 
-### Check the Environment
+#### 2. Check the Environment
 
 Before deploying a TDengine cluster, it is essential to thoroughly check the network settings of all dnodes and of the physical nodes where the applications will run. Here are the steps:
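The environment check above hinges on every dnode host being reachable by its configured FQDN. A minimal pre-flight check, shown here as an illustration and not as how taosd itself validates FQDNs, is to confirm each name actually resolves:

```c
#include <netdb.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Return true if `fqdn` resolves to at least one address: the kind of
 * pre-deployment check the environment step asks for on every host. */
bool fqdn_resolves(const char *fqdn) {
  struct addrinfo  hints;
  struct addrinfo *res = NULL;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;      /* accept IPv4 or IPv6 */
  hints.ai_socktype = SOCK_STREAM;
  int rc = getaddrinfo(fqdn, NULL, &hints, &res);
  if (rc != 0) {
    fprintf(stderr, "cannot resolve %s: %s\n", fqdn, gai_strerror(rc));
    return false;
  }
  freeaddrinfo(res);
  return true;
}
```

Running such a check on every node for every other node's FQDN catches DNS/hosts-file misconfiguration before the cluster is started.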
@@ -25,11 +27,11 @@ toc_max_heading_level: 4
 
 Through the steps above, you can ensure that all nodes communicate smoothly at the network level, laying a solid foundation for the successful deployment of the TDengine cluster.
 
-### Install TDengine
+#### 3. Install
 
 To ensure consistency and stability across the physical nodes in the cluster, install the same version of TDengine on all of them.
 
-### Modify the Configuration
+#### 4. Modify the Configuration
 
 Modify the TDengine configuration file (the configuration files on all nodes need to be modified). Assume the endpoint of the first dnode to be started is h1.taosdata.com:6030; the cluster-related parameters are as follows.
@@ -54,7 +56,7 @@ serverPort 6030
 |charset | character-set encoding |
 |ttlChangeOnWrite | whether the TTL expiration time changes along with table modification operations |
 
-### Start the Cluster
+#### 5. Start
 
 Start the first dnode, such as h1.taosdata.com, following the steps above. Then run taos in a terminal to start the TDengine CLI program taos, and execute the show dnodes command in it to view all dnodes currently in the cluster.
@@ -67,7 +69,7 @@ taos> show dnodes;
 
 You can see that the endpoint of the dnode just started is h1.taosdata.com:6030. This address is the firstEp of the newly created cluster.
 
-### Add dnodes
+#### 6. Add dnodes
 
 Following the steps above, start taosd on each physical node. Each dnode needs to set the firstEp parameter in its taos.cfg file to the endpoint of the first node of the new cluster, which in this example is h1.taosdata.com:6030. On the machine where the first dnode is located, run taos in a terminal to open the TDengine CLI program taos, log in to the TDengine cluster, and execute the following SQL.
@@ -88,7 +90,7 @@ show dnodes;
 - Two dnodes started without the firstEp parameter configured will run independently. In that case it is impossible to join one dnode to the other to form a cluster.
 - TDengine does not allow merging two independent clusters into a new cluster.
 
-### Add mnodes
+#### 7. Add mnodes
 
 When a TDengine cluster is created, the first dnode automatically becomes the cluster's mnode, responsible for cluster management and coordination. To achieve mnode high availability, mnodes need to be created manually on subsequently added dnodes. Note that a cluster allows at most 3 mnodes, and at most one mnode can be created per dnode. Once the number of dnodes in the cluster reaches or exceeds 3, you can create mnodes for the existing cluster. On the first dnode, log in to TDengine via the TDengine CLI program taos, then execute the following SQL.
@@ -98,21 +100,14 @@ create mnode on dnode <dnodeId>
 
 Be sure to replace the dnodeId in the example above with the sequence number of the newly created dnode (which you can obtain by executing the `show dnodes` command). Finally, execute `show mnodes` to check whether the newly created mnode has successfully joined the cluster.
 
-### Remove a dnode
-
-A dnode that has joined the cluster by mistake can be removed with the `drop dnode <dnodeID>` command.
-
 **Tips**
-- Once a dnode is removed, it cannot rejoin the cluster directly. To rejoin such a node, first re-initialize it, i.e. empty its data folder.
-- When the drop dnode command is executed, the cluster first migrates the data on the dnode being removed to other nodes. Note that drop dnode and stopping the taosd process are two entirely different operations; do not confuse them. Because data migration must be performed before removal, the dnode being removed must stay online until the removal completes. Only after the removal finishes may the taosd process be stopped.
-- Once a dnode is removed, the other nodes in the cluster become aware of the operation and no longer accept requests addressed to that dnodeId. dnodeIds are assigned automatically by the cluster and cannot be specified manually.
-
-### Common Problems
 
 While building a TDengine cluster, if a new node keeps showing as offline after you execute the create dnode command to add it, follow these steps to troubleshoot.
-Step 1: Check whether the taosd service on the new node has started normally. You can confirm this by inspecting the log files or using the ps command.
-Step 2: If the taosd service has started, next check whether the new node's network connection is open and whether the firewall has been turned off. An unreachable network or firewall settings may prevent the node from communicating with the other nodes of the cluster.
-Step 3: Use the taos -h fqdn command to try to connect to the new node, then execute the show dnodes command. This shows the new node's status as an independent cluster. If the list displayed differs from what is shown on the primary node, the new node may have formed a single-node cluster of its own. To fix this, proceed as follows. First, stop the taosd service on the new node. Second, empty all files under the dataDir directory specified in the new node's taos.cfg configuration file; this removes all data and configuration related to that node. Finally, restart the taosd service on the new node. This restores the new node to its initial state, ready to rejoin the main cluster.
+- Step 1: Check whether the taosd service on the new node has started normally. You can confirm this by inspecting the log files or using the ps command.
+- Step 2: If the taosd service has started, next check whether the new node's network connection is open and whether the firewall has been turned off. An unreachable network or firewall settings may prevent the node from communicating with the other nodes of the cluster.
+- Step 3: Use the taos -h fqdn command to try to connect to the new node, then execute the show dnodes command. This shows the new node's status as an independent cluster. If the list displayed differs from what is shown on the primary node, the new node may have formed a single-node cluster of its own. To fix this, proceed as follows. First, stop the taosd service on the new node. Second, empty all files under the dataDir directory specified in the new node's taos.cfg configuration file; this removes all data and configuration related to that node. Finally, restart the taosd service on the new node. This restores the new node to its initial state, ready to rejoin the main cluster.
 
 ### Deploy taosAdapter
@@ -205,6 +200,23 @@ http {
 }
 ```
 
+### Deploy taosKeeper
+
+If you want to use TDengine's monitoring features, taosKeeper is a required component. For monitoring, see [TDinsight](../../reference/components/tdinsight); for details on deploying taosKeeper, see the [taosKeeper reference manual](../../reference/components/taoskeeper).
+
+### Deploy taosX
+
+To use TDengine's data ingestion capabilities, you need to deploy the taosX service. For a detailed description and deployment instructions, see the [taosX reference manual](../../reference/components/taosx).
+
+### Deploy taosX-Agent
+
+For some data sources such as Pi and OPC, network conditions and data-source access restrictions prevent taosX from reaching the data source directly. In such cases a proxy service, taosX-Agent, needs to be deployed. For a detailed description and deployment instructions, see the [taosX-Agent reference manual](../../reference/components/taosx-agent).
+
+### Deploy taos-Explorer
+
+TDengine offers the ability to manage a TDengine cluster visually. To use the graphical interface you need to deploy the taos-Explorer service. For a detailed description and deployment instructions, see the [taos-Explorer reference manual](../../reference/components/explorer).
+
 ## Docker Deployment
 
 This section describes how to start the TDengine service in a Docker container and access it. You can use environment variables on the docker run command line or in a docker-compose file to control the behavior of the service in the container.
@@ -1,13 +1,13 @@
 ---
-title: Operations Management
-toc_max_heading_level: 4
+title: Operations Guide
+description: 'TDengine operations guide'
 ---
 
-This chapter mainly describes how to plan, build, maintain, and monitor a TDengine cluster.
+This chapter mainly describes how to plan, deploy, maintain, and monitor a TDengine cluster.
 
 ```mdx-code-block
 import DocCardList from '@theme/DocCardList';
 import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
 
 <DocCardList items={useCurrentSidebarCategory().items}/>
 ```
@@ -511,7 +511,7 @@ static int32_t doSendCommitMsg(tmq_t* tmq, int32_t vgId, SEpSet* epSet, STqOffse
 
   void* abuf = POINTER_SHIFT(buf, sizeof(SMsgHead));
 
-  SEncoder encoder;
+  SEncoder encoder = {0};
   tEncoderInit(&encoder, abuf, len);
   if(tEncodeMqVgOffset(&encoder, &pOffset) < 0) {
     tEncoderClear(&encoder);
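The hunk above replaces `SEncoder encoder;` with `SEncoder encoder = {0};`. A standalone illustration of why the zero initializer matters (using a hypothetical `Encoder` struct, not the real SEncoder definition): with `= {0}` every member starts at a known value, so an error path that reaches a cleanup routine before full initialization never reads stack garbage.

```c
#include <stddef.h>

/* Stand-in for a struct like SEncoder: a few fields that a cleanup
 * routine may inspect even when init never fully ran. */
typedef struct {
  unsigned char *buf;
  size_t         size;
  size_t         pos;
} Encoder;

/* "= {0}" zeroes every member, so cleanup code can safely test
 * buf against NULL on paths where the init function bailed out. */
Encoder encoder_make(void) {
  Encoder e = {0};
  return e;
}
```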
@@ -953,7 +953,7 @@ void tmqSendHbReq(void* param, void* tmrId) {
     tscError("tmqSendHbReq asyncSendMsgToServer failed");
   }
 
-  atomic_val_compare_exchange_8(&pollFlag, 1, 0);
+  (void)atomic_val_compare_exchange_8(&pollFlag, 1, 0);
 OVER:
   tDestroySMqHbReq(&req);
   if(tmrId != NULL){
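The change above casts the ignored return value of `atomic_val_compare_exchange_8` to `(void)`, making the discard explicit. The same 0/1 flag handshake can be sketched with standard C11 atomics (the function names here are illustrative, not TDengine APIs):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* A 0/1 poll flag toggled with compare-and-swap, mirroring the
 * handshake in which the poll path flips 0 -> 1 and the heartbeat
 * path flips 1 -> 0; a failed CAS simply means the other side won. */
static _Atomic char pollFlag = 0;

bool flag_set(void) {   /* 0 -> 1; returns whether this call did the flip */
  char expected = 0;
  return atomic_compare_exchange_strong(&pollFlag, &expected, 1);
}

bool flag_clear(void) { /* 1 -> 0 */
  char expected = 1;
  return atomic_compare_exchange_strong(&pollFlag, &expected, 0);
}

char flag_value(void) { return atomic_load(&pollFlag); }
```

Because a failed exchange is harmless here, the result is intentionally unused at the call sites, which is exactly what the `(void)` cast documents.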
@@ -2394,7 +2394,7 @@ TAOS_RES* tmq_consumer_poll(tmq_t* tmq, int64_t timeout) {
     }
   }
 
-  atomic_val_compare_exchange_8(&pollFlag, 0, 1);
+  (void)atomic_val_compare_exchange_8(&pollFlag, 0, 1);
 
   while (1) {
     tmqHandleAllDelayedTask(tmq);
@@ -3133,7 +3133,7 @@ int64_t getCommittedFromServer(tmq_t* tmq, char* tname, int32_t vgId, SEpSet* ep
 
   void* abuf = POINTER_SHIFT(buf, sizeof(SMsgHead));
 
-  SEncoder encoder;
+  SEncoder encoder = {0};
   tEncoderInit(&encoder, abuf, len);
   code = tEncodeMqVgOffset(&encoder, &pOffset);
   if (code < 0) {
@@ -1204,8 +1204,12 @@ static int32_t taosSetSystemCfg(SConfig *pCfg) {
   SConfigItem *pItem = NULL;
 
   TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "timezone");
-  TAOS_CHECK_RETURN(osSetTimezone(pItem->str));
-  uDebug("timezone format changed from %s to %s", pItem->str, tsTimezoneStr);
+  if (0 == strlen(pItem->str)) {
+    uError("timezone is not set");
+  } else {
+    TAOS_CHECK_RETURN(osSetTimezone(pItem->str));
+    uDebug("timezone format changed from %s to %s", pItem->str, tsTimezoneStr);
+  }
   TAOS_CHECK_RETURN(cfgSetItem(pCfg, "timezone", tsTimezoneStr, pItem->stype, true));
 
   TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "locale");
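The new code above only calls `osSetTimezone` when the configured string is non-empty, logging an error and keeping the current value otherwise. The validate-before-apply pattern in isolation (with a stand-in global instead of the real `tsTimezoneStr`):

```c
#include <stdio.h>
#include <string.h>

static char g_timezone[64] = "UTC";  /* stand-in for tsTimezoneStr */

/* Apply a configured timezone only when the value is non-empty,
 * mirroring the strlen() guard added around osSetTimezone.
 * Returns 0 on success, -1 when the value was rejected and the
 * previous setting was kept. */
int set_timezone_checked(const char *tz) {
  if (tz == NULL || strlen(tz) == 0) {
    fprintf(stderr, "timezone is not set, keeping %s\n", g_timezone);
    return -1;
  }
  snprintf(g_timezone, sizeof(g_timezone), "%s", tz);
  return 0;
}

const char *current_timezone(void) { return g_timezone; }
```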
@@ -1216,7 +1220,7 @@ static int32_t taosSetSystemCfg(SConfig *pCfg) {
 
   int32_t code = taosSetSystemLocale(locale, charset);
   if (TSDB_CODE_SUCCESS != code) {
-    uInfo("failed to set locale %s, since: %s", locale, tstrerror(code));
+    uError("failed to set locale:%s, since: %s", locale, tstrerror(code));
     char curLocale[TD_LOCALE_LEN] = {0};
     char curCharset[TD_CHARSET_LEN] = {0};
     taosGetSystemLocale(curLocale, curCharset);
@@ -1568,13 +1572,13 @@ int32_t taosCreateLog(const char *logname, int32_t logFileNum, const char *cfgDi
                       const char *envFile, char *apolloUrl, SArray *pArgs, bool tsc) {
   int32_t  code = TSDB_CODE_SUCCESS;
   int32_t  lino = 0;
+  SConfig *pCfg = NULL;
 
   if (tsCfg == NULL) {
-    TAOS_CHECK_RETURN(osDefaultInit());
+    TAOS_CHECK_GOTO(osDefaultInit(), &lino, _exit);
   }
 
-  SConfig *pCfg = NULL;
-  TAOS_CHECK_RETURN(cfgInit(&pCfg));
+  TAOS_CHECK_GOTO(cfgInit(&pCfg), &lino, _exit);
 
   if (tsc) {
     tsLogEmbedded = 0;
@@ -1604,7 +1608,7 @@ int32_t taosCreateLog(const char *logname, int32_t logFileNum, const char *cfgDi
 
   SConfigItem *pDebugItem = NULL;
   TAOS_CHECK_GET_CFG_ITEM(pCfg, pDebugItem, "debugFlag");
-  TAOS_CHECK_RETURN(taosSetAllDebugFlag(pCfg, pDebugItem->i32));
+  TAOS_CHECK_GOTO(taosSetAllDebugFlag(pCfg, pDebugItem->i32), &lino, _exit);
 
   if ((code = taosMulModeMkDir(tsLogDir, 0777, true)) != TSDB_CODE_SUCCESS) {
     printf("failed to create dir:%s since %s\n", tsLogDir, tstrerror(code));
@@ -30,6 +30,7 @@ void mndReleaseTopic(SMnode *pMnode, SMqTopicObj *pTopic);
 int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb);
 bool mndTopicExistsForDb(SMnode *pMnode, SDbObj *pDb);
 void mndTopicGetShowName(const char* fullTopic, char* topic);
+bool checkTopic(SArray *topics, char *topicName);
 
 int32_t mndGetNumOfTopics(SMnode *pMnode, char *dbName, int32_t *pNumOfTopics);
@@ -1018,37 +1018,37 @@ END:
   return code;
 }
 
-//static int32_t mndDropConsumerByGroup(SMnode *pMnode, STrans *pTrans, char *cgroup, char *topic) {
-//  void *pIter = NULL;
-//  SMqConsumerObj *pConsumer = NULL;
-//  int code = 0;
-//  while (1) {
-//    pIter = sdbFetch(pMnode->pSdb, SDB_CONSUMER, pIter, (void **)&pConsumer);
-//    if (pIter == NULL) {
-//      break;
-//    }
-//
-//    // drop consumer in lost status, other consumers not in lost status already deleted by rebalance
-//    if (pConsumer->status != MQ_CONSUMER_STATUS_LOST || strcmp(cgroup, pConsumer->cgroup) != 0) {
-//      sdbRelease(pMnode->pSdb, pConsumer);
-//      continue;
-//    }
-//    int32_t sz = taosArrayGetSize(pConsumer->assignedTopics);
-//    for (int32_t i = 0; i < sz; i++) {
-//      char *name = taosArrayGetP(pConsumer->assignedTopics, i);
-//      if (name && strcmp(topic, name) == 0) {
-//        MND_TMQ_RETURN_CHECK(mndSetConsumerDropLogs(pTrans, pConsumer));
-//      }
-//    }
-//
-//    sdbRelease(pMnode->pSdb, pConsumer);
-//  }
-//
-//END:
-//  sdbRelease(pMnode->pSdb, pConsumer);
-//  sdbCancelFetch(pMnode->pSdb, pIter);
-//  return code;
-//}
+static int32_t mndCheckConsumerByGroup(SMnode *pMnode, STrans *pTrans, char *cgroup, char *topic) {
+  void *pIter = NULL;
+  SMqConsumerObj *pConsumer = NULL;
+  int code = 0;
+  while (1) {
+    pIter = sdbFetch(pMnode->pSdb, SDB_CONSUMER, pIter, (void **)&pConsumer);
+    if (pIter == NULL) {
+      break;
+    }
+
+    if (strcmp(cgroup, pConsumer->cgroup) != 0) {
+      sdbRelease(pMnode->pSdb, pConsumer);
+      continue;
+    }
+
+    bool found = checkTopic(pConsumer->assignedTopics, topic);
+    if (found){
+      mError("topic:%s, failed to drop since subscribed by consumer:0x%" PRIx64 ", in consumer group %s",
+             topic, pConsumer->consumerId, pConsumer->cgroup);
+      code = TSDB_CODE_MND_CGROUP_USED;
+      goto END;
+    }
+
+    sdbRelease(pMnode->pSdb, pConsumer);
+  }
+
+END:
+  sdbRelease(pMnode->pSdb, pConsumer);
+  sdbCancelFetch(pMnode->pSdb, pIter);
+  return code;
+}
 
 static int32_t mndProcessDropCgroupReq(SRpcMsg *pMsg) {
   SMnode *pMnode = pMsg->info.node;
@@ -1085,6 +1085,7 @@ static int32_t mndProcessDropCgroupReq(SRpcMsg *pMsg) {
   mndTransSetDbName(pTrans, pSub->dbName, NULL);
   MND_TMQ_RETURN_CHECK(mndTransCheckConflict(pMnode, pTrans));
   MND_TMQ_RETURN_CHECK(sendDeleteSubToVnode(pMnode, pSub, pTrans));
+  MND_TMQ_RETURN_CHECK(mndCheckConsumerByGroup(pMnode, pTrans, dropReq.cgroup, dropReq.topic));
   MND_TMQ_RETURN_CHECK(mndSetDropSubCommitLogs(pMnode, pTrans, pSub));
   MND_TMQ_RETURN_CHECK(mndTransPrepare(pMnode, pTrans));
@@ -602,7 +602,7 @@ END:
   return code;
 }
 
-static bool checkTopic(SArray *topics, char *topicName){
+bool checkTopic(SArray *topics, char *topicName){
   int32_t sz = taosArrayGetSize(topics);
   for (int32_t i = 0; i < sz; i++) {
     char *name = taosArrayGetP(topics, i);
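`checkTopic`, made non-static above so the subscribe module can reuse it, is a linear scan over an SArray of topic-name pointers. The same logic over a plain C array (substituting `const char **` for the SArray type, which is not available here):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Linear scan over an array of topic-name strings: the same shape of
 * check that checkTopic applies to a consumer's assigned topics. */
bool topic_in_list(const char **topics, size_t n, const char *name) {
  for (size_t i = 0; i < n; i++) {
    if (topics[i] != NULL && strcmp(topics[i], name) == 0) {
      return true;  /* topic is still referenced, so a drop must fail */
    }
  }
  return false;
}
```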
@@ -613,44 +613,33 @@ static bool checkTopic(SArray *topics, char *topicName){
   return false;
 }
 
-//static int32_t mndDropConsumerByTopic(SMnode *pMnode, STrans *pTrans, char *topicName){
-//  int32_t code = 0;
-//  SSdb *pSdb = pMnode->pSdb;
-//  void *pIter = NULL;
-//  SMqConsumerObj *pConsumer = NULL;
-//  while (1) {
-//    pIter = sdbFetch(pSdb, SDB_CONSUMER, pIter, (void **)&pConsumer);
-//    if (pIter == NULL) {
-//      break;
-//    }
-//
-//    bool found = checkTopic(pConsumer->assignedTopics, topicName);
-//    if (found){
-//      if (pConsumer->status == MQ_CONSUMER_STATUS_LOST) {
-//        MND_TMQ_RETURN_CHECK(mndSetConsumerDropLogs(pTrans, pConsumer));
-//        sdbRelease(pSdb, pConsumer);
-//        continue;
-//      }
-//      mError("topic:%s, failed to drop since subscribed by consumer:0x%" PRIx64 ", in consumer group %s",
-//             topicName, pConsumer->consumerId, pConsumer->cgroup);
-//      code = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
-//      goto END;
-//    }
-//
-//    if (checkTopic(pConsumer->rebNewTopics, topicName) || checkTopic(pConsumer->rebRemovedTopics, topicName)) {
-//      code = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
-//      mError("topic:%s, failed to drop since subscribed by consumer:%" PRId64 ", in consumer group %s (reb new)",
-//             topicName, pConsumer->consumerId, pConsumer->cgroup);
-//      goto END;
-//    }
-//    sdbRelease(pSdb, pConsumer);
-//  }
-//
-//END:
-//  sdbRelease(pSdb, pConsumer);
-//  sdbCancelFetch(pSdb, pIter);
-//  return code;
-//}
+static int32_t mndCheckConsumerByTopic(SMnode *pMnode, STrans *pTrans, char *topicName){
+  int32_t code = 0;
+  SSdb *pSdb = pMnode->pSdb;
+  void *pIter = NULL;
+  SMqConsumerObj *pConsumer = NULL;
+  while (1) {
+    pIter = sdbFetch(pSdb, SDB_CONSUMER, pIter, (void **)&pConsumer);
+    if (pIter == NULL) {
+      break;
+    }
+
+    bool found = checkTopic(pConsumer->assignedTopics, topicName);
+    if (found){
+      mError("topic:%s, failed to drop since subscribed by consumer:0x%" PRIx64 ", in consumer group %s",
+             topicName, pConsumer->consumerId, pConsumer->cgroup);
+      code = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
+      goto END;
+    }
+
+    sdbRelease(pSdb, pConsumer);
+  }
+
+END:
+  sdbRelease(pSdb, pConsumer);
+  sdbCancelFetch(pSdb, pIter);
+  return code;
+}
 
 
 static int32_t mndDropCheckInfoByTopic(SMnode *pMnode, STrans *pTrans, SMqTopicObj *pTopic){
   // broadcast to all vnode
@@ -725,7 +714,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
 
   MND_TMQ_RETURN_CHECK(mndCheckTopicPrivilege(pMnode, pReq->info.conn.user, MND_OPER_DROP_TOPIC, pTopic));
   MND_TMQ_RETURN_CHECK(mndCheckDbPrivilegeByName(pMnode, pReq->info.conn.user, MND_OPER_READ_DB, pTopic->db));
-//  MND_TMQ_RETURN_CHECK(mndDropConsumerByTopic(pMnode, pTrans, dropReq.name));
+  MND_TMQ_RETURN_CHECK(mndCheckConsumerByTopic(pMnode, pTrans, dropReq.name));
   MND_TMQ_RETURN_CHECK(mndDropSubByTopic(pMnode, pTrans, dropReq.name));
 
   if (pTopic->ntbUid != 0) {
@@ -463,7 +463,7 @@ int32_t tqProcessVgCommittedInfoReq(STQ* pTq, SRpcMsg* pMsg) {
     terrno = TSDB_CODE_OUT_OF_MEMORY;
     return terrno;
   }
-  SEncoder encoder;
+  SEncoder encoder = {0};
   tEncoderInit(&encoder, buf, len);
   code = tEncodeMqVgOffset(&encoder, &vgOffset);
   tEncoderClear(&encoder);
@@ -106,7 +106,7 @@ int32_t tqMetaSaveOffset(STQ* pTq, STqOffset* pOffset) {
   void* buf = NULL;
   int32_t code = TDB_CODE_SUCCESS;
   int32_t vlen;
-  SEncoder encoder;
+  SEncoder encoder = {0};
   tEncodeSize(tEncodeSTqOffset, pOffset, vlen, code);
   if (code < 0) {
     goto END;
@@ -201,7 +201,7 @@ int32_t tqMetaSaveHandle(STQ* pTq, const char* key, const STqHandle* pHandle) {
   int32_t code = TDB_CODE_SUCCESS;
   int32_t vlen;
   void* buf = NULL;
-  SEncoder encoder;
+  SEncoder encoder = {0};
   tEncodeSize(tEncodeSTqHandle, pHandle, vlen, code);
   if (code < 0) {
     goto END;
@@ -595,7 +595,7 @@ int32_t buildSubmitMsgImpl(SSubmitReq2* pSubmitReq, int32_t vgId, void** pMsg, i
   int32_t len = 0;
   tEncodeSize(tEncodeSubmitReq, pSubmitReq, len, code);
 
-  SEncoder encoder;
+  SEncoder encoder = {0};
   len += sizeof(SSubmitReq2Msg);
 
   pBuf = rpcMallocCont(len);
@@ -1230,7 +1230,7 @@ int32_t doBuildAndSendDeleteMsg(SVnode* pVnode, char* stbFullName, SSDataBlock*
     return code;
   }
 
-  SEncoder encoder;
+  SEncoder encoder = {0};
   void* serializedDeleteReq = rpcMallocCont(len + sizeof(SMsgHead));
   void* abuf = POINTER_SHIFT(serializedDeleteReq, sizeof(SMsgHead));
   tEncoderInit(&encoder, abuf, len);
@@ -484,6 +484,7 @@ static void tsdbCachePutBatch(SLastCol *pLastCol, const void *key, size_t klen,
   code = tsdbCacheSerialize(pLastCol, &rocks_value, &vlen);
   if (code) {
     tsdbError("tsdb/cache: vgId:%d, serialize failed since %s.", TD_VID(pTsdb->pVnode), tstrerror(code));
+    return;
   }
 
   (void)taosThreadMutexLock(&rCache->rMutex);
@@ -1143,14 +1144,14 @@ static int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, SArray
       code = tsdbCacheSerialize(&lastColTmp, &value, &vlen);
       if (code) {
         tsdbError("tsdb/cache: vgId:%d, serialize failed since %s.", TD_VID(pTsdb->pVnode), tstrerror(code));
+      } else {
+        (void)taosThreadMutexLock(&pTsdb->rCache.rMutex);
+
+        rocksdb_writebatch_put(wb, (char *)&idxKey->key, ROCKS_KEY_LEN, value, vlen);
+
+        (void)taosThreadMutexUnlock(&pTsdb->rCache.rMutex);
       }
 
-      (void)taosThreadMutexLock(&pTsdb->rCache.rMutex);
-
-      rocksdb_writebatch_put(wb, (char *)&idxKey->key, ROCKS_KEY_LEN, value, vlen);
-
-      (void)taosThreadMutexUnlock(&pTsdb->rCache.rMutex);
-
       pLastCol = &lastColTmp;
       SLastCol *pTmpLastCol = taosMemoryCalloc(1, sizeof(SLastCol));
       if (!pTmpLastCol) {
@@ -1411,6 +1412,11 @@ static int32_t tsdbCacheLoadFromRaw(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArr
     if (IS_LAST_KEY(idxKey->key)) {
       if (NULL == lastTmpIndexArray) {
         lastTmpIndexArray = taosArrayInit(num_keys, sizeof(int32_t));
+        if (!lastTmpIndexArray) {
+          taosArrayDestroy(lastrowTmpIndexArray);
+
+          TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+        }
       }
       (void)taosArrayPush(lastTmpIndexArray, &(i));
       lastColIds[lastIndex] = idxKey->key.cid;
@@ -1419,6 +1425,11 @@ static int32_t tsdbCacheLoadFromRaw(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArr
     } else {
       if (NULL == lastrowTmpIndexArray) {
         lastrowTmpIndexArray = taosArrayInit(num_keys, sizeof(int32_t));
+        if (!lastrowTmpIndexArray) {
+          taosArrayDestroy(lastTmpIndexArray);
+
+          TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+        }
       }
       (void)taosArrayPush(lastrowTmpIndexArray, &(i));
       lastrowColIds[lastrowIndex] = idxKey->key.cid;
@@ -1428,6 +1439,11 @@ static int32_t tsdbCacheLoadFromRaw(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArr
   }
 
   pTmpColArray = taosArrayInit(lastIndex + lastrowIndex, sizeof(SLastCol));
+  if (!pTmpColArray) {
+    taosArrayDestroy(lastrowTmpIndexArray);
+    taosArrayDestroy(lastTmpIndexArray);
+    TAOS_RETURN(TSDB_CODE_OUT_OF_MEMORY);
+  }
 
   if (lastTmpIndexArray != NULL) {
     (void)mergeLastCid(uid, pTsdb, &lastTmpColArray, pr, lastColIds, lastIndex, lastSlotIds);
@ -1510,12 +1526,12 @@ static int32_t tsdbCacheLoadFromRaw(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArr
|
||||||
code = tsdbCacheSerialize(pLastCol, &value, &vlen);
|
code = tsdbCacheSerialize(pLastCol, &value, &vlen);
|
||||||
if (code) {
|
if (code) {
|
||||||
tsdbError("tsdb/cache: vgId:%d, serialize failed since %s.", TD_VID(pTsdb->pVnode), tstrerror(code));
|
tsdbError("tsdb/cache: vgId:%d, serialize failed since %s.", TD_VID(pTsdb->pVnode), tstrerror(code));
|
||||||
|
} else {
|
||||||
|
SLastKey *key = &idxKey->key;
|
||||||
|
size_t klen = ROCKS_KEY_LEN;
|
||||||
|
rocksdb_writebatch_put(wb, (char *)key, klen, value, vlen);
|
||||||
|
taosMemoryFree(value);
|
||||||
}
|
}
|
||||||
|
|
||||||
SLastKey *key = &idxKey->key;
|
|
||||||
size_t klen = ROCKS_KEY_LEN;
|
|
||||||
rocksdb_writebatch_put(wb, (char *)key, klen, value, vlen);
|
|
||||||
taosMemoryFree(value);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
if (wb) {
|
if (wb) {
|
||||||
|
|
|
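The hunks above add an out-of-memory check after each `taosArrayInit` call, destroying the sibling index array before returning so that no allocation leaks on the error path. A minimal sketch of that cleanup-on-failure pattern in plain C, with `malloc`/`free` and a hypothetical `build_index_arrays` helper standing in for the TDengine API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Allocate two sibling index arrays; on any failure, release whatever was
 * already allocated before reporting a single error code to the caller.
 * (Illustrative only: mirrors the taosArrayInit/taosArrayDestroy pattern
 * in the patch, not the real TDengine functions.) */
static int build_index_arrays(int **last_idx, int **lastrow_idx, size_t n) {
  *last_idx = malloc(n * sizeof(int));
  if (*last_idx == NULL) {
    return -1; /* nothing else allocated yet */
  }
  *lastrow_idx = malloc(n * sizeof(int));
  if (*lastrow_idx == NULL) {
    free(*last_idx); /* release the sibling array, as the patch does */
    *last_idx = NULL;
    return -1; /* caller sees one out-of-memory error */
  }
  return 0;
}
```

The key point of the fix is that both `taosArrayDestroy` calls are safe even when the argument is `NULL`, so the error path can unconditionally destroy whichever arrays may have been created so far.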
@@ -1881,7 +1881,6 @@ int tdbBtreeNext(SBTC *pBtc, void **ppKey, int *kLen, void **ppVal, int *vLen) {
     if (cd.vLen > 0) {
       pVal = tdbRealloc(*ppVal, cd.vLen);
       if (pVal == NULL) {
-        tdbFree(pKey);
         return terrno;
       }
@@ -47,14 +47,15 @@ char tsAVX512Supported = 0;

 int32_t osDefaultInit() {
   int32_t code = TSDB_CODE_SUCCESS;

   taosSeedRand(taosSafeRand());
   taosGetSystemLocale(tsLocale, tsCharset);
   taosGetSystemTimezone(tsTimezoneStr, &tsTimezone);
-  code = taosSetSystemTimezone(tsTimezoneStr, tsTimezoneStr, &tsDaylight, &tsTimezone);
-  if (code) {
+  if (strlen(tsTimezoneStr) > 0) {  // ignore empty timezone
+    if ((code = taosSetSystemTimezone(tsTimezoneStr, tsTimezoneStr, &tsDaylight, &tsTimezone)) != TSDB_CODE_SUCCESS)
       return code;
   }

   taosGetSystemInfo();

   // deadlock in query
@@ -826,7 +826,7 @@ int32_t taosSetSystemTimezone(const char *inTimezoneStr, char *outTimezoneStr, i
     terrno = TAOS_SYSTEM_ERROR(errno);
     return terrno;
   }

   tzset();
   int32_t tz = (int32_t)((-timezone * MILLISECOND_PER_SECOND) / MILLISECOND_PER_HOUR);
   *tsTimezone = tz;
@@ -244,6 +244,12 @@ static int32_t doSetConf(SConfigItem *pItem, const char *value, ECfgSrcType styp

 static int32_t cfgSetTimezone(SConfigItem *pItem, const char *value, ECfgSrcType stype) {
   TAOS_CHECK_RETURN(doSetConf(pItem, value, stype));
+  if (strlen(value) == 0) {
+    uError("cfg:%s, type:%s src:%s, value:%s, skip to set timezone", pItem->name, cfgDtypeStr(pItem->dtype),
+           cfgStypeStr(stype), value);
+    TAOS_RETURN(TSDB_CODE_SUCCESS);
+  }
+
   TAOS_CHECK_RETURN(osSetTimezone(value));
   TAOS_RETURN(TSDB_CODE_SUCCESS);
 }
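The guard above makes `cfgSetTimezone` treat an empty timezone string as a successful no-op instead of forwarding it to `osSetTimezone`. A minimal sketch of the same guard in plain C, where `apply_timezone` and `set_timezone_checked` are hypothetical stand-ins for the real functions:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the OS-level timezone setter (hypothetical; the real code
 * calls osSetTimezone, which can fail). Here it always succeeds. */
static int apply_timezone(const char *tz) {
  (void)tz;
  return 0;
}

/* Skip empty input and report success, as the patch does, so that a
 * missing timezone setting never aborts configuration loading. */
static int set_timezone_checked(const char *value) {
  if (value == NULL || strlen(value) == 0) {
    return 0; /* empty timezone: log-and-skip, not an error */
  }
  return apply_timezone(value);
}
```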
@@ -122,7 +122,7 @@ def scan_files_path(source_file_path):
         for file in files:
             if any(item in root for item in scan_dir_list):
                 file_path = os.path.join(root, file)
-                if (file_path.endswith(".c") or file_path.endswith(".h") or file_path.endswith(".cpp")) and all(item not in file_path for item in scan_skip_file_list):
+                if (file_path.endswith(".c") or file_path.endswith(".cpp")) and all(item not in file_path for item in scan_skip_file_list):
                     all_file_path.append(file_path)
     logger.info("Found %s files" % len(all_file_path))
@@ -226,4 +226,4 @@ if __name__ == "__main__":
         logger.error(f"Scan failed,please check the log file:{scan_result_log}")
         for index, failed_result_file in enumerate(web_path):
             logger.error(f"failed number: {index}, failed_result_file: {failed_result_file}")
         exit(1)
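The scan-script hunk above narrows the file filter from `.c`/`.h`/`.cpp` to `.c`/`.cpp` only, so headers are no longer scanned. For illustration only (the real script is Python using `str.endswith`), the same suffix filter written in C with hypothetical `ends_with`/`should_scan` helpers:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* True if s ends with suffix (C equivalent of Python's str.endswith). */
static bool ends_with(const char *s, const char *suffix) {
  size_t ls = strlen(s), lf = strlen(suffix);
  return ls >= lf && strcmp(s + ls - lf, suffix) == 0;
}

/* Mirror of the patched filter: accept only .c and .cpp files,
 * silently dropping headers and everything else. */
static bool should_scan(const char *path) {
  return ends_with(path, ".c") || ends_with(path, ".cpp");
}
```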
@@ -544,6 +544,8 @@
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/limit.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/log.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/log.py -R
+,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/logical_operators.py
+,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/logical_operators.py -R
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/lower.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/lower.py -R
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/ltrim.py
@@ -0,0 +1,120 @@
+from wsgiref.headers import tspecials
+from util.log import *
+from util.cases import *
+from util.sql import *
+import numpy as np
+
+DBNAME = "db"
+
+class TDTestCase:
+    def init(self, conn, logSql, replicaVar=1):
+        self.replicaVar = int(replicaVar)
+        tdLog.debug("start to execute %s" % __file__)
+        tdSql.init(conn.cursor())
+
+        self.rowNum = 10
+        self.batchNum = 5
+        self.ts = 1537146000000
+
+    def run(self,dbname=DBNAME):
+        tdSql.prepare()
+
+        tdSql.execute(f'''create table {dbname}.tb (ts timestamp, v int, f float, b varchar(8))''')
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:00:00', 1, 2.0, 't0')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:01:00', 11, 12.1, 't0')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:02:00', 21, 22.2, 't0')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:03:00', 31, 32.3, 't0')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:04:00', 41, 42.4, 't0')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:05:00', 51, 52.5, 't1')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:06:00', 61, 62.6, 't1')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:07:00', 71, 72.7, 't1')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:08:00', 81, 82.8, 't1')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:09:00', 91, 92.9, 't1')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:00:00',101,112.9, 't2')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:01:00',111,112.1, 't2')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:02:00',121,122.2, 't2')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:03:00',131,132.3, 't2')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:04:00',141,142.4, 't2')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:05:00',151,152.5, 't3')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:06:00',161,162.6, 't3')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:07:00',171,172.7, 't3')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:08:00',181,182.8, 't3')")
+        tdSql.execute(f"insert into {dbname}.tb values('2024-07-04 10:09:00',191,192.9, 't3')")
+        #test for operator and
+        tdSql.query('''select
+            `T_9048C6F41B2A45CE94FF3`.`ts` as `__fcol_0`,
+            `T_9048C6F41B2A45CE94FF3`.`v` as `__fcol_1`,
+            `T_9048C6F41B2A45CE94FF3`.`f` as `__fcol_2`,
+            `T_9048C6F41B2A45CE94FF3`.`b` as `__fcol_3`,
+            `T_9048C6F41B2A45CE94FF3`.`v` > 0
+            and `T_9048C6F41B2A45CE94FF3`.`f` > `T_9048C6F41B2A45CE94FF3`.`v`
+            from `db`.`tb` as `T_9048C6F41B2A45CE94FF3`
+            limit 5000''')
+        tdSql.checkRows(10)
+        #test for operator or
+        tdSql.query('''select
+            `T_9048C6F41B2A45CE94FF3`.`ts` as `__fcol_0`,
+            `T_9048C6F41B2A45CE94FF3`.`v` as `__fcol_1`,
+            `T_9048C6F41B2A45CE94FF3`.`f` as `__fcol_2`,
+            `T_9048C6F41B2A45CE94FF3`.`b` as `__fcol_3`,
+            `T_9048C6F41B2A45CE94FF3`.`v` > 0
+            or `T_9048C6F41B2A45CE94FF3`.`f` > `T_9048C6F41B2A45CE94FF3`.`v`
+            from `db`.`tb` as `T_9048C6F41B2A45CE94FF3`
+            limit 5000''')
+        tdSql.checkRows(10)
+        #test for operator in
+        tdSql.query('''select
+            `T_9048C6F41B2A45CE94FF3`.`ts` as `__fcol_0`,
+            `T_9048C6F41B2A45CE94FF3`.`v` as `__fcol_1`,
+            `T_9048C6F41B2A45CE94FF3`.`f` as `__fcol_2`,
+            `T_9048C6F41B2A45CE94FF3`.`b` as `__fcol_3`,
+            `T_9048C6F41B2A45CE94FF3`.`v` in (1)
+            from `db`.`tb` as `T_9048C6F41B2A45CE94FF3`
+            limit 5000;''')
+        tdSql.checkRows(10)
+        #test for operator not
+        tdSql.query('''select
+            `T_9048C6F41B2A45CE94FF3`.`ts` as `__fcol_0`,
+            `T_9048C6F41B2A45CE94FF3`.`v` as `__fcol_1`,
+            `T_9048C6F41B2A45CE94FF3`.`f` as `__fcol_2`,
+            `T_9048C6F41B2A45CE94FF3`.`b` as `__fcol_3`,
+            not `T_9048C6F41B2A45CE94FF3`.`v` > 0
+            from `db`.`tb` as `T_9048C6F41B2A45CE94FF3`
+            limit 5000''')
+        tdSql.checkRows(10)
+        #test for operator between and
+        tdSql.query('''select
+            `T_9048C6F41B2A45CE94FF3`.`ts` as `__fcol_0`,
+            `T_9048C6F41B2A45CE94FF3`.`v` as `__fcol_1`,
+            `T_9048C6F41B2A45CE94FF3`.`f` as `__fcol_2`,
+            `T_9048C6F41B2A45CE94FF3`.`b` as `__fcol_3`,
+            `T_9048C6F41B2A45CE94FF3`.`v` between 1 and 200
+            from `db`.`tb` as `T_9048C6F41B2A45CE94FF3`
+            limit 5000''')
+        tdSql.checkRows(10)
+        #test for operator is null
+        tdSql.query('''select
+            `T_9048C6F41B2A45CE94FF3`.`ts` as `__fcol_0`,
+            `T_9048C6F41B2A45CE94FF3`.`v` as `__fcol_1`,
+            `T_9048C6F41B2A45CE94FF3`.`f` as `__fcol_2`,
+            `T_9048C6F41B2A45CE94FF3`.`b` as `__fcol_3`,
+            `T_9048C6F41B2A45CE94FF3`.`v` is null
+            from `db`.`tb` as `T_9048C6F41B2A45CE94FF3`
+            limit 5000''')
+        tdSql.checkRows(10)
+        #test for operator is not null
+        tdSql.query('''select
+            `T_9048C6F41B2A45CE94FF3`.`ts` as `__fcol_0`,
+            `T_9048C6F41B2A45CE94FF3`.`v` as `__fcol_1`,
+            `T_9048C6F41B2A45CE94FF3`.`f` as `__fcol_2`,
+            `T_9048C6F41B2A45CE94FF3`.`b` as `__fcol_3`,
+            `T_9048C6F41B2A45CE94FF3`.`v` is not null
+            from `db`.`tb` as `T_9048C6F41B2A45CE94FF3`
+            limit 5000''')
+        tdSql.checkRows(10)
+
+    def stop(self):
+        tdSql.close()
+        tdLog.success("%s successfully executed" % __file__)
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
@@ -454,7 +454,7 @@ int buildStable(TAOS* pConn, TAOS_RES* pRes) {
   taos_free_result(pRes);
 #else
   pRes = taos_query(pConn,
-                    "create stream meters_summary_s trigger at_once IGNORE EXPIRED 0 into meters_summary as select "
+                    "create stream meters_summary_s trigger at_once IGNORE EXPIRED 0 fill_history 1 into meters_summary as select "
                     "_wstart, max(current) as current, "
                     "groupid, location from meters partition by groupid, location interval(10m)");
   if (taos_errno(pRes) != 0) {