commit c88216ff6b
@@ -348,7 +348,7 @@ TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java
# 成为社区贡献者
点击 [这里](https://www.taosdata.com/cn/contributor/),了解如何成为 TDengine 的贡献者。
点击 [这里](https://www.taosdata.com/contributor),了解如何成为 TDengine 的贡献者。
# 加入技术交流群
@@ -75,9 +75,9 @@ taos>
## TDengine Graphical User Interface
From TDengine 3.3.0.0, there is a new componenet called `taos-explorer` added in the TDengine docker image. You can use it to manage the databases, super tables, child tables, and data in your TDengine system. There are also some features only available in TDengine Enterprise Edition, please contact TDengine sales team in case you need these features.
From TDengine 3.3.0.0, there is a new component called `taos-explorer` added in the TDengine docker image. You can use it to manage the databases, super tables, child tables, and data in your TDengine system. There are also some features only available in TDengine Enterprise Edition, please contact TDengine sales team in case you need these features.
To use taos-explorer in the container, you need to access the host port mapped from container port 6060. Assuming the host name is abc.com, and the port used on host is 6060, you need to access `http://abc.com:6060`. taos-explorer uses port 6060 by default in the container. When you use it the first time, you need to register with your enterprise email, then can logon using your user name and password in the TDengine database management system.
To use taos-explorer in the container, you need to access the host port mapped from container port 6060. Assuming the host name is abc.com, and the port used on host is 6060, you need to access `http://abc.com:6060`. taos-explorer uses port 6060 by default in the container. The default username and password to log in to the TDengine Database Management System is "root/taosdata".
## Test data insert performance
@@ -4,9 +4,9 @@ sidebar_label: 文档首页
slug: /
---
TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的<a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">时序数据库</a>(<a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>), 它专为物联网、车联网、工业互联网、金融、IT 运维等场景优化设计。同时它还带有内建的缓存、流式计算、数据订阅等系统功能,能大幅减少系统设计的复杂度,降低研发和运营成本,是一款极简的时序数据处理平台。本文档是 TDengine 的用户手册,主要是介绍 TDengine 的基本概念、安装、使用、功能、开发接口、运营维护、TDengine 内核设计等等,它主要是面向架构师、开发工程师与系统管理员的。如果你对时序数据的基本概念、价值以及其所能带来的业务价值尚不了解,请参考[时序数据基础](./concept)
TDengine 是一款[开源](https://www.taosdata.com/tdengine/open_source_time-series_database)、[高性能](https://www.taosdata.com/fast)、[云原生](https://www.taosdata.com/tdengine/cloud_native_time-series_database)的<a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">时序数据库</a>(<a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>), 它专为物联网、车联网、工业互联网、金融、IT 运维等场景优化设计。同时它还带有内建的缓存、流式计算、数据订阅等系统功能,能大幅减少系统设计的复杂度,降低研发和运营成本,是一款极简的时序数据处理平台。本文档是 TDengine 的用户手册,主要是介绍 TDengine 的基本概念、安装、使用、功能、开发接口、运营维护、TDengine 内核设计等等,它主要是面向架构师、开发工程师与系统管理员的。如果你对时序数据的基本概念、价值以及其所能带来的业务价值尚不了解,请参考[时序数据基础](./concept)。
TDengine 充分利用了时序数据的特点,提出了“一个数据采集点一张表”与“超级表”的概念,设计了创新的存储引擎,让数据的写入、查询和存储效率都得到极大的提升。为正确理解并使用 TDengine,无论如何,请您仔细阅读[数据模型](./basic/model)一章。
TDengine 充分利用了时序数据的特点,提出了“一个数据采集点一张表”与“超级表”的概念,设计了创新的存储引擎,让数据的写入、查询和存储效率都得到极大的提升。为正确理解并使用 TDengine,无论你在工作中是什么角色,请您仔细阅读[数据模型](./basic/model)一章。
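下面用一个极简的 SQL 示意(表名、列名均为假设的示例,并非固定写法)来说明“超级表 + 一个数据采集点一张表”的建模思路:
```sql
-- 示意:先为同一类采集点建立超级表,再为每个采集点建立子表(名称均为假设)
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT) TAGS (location VARCHAR(64), group_id INT);
CREATE TABLE d1001 USING meters TAGS ('California.SanFrancisco', 2);
INSERT INTO d1001 VALUES (NOW, 10.3, 219);
```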
如果你是开发工程师,请一定仔细阅读[开发指南](./develop)一章,该部分对数据库连接、建模、插入数据、查询、流式计算、缓存、数据订阅、用户自定义函数等功能都做了详细介绍,并配有各种编程语言的示例代码。大部分情况下,你只要复制粘贴示例代码,针对自己的应用稍作改动,就能跑起来。对 REST API、各种编程语言的连接器(Connector)想做更多详细了解的话,请看[连接器](./reference/connector)一章。
@@ -16,6 +16,8 @@ TDengine 采用 SQL 作为查询语言,大大降低学习成本、降低迁移
如果你是系统管理员,关心安装、升级、容错灾备、关心数据导入、导出、配置参数,如何监测 TDengine 是否健康运行,如何提升系统运行的性能,请仔细参考[运维指南](./operation)一章。
如果你对数据库内核设计感兴趣,或是开源爱好者,建议仔细阅读[技术内幕](./tdinterna)一章。该章从分布式架构到存储引擎、查询引擎、数据订阅,再到流计算引擎都做了详细阐述。建议对照文档,查看TDengine在GitHub的源代码,对TDengine的设计和编码做深入了解,更欢迎加入开源社区,贡献代码。
最后,作为一个开源软件,欢迎大家的参与。如果发现文档有任何错误、描述不清晰的地方,请在每个页面的最下方,点击“编辑本文档”直接进行修改。
Together, we make a difference!
@@ -14,7 +14,9 @@ TDengine 完整的软件包包括服务端(taosd)、应用驱动(taosc)
为方便使用,标准的服务端安装包包含了 taosd、taosAdapter、taosc、taos、taosdump、taosBenchmark、TDinsight 安装脚本和示例代码;如果您只需要用到服务端程序和客户端连接的 C/C++ 语言支持,也可以仅下载 Lite 版本的安装包。
在 Linux 系统上,TDengine 社区版提供 Deb 和 RPM 格式安装包,用户可以根据自己的运行环境选择合适的安装包。其中 Deb 支持 Debian/Ubuntu 及其衍生系统,RPM 支持 CentOS/RHEL/SUSE 及其衍生系统。同时我们也为企业用户提供 tar.gz 格式安装包,也支持通过 `apt-get` 工具从线上进行安装。需要注意的是,RPM 和 Deb 包不含 `taosdump` 和 TDinsight 安装脚本,这些工具需要通过安装 taosTools 包获得。TDengine 也提供 Windows x64 平台和 macOS x64/m1 平台的安装包。
在 Linux 系统上,TDengine 社区版提供 Deb 和 RPM 格式安装包,其中 Deb 支持 Debian/Ubuntu 及其衍生系统,RPM 支持 CentOS/RHEL/SUSE 及其衍生系统,用户可以根据自己的运行环境自行选择。同时我们也提供了 tar.gz 格式安装包,以及 `apt-get` 工具从线上进行安装。
此外,TDengine 也提供 macOS x64/m1 平台的 pkg 安装包。
## 运行环境要求
在 Linux 系统中,运行环境最低要求如下:
@@ -54,7 +54,7 @@ TDengine 利用这些日志文件实现故障前的状态恢复。在写入 WAL
数据库参数 wal_level 和 wal_fsync_period 共同决定了 WAL 的保存行为。
- wal_level:此参数控制 WAL 的保存级别。级别 1 表示仅将数据写入 WAL,但不立即执行 fsync 函数;级别 2 则表示在写入 WAL 的同时执行 fsync 函数。默认情况下,wal_level 设为 1。虽然执行 fsync 函数可以提高数据的持久性,但相应地也会降低写入性能。
- wal_fsync_period:当 wal_level 设置为 2 时,这个参数控制执行 fsync 的频率。设置为 0 表示每次写入后立即执行 fsync,这可以确保数据的安全性,但可能会牺牲一些性能。当设置为大于 0 的数值时,表示 fsync 周期,默认为 3000,范围是[1, 180000],单位毫秒。
- wal_fsync_period:当 wal_level 设置为 1 时,这个参数控制执行 fsync 的频率。设置为 0 表示每次写入后立即执行 fsync,这可以确保数据的安全性,但可能会牺牲一些性能。当设置为大于 0 的数值时,表示 fsync 周期,默认为 3000,范围是[1, 180000],单位毫秒。
```sql
CREATE DATABASE POWER WAL_LEVEL 1 WAL_FSYNC_PERIOD 3000;
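-- 补充示意(假设性的写法,仅用于说明参数可按需调整):
-- 部分 WAL 相关参数也可以在建库之后再修改,具体可修改范围请以 ALTER DATABASE 的文档为准
ALTER DATABASE POWER WAL_FSYNC_PERIOD 6000;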
@@ -22,7 +22,7 @@ TDengine Enterprise 配备了一个强大的可视化数据管理工具—taosEx
| Aveva Historian | AVEVA Historian 2020 RS SP1 | 工业大数据分析软件,前身为 Wonderware Historian,专为工业环境设计,用于存储、管理和分析来自各种工业设备、传感器的实时和历史数据 |
| OPC DA | Matrikon OPC version: 1.7.2.7433 | OPC 是 Open Platform Communications 的缩写,是一种开放式、标准化的通信协议,用于不同厂商的自动化设备之间进行数据交换。它最初由微软公司开发,旨在解决工业控制领域中不同设备之间互操作性差的问题;OPC 协议最初于 1996 年发布,当时称为 OPC DA (Data Access),主要用于实时数据采集和控制。 |
| OPC UA | KeepWare KEPServerEx 6.5 | 2006 年,OPC 基金会发布了 OPC UA (Unified Architecture) 标准,它是一种基于服务的面向对象的协议,具有更高的灵活性和可扩展性,已成为 OPC 协议的主流版本 |
| MQTT | emqx: 3.0.0 到 5.7.1<br> hivemq: 4.0.0 到 4.31.0<br> mosquitto: 1.4.4 到 2.0.18 | Message Queuing Telemetry Transport 的缩写,一种基于发布/订阅模式的轻量级通讯协议,专为低开销、低带宽占用的即时通讯设计,广泛适用于物联网、小型设备、移动应用等领域。 |
| MQTT | emqx: 3.0.0 到 5.7.1<br/> hivemq: 4.0.0 到 4.31.0<br/> mosquitto: 1.4.4 到 2.0.18 | Message Queuing Telemetry Transport 的缩写,一种基于发布/订阅模式的轻量级通讯协议,专为低开销、低带宽占用的即时通讯设计,广泛适用于物联网、小型设备、移动应用等领域。 |
| Kafka | 2.11 ~ 3.8.0 | 由 Apache 软件基金会开发的一个开源流处理平台,主要用于处理实时数据,并提供一个统一、高通量、低延迟的消息系统。它具备高速度、可伸缩性、持久性和分布式设计等特点,使得它能够在每秒处理数十万次的读写操作,支持上千个客户端,同时保持数据的可靠性和可用性。 |
| InfluxDB | 1.7、1.8、2.0-2.7 | InfluxDB 是一种流行的开源时间序列数据库,它针对处理大量时间序列数据进行了优化。|
| OpenTSDB | 2.4.1 | 基于 HBase 的分布式、可伸缩的时序数据库。它主要用于存储、索引和提供从大规模集群(包括网络设备、操作系统、应用程序等)中收集的指标数据,使这些数据更易于访问和图形化展示。 |
@@ -368,6 +368,18 @@ spec:
labels:
app: "tdengine"
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- tdengine
topologyKey: kubernetes.io/hostname
containers:
- name: "tdengine"
image: "tdengine/tdengine:3.2.3.0"
@@ -33,7 +33,7 @@ taosd 命令行参数如下
| secondEp | taosd 启动时,如果 firstEp 连接不上,尝试连接集群中第二个 dnode 的 endpoint,缺省值:无 |
| fqdn | 启动 taosd 后所监听的服务地址,缺省值:所在服务器上配置的第一个 hostname |
| serverPort | 启动 taosd 后所监听的端口,缺省值:6030 |
| numOfRpcSessions | 允许一个客户端能创建的最大连接数,取值范围 100-100000,缺省值:30000 |
| numOfRpcSessions | 允许一个 dnode 能发起的最大连接数,取值范围 100-100000,缺省值:30000 |
| timeToGetAvailableConn | 获得可用连接的最长等待时间,取值范围 10-50000000,单位为毫秒,缺省值:500000 |
### 监控相关
@@ -458,3 +458,4 @@ TDengine 的日志文件主要包括普通日志和慢日志两种类型。
3. 多个客户端的日志存储在相应日志路径下的同一个 taosSlowLog.yyyy.mm.dd 文件里。
4. 慢日志文件不自动删除,不压缩。
5. 使用和普通日志文件相同的三个参数 logDir、minimalLogDirGB、asyncLog。另外两个参数 numOfLogLines、logKeepDays 不适用于慢日志。
@@ -212,11 +212,11 @@ TDengine 对于修改数据提供两种处理方式,由 IGNORE UPDATE 选项
```sql
[field1_name,...]
```
用来指定stb_name的列与subquery输出结果的对应关系。如果stb_name的列与subquery输出结果的位置、数量全部匹配,则不需要显示指定对应关系。如果stb_name的列与subquery输出结果的数据类型不匹配,会把subquery输出结果的类型转换成对应的stb_name的列的类型。
在本页文档顶部的 [field1_name,...] 是用来指定 stb_name 的列与 subquery 输出结果的对应关系的。如果 stb_name 的列与 subquery 输出结果的位置、数量全部匹配,则不需要显示指定对应关系。如果 stb_name 的列与 subquery 输出结果的数据类型不匹配,会把 subquery 输出结果的类型转换成对应的 stb_name 的列的类型。
对于已经存在的超级表,检查列的schema信息
1. 检查列的schema信息是否匹配,对于不匹配的,则自动进行类型转换,当前只有数据长度大于4096byte时才报错,其余场景都能进行类型转换。
2. 检查列的个数是否相同,如果不同,需要显示的指定超级表与subquery的列的对应关系,否则报错;如果相同,可以指定对应关系,也可以不指定,不指定则按位置顺序对应。
1. 检查列的 schema 信息是否匹配,对于不匹配的,则自动进行类型转换,当前只有数据长度大于 4096byte 时才报错,其余场景都能进行类型转换。
2. 检查列的个数是否相同,如果不同,需要显示的指定超级表与 subquery 的列的对应关系,否则报错;如果相同,可以指定对应关系,也可以不指定,不指定则按位置顺序对应。
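下面是一个简化的示意(流名、超级表名与列名均为假设),展示如何通过列名列表显式指定 subquery 输出与目标超级表列的对应关系:
```sql
-- 示意:subquery 输出的 _wstart、AVG(current) 分别写入 result_stb 的 ts、avg_current 列(名称均为假设)
CREATE STREAM avg_current_stream INTO result_stb (ts, avg_current)
  AS SELECT _wstart, AVG(current) FROM meters PARTITION BY tbname INTERVAL(1m);
```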
## 自定义TAG
@@ -6,15 +6,28 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
## TDengine 服务端支持的平台列表
| | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 以上** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ------------------ | ------------ | ----------------- | ---------------- | --------- |
| X64 | ●/E | ●/E | ● | ● | ●/E | ●/E | ●/E | ● |
| 树莓派 ARM64 | | | ● | | | | | |
| 华为云 ARM64 | | | | ● | | | | |
| M1 | | | | | | | | ● |
| | **版本** | **X64 64bit** | **ARM64** |
| ----------------------|----------------| ------------- | --------- |
| **CentOS** | **7.9 以上** | ● | ● |
| **Ubuntu** | **18 以上** | ● | ● |
| **RedHat** | **RHEL 7 以上** | ● | ● |
| **Debian** | **6.0 以上** | ● | ● |
| **FreeBSD** | **12 以上** | ● | ● |
| **OpenSUSE** | **全部版本** | ● | ● |
| **SUSE Linux** | **11 以上** | ● | ● |
| **Fedora** | **21 以上** | ● | ● |
| **Windows Server** | **2016 以上** | ●/E | |
| **Windows** | **10/11** | ●/E | |
| **银河麒麟** | **V10 以上** | ●/E | ●/E |
| **中标麒麟** | **V7.0 以上** | ●/E | ●/E |
| **统信 UOS** | **V20 以上** | ●/E | |
| **凝思磐石** | **V8.0 以上** | ●/E | |
| **华为欧拉 openEuler** | **V20.03 以上** | ●/E | |
| **龙蜥 Anolis OS** | **V8.6 以上** | ●/E | |
| **macOS** | **11.0 以上** | | ● |
注:1) ● 表示经过官方测试验证, ○ 表示非官方测试验证,E 表示仅企业版支持。
2) 社区版仅支持主流操作系统的较新版本,包括 Ubuntu 18+/CentOS 7+/RedHat/Debian/CoreOS/FreeBSD/OpenSUSE/SUSE Linux/Fedora/macOS 等。如果有其他操作系统及版本的需求,请联系企业版支持。
2) 社区版仅支持主流操作系统的较新版本,包括 Ubuntu 18+/CentOS 7+/CentOS Stream/RedHat/Debian/CoreOS/FreeBSD/OpenSUSE/SUSE Linux/Fedora/macOS 等。如果有其他操作系统及版本的需求,请联系企业版支持。
## TDengine 客户端和连接器支持的平台列表
@@ -22,16 +35,16 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
对照矩阵如下:
| **CPU** | **X64 64bit** | **X64 64bit** | **ARM64** | **X64 64bit** | **ARM64** |
| ----------- | ------------- | ------------- | --------- | ------------- | --------- |
| **OS** | **Linux** | **Win64** | **Linux** | **macOS** | **macOS** |
| **CPU** | **X64 64bit** | **X64 64bit** | **X64 64bit** | **ARM64** | **ARM64** |
| ----------- | ------------- | ------------- | ------------- | --------- | --------- |
| **OS** | **Linux** | **Win64** | **macOS** | **Linux** | **macOS** |
| **C/C++** | ● | ● | ● | ● | ● |
| **JDBC** | ● | ● | ● | ● | ● |
| **Python** | ● | ● | ● | ● | ● |
| **Go** | ● | ● | ● | ● | ● |
| **NodeJs** | ● | ● | ● | ● | ● |
| **C#** | ● | ● | ○ | ○ | ○ |
| **Rust** | ● | ● | ○ | ● | ● |
| **C#** | ● | ● | ○ | ● | ○ |
| **Rust** | ● | ● | ● | ○ | ● |
| **RESTful** | ● | ● | ● | ● | ● |
注:● 表示官方测试验证通过,○ 表示非官方测试验证通过,-- 表示未经验证。
@@ -178,7 +178,7 @@ TDengine 集群可以容纳单个、多个甚至几千个数据节点。应用
TDengine 存储的数据包括采集的时序数据以及库、表相关的元数据、标签数据等,这些数据具体分为三部分:
- 时序数据:TDengine 的核心存储对象,存放于 vnode 里,由 data、head 和 last 三个文件组成,数据量大,查询量取决于应用场景。允许乱序写入,但暂时不支持删除操作,并且仅在 update 参数设置为 1 时允许更新操作。通过采用一个采集点一张表的模型,一个时间段的数据是连续存储,对单张表的写入是简单的追加操作,一次读,可以读到多条记录,这样保证对单个采集点的插入和查询操作,性能达到最优。
- 时序数据:时序数据是 TDengine 的核心存储对象,它们被存储在 vnode 中。时序数据由 data、head、sma 和 stt 4 类文件组成,这些文件共同构成了时序数据的完整存储结构。由于时序数据的特点是数据量大且查询需求取决于具体应用场景,因此 TDengine 采用了“一个数据采集点一张表”的模型来优化存储和查询性能。在这种模型下,一个时间段内的数据是连续存储的,对单张表的写入是简单的追加操作,一次读取可以获取多条记录。这种设计确保了单个数据采集点的写入和查询操作都能达到最优性能。
- 数据表元数据:包含标签信息和 Table Schema 信息,存放于 vnode 里的 meta 文件,支持增删改查四个标准操作。数据量很大,有 N 张表,就有 N 条记录,因此采用 LRU 存储,支持标签数据的索引。TDengine 支持多核多线程并发查询。只要计算内存足够,元数据全内存存储,千万级别规模的标签数据过滤结果能毫秒级返回。在内存资源不足的情况下,仍然可以支持数千万张表的快速查询。
- 数据库元数据:存放于 mnode 里,包含系统节点、用户、DB、STable Schema 等信息,支持增删改查四个标准操作。这部分数据的量不大,可以全内存保存,而且由于客户端有缓存,查询量也不大。因此目前的设计虽是集中式存储管理,但不会构成性能瓶颈。
@ -92,6 +92,26 @@ extern "C" {
|
|||
} \
|
||||
}
|
||||
|
||||
#define SML_CHECK_CODE(CMD) \
|
||||
code = (CMD); \
|
||||
if (TSDB_CODE_SUCCESS != code) { \
|
||||
lino = __LINE__; \
|
||||
goto END; \
|
||||
}
|
||||
|
||||
#define SML_CHECK_NULL(CMD) \
|
||||
if (NULL == (CMD)) { \
|
||||
code = terrno; \
|
||||
lino = __LINE__; \
|
||||
goto END; \
|
||||
}
|
||||
|
||||
#define RETURN \
|
||||
if (code != 0){ \
|
||||
uError("%s failed code:%d line:%d", __FUNCTION__ , code, lino); \
|
||||
} \
|
||||
return code;
|
||||
|
||||
typedef enum {
|
||||
SCHEMA_ACTION_NULL,
|
||||
SCHEMA_ACTION_CREATE_STABLE,
|
||||
|
@ -191,7 +211,6 @@ typedef struct {
|
|||
cJSON *root; // for parse json
|
||||
int8_t offset[OTD_JSON_FIELDS_NUM];
|
||||
SSmlLineInfo *lines; // element is SSmlLineInfo
|
||||
bool parseJsonByLib;
|
||||
SArray *tagJsonArray;
|
||||
SArray *valueJsonArray;
|
||||
|
||||
|
@ -211,13 +230,8 @@ typedef struct {
|
|||
extern int64_t smlFactorNS[];
|
||||
extern int64_t smlFactorS[];
|
||||
|
||||
typedef int32_t (*_equal_fn_sml)(const void *, const void *);
|
||||
|
||||
int32_t smlBuildSmlInfo(TAOS *taos, SSmlHandle **handle);
|
||||
void smlDestroyInfo(SSmlHandle *info);
|
||||
int smlJsonParseObjFirst(char **start, SSmlLineInfo *element, int8_t *offset);
|
||||
int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset);
|
||||
bool smlParseNumberOld(SSmlKv *kvVal, SSmlMsgBuf *msg);
|
||||
void smlBuildInvalidDataMsg(SSmlMsgBuf *pBuf, const char *msg1, const char *msg2);
|
||||
int32_t smlParseNumber(SSmlKv *kvVal, SSmlMsgBuf *msg);
|
||||
int64_t smlGetTimeValue(const char *value, int32_t len, uint8_t fromPrecision, uint8_t toPrecision);
|
||||
|
@ -237,7 +251,7 @@ void smlDestroyTableInfo(void *para);
|
|||
void freeSSmlKv(void* data);
|
||||
int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLineInfo *elements);
|
||||
int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLineInfo *elements);
|
||||
int32_t smlParseJSON(SSmlHandle *info, char *payload);
|
||||
int32_t smlParseJSONExt(SSmlHandle *info, char *payload);
|
||||
|
||||
int32_t smlBuildSuperTableInfo(SSmlHandle *info, SSmlLineInfo *currElement, SSmlSTableMeta** sMeta);
|
||||
bool isSmlTagAligned(SSmlHandle *info, int cnt, SSmlKv *kv);
|
||||
|
@ -246,7 +260,8 @@ int32_t smlProcessChildTable(SSmlHandle *info, SSmlLineInfo *elements);
|
|||
int32_t smlProcessSuperTable(SSmlHandle *info, SSmlLineInfo *elements);
|
||||
int32_t smlJoinMeasureTag(SSmlLineInfo *elements);
|
||||
void smlBuildTsKv(SSmlKv *kv, int64_t ts);
|
||||
int32_t smlParseEndTelnetJson(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs, SSmlKv *kv);
|
||||
int32_t smlParseEndTelnetJsonFormat(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs, SSmlKv *kv);
|
||||
int32_t smlParseEndTelnetJsonUnFormat(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs, SSmlKv *kv);
|
||||
int32_t smlParseEndLine(SSmlHandle *info, SSmlLineInfo *elements, SSmlKv *kvTs);
|
||||
|
||||
static inline bool smlDoubleToInt64OverFlow(double num) {
|
||||
|
|
File diff suppressed because it is too large
@ -21,259 +21,10 @@
|
|||
|
||||
#define OTD_JSON_SUB_FIELDS_NUM 2
|
||||
|
||||
#define JUMP_JSON_SPACE(start) \
|
||||
while (*(start)) { \
|
||||
if (unlikely(*(start) > 32)) \
|
||||
break; \
|
||||
else \
|
||||
(start)++; \
|
||||
}
|
||||
|
||||
static int32_t smlJsonGetObj(char **payload) {
|
||||
int leftBracketCnt = 0;
|
||||
bool isInQuote = false;
|
||||
while (**payload) {
|
||||
if (**payload == '"' && *((*payload) - 1) != '\\') {
|
||||
isInQuote = !isInQuote;
|
||||
} else if (!isInQuote && unlikely(**payload == '{')) {
|
||||
leftBracketCnt++;
|
||||
(*payload)++;
|
||||
continue;
|
||||
} else if (!isInQuote && unlikely(**payload == '}')) {
|
||||
leftBracketCnt--;
|
||||
(*payload)++;
|
||||
if (leftBracketCnt == 0) {
|
||||
return 0;
|
||||
} else if (leftBracketCnt < 0) {
|
||||
return -1;
|
||||
}
|
||||
continue;
|
||||
}
|
||||
(*payload)++;
|
||||
}
|
||||
return -1;
|
||||
}
|
||||
|
||||
int smlJsonParseObjFirst(char **start, SSmlLineInfo *element, int8_t *offset) {
|
||||
int index = 0;
|
||||
while (*(*start)) {
|
||||
if ((*start)[0] != '"') {
|
||||
(*start)++;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (unlikely(index >= OTD_JSON_FIELDS_NUM)) {
|
||||
uError("index >= %d, %s", OTD_JSON_FIELDS_NUM, *start);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
|
||||
char *sTmp = *start;
|
||||
if ((*start)[1] == 'm' && (*start)[2] == 'e' && (*start)[3] == 't' && (*start)[4] == 'r' && (*start)[5] == 'i' &&
|
||||
(*start)[6] == 'c' && (*start)[7] == '"') {
|
||||
(*start) += 8;
|
||||
bool isInQuote = false;
|
||||
while (*(*start)) {
|
||||
if (unlikely(!isInQuote && *(*start) == '"')) {
|
||||
(*start)++;
|
||||
offset[index++] = *start - sTmp;
|
||||
element->measure = (*start);
|
||||
isInQuote = true;
|
||||
continue;
|
||||
}
|
||||
if (unlikely(isInQuote && *(*start) == '"')) {
|
||||
element->measureLen = (*start) - element->measure;
|
||||
(*start)++;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
} else if ((*start)[1] == 't' && (*start)[2] == 'i' && (*start)[3] == 'm' && (*start)[4] == 'e' &&
|
||||
(*start)[5] == 's' && (*start)[6] == 't' && (*start)[7] == 'a' && (*start)[8] == 'm' &&
|
||||
(*start)[9] == 'p' && (*start)[10] == '"') {
|
||||
(*start) += 11;
|
||||
bool hasColon = false;
|
||||
while (*(*start)) {
|
||||
if (unlikely(!hasColon && *(*start) == ':')) {
|
||||
(*start)++;
|
||||
JUMP_JSON_SPACE((*start))
|
||||
offset[index++] = *start - sTmp;
|
||||
element->timestamp = (*start);
|
||||
if (*(*start) == '{') {
|
||||
char *tmp = *start;
|
||||
int32_t code = smlJsonGetObj(&tmp);
|
||||
if (code == 0) {
|
||||
element->timestampLen = tmp - (*start);
|
||||
*start = tmp;
|
||||
}
|
||||
break;
|
||||
}
|
||||
hasColon = true;
|
||||
continue;
|
||||
}
|
||||
if (unlikely(hasColon && (*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32))) {
|
||||
element->timestampLen = (*start) - element->timestamp;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
} else if ((*start)[1] == 'v' && (*start)[2] == 'a' && (*start)[3] == 'l' && (*start)[4] == 'u' &&
|
||||
(*start)[5] == 'e' && (*start)[6] == '"') {
|
||||
(*start) += 7;
|
||||
|
||||
bool hasColon = false;
|
||||
while (*(*start)) {
|
||||
if (unlikely(!hasColon && *(*start) == ':')) {
|
||||
(*start)++;
|
||||
JUMP_JSON_SPACE((*start))
|
||||
offset[index++] = *start - sTmp;
|
||||
element->cols = (*start);
|
||||
if (*(*start) == '{') {
|
||||
char *tmp = *start;
|
||||
int32_t code = smlJsonGetObj(&tmp);
|
||||
if (code == 0) {
|
||||
element->colsLen = tmp - (*start);
|
||||
*start = tmp;
|
||||
}
|
||||
break;
|
||||
}
|
||||
hasColon = true;
|
||||
continue;
|
||||
}
|
||||
if (unlikely(hasColon && (*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32))) {
|
||||
element->colsLen = (*start) - element->cols;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
} else if ((*start)[1] == 't' && (*start)[2] == 'a' && (*start)[3] == 'g' && (*start)[4] == 's' &&
|
||||
(*start)[5] == '"') {
|
||||
(*start) += 6;
|
||||
|
||||
while (*(*start)) {
|
||||
if (unlikely(*(*start) == ':')) {
|
||||
(*start)++;
|
||||
JUMP_JSON_SPACE((*start))
|
||||
offset[index++] = *start - sTmp;
|
||||
element->tags = (*start);
|
||||
char *tmp = *start;
|
||||
int32_t code = smlJsonGetObj(&tmp);
|
||||
if (code == 0) {
|
||||
element->tagsLen = tmp - (*start);
|
||||
*start = tmp;
|
||||
}
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
}
|
||||
if (*(*start) == '\0') {
|
||||
break;
|
||||
}
|
||||
if (*(*start) == '}') {
|
||||
(*start)++;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
|
||||
if (unlikely(index != OTD_JSON_FIELDS_NUM) || element->tags == NULL || element->cols == NULL ||
|
||||
element->measure == NULL || element->timestamp == NULL) {
|
||||
uError("elements != %d or element parse null", OTD_JSON_FIELDS_NUM);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
|
||||
int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset) {
|
||||
int index = 0;
|
||||
while (*(*start)) {
|
||||
if ((*start)[0] != '"') {
|
||||
(*start)++;
|
||||
continue;
|
||||
}
|
||||
|
||||
if (unlikely(index >= OTD_JSON_FIELDS_NUM)) {
|
||||
uError("index >= %d, %s", OTD_JSON_FIELDS_NUM, *start);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
|
||||
if ((*start)[1] == 'm') {
|
||||
(*start) += offset[index++];
|
||||
element->measure = *start;
|
||||
while (*(*start)) {
|
||||
if (unlikely(*(*start) == '"')) {
|
||||
element->measureLen = (*start) - element->measure;
|
||||
(*start)++;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
} else if ((*start)[1] == 't' && (*start)[2] == 'i') {
|
||||
(*start) += offset[index++];
|
||||
element->timestamp = *start;
|
||||
if (*(*start) == '{') {
|
||||
char *tmp = *start;
|
||||
int32_t code = smlJsonGetObj(&tmp);
|
||||
if (code == 0) {
|
||||
element->timestampLen = tmp - (*start);
|
||||
*start = tmp;
|
||||
}
|
||||
} else {
|
||||
while (*(*start)) {
|
||||
if (unlikely(*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32)) {
|
||||
element->timestampLen = (*start) - element->timestamp;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
}
|
||||
} else if ((*start)[1] == 'v') {
|
||||
(*start) += offset[index++];
|
||||
element->cols = *start;
|
||||
if (*(*start) == '{') {
|
||||
char *tmp = *start;
|
||||
int32_t code = smlJsonGetObj(&tmp);
|
||||
if (code == 0) {
|
||||
element->colsLen = tmp - (*start);
|
||||
*start = tmp;
|
||||
}
|
||||
} else {
|
||||
while (*(*start)) {
|
||||
if (unlikely(*(*start) == ',' || *(*start) == '}' || (*(*start)) <= 32)) {
|
||||
element->colsLen = (*start) - element->cols;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
}
|
||||
} else if ((*start)[1] == 't' && (*start)[2] == 'a') {
|
||||
(*start) += offset[index++];
|
||||
element->tags = (*start);
|
||||
char *tmp = *start;
|
||||
int32_t code = smlJsonGetObj(&tmp);
|
||||
if (code == 0) {
|
||||
element->tagsLen = tmp - (*start);
|
||||
*start = tmp;
|
||||
}
|
||||
}
|
||||
if (*(*start) == '}') {
|
||||
(*start)++;
|
||||
break;
|
||||
}
|
||||
(*start)++;
|
||||
}
|
||||
|
||||
if (unlikely(index != 0 && index != OTD_JSON_FIELDS_NUM)) {
|
||||
uError("elements != %d", OTD_JSON_FIELDS_NUM);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static inline int32_t smlParseMetricFromJSON(SSmlHandle *info, cJSON *metric, SSmlLineInfo *elements) {
|
||||
elements->measureLen = strlen(metric->valuestring);
|
||||
if (IS_INVALID_TABLE_LEN(elements->measureLen)) {
|
||||
uError("OTD:0x%" PRIx64 " Metric length is 0 or large than 192", info->id);
|
||||
uError("SML:0x%" PRIx64 " Metric length is 0 or large than 192", info->id);
|
||||
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
|
||||
}
|
||||
|
||||
|
@ -293,7 +44,7 @@ static int32_t smlGetJsonElements(cJSON *root, cJSON ***marks) {
|
|||
child = child->next;
|
||||
}
|
||||
if (*marks[i] == NULL) {
|
||||
uError("smlGetJsonElements error, not find mark:%d:%s", i, jsonName[i]);
|
||||
uError("SML %s error, not find mark:%d:%s", __FUNCTION__, i, jsonName[i]);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
}
|
||||
|
@ -302,7 +53,7 @@ static int32_t smlGetJsonElements(cJSON *root, cJSON ***marks) {
|
|||
|
||||
static int32_t smlConvertJSONBool(SSmlKv *pVal, char *typeStr, cJSON *value) {
|
||||
if (strcasecmp(typeStr, "bool") != 0) {
|
||||
uError("OTD:invalid type(%s) for JSON Bool", typeStr);
|
||||
uError("SML:invalid type(%s) for JSON Bool", typeStr);
|
||||
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
|
||||
}
|
||||
pVal->type = TSDB_DATA_TYPE_BOOL;
|
||||
|
@ -316,7 +67,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
|
|||
// tinyint
|
||||
if (strcasecmp(typeStr, "i8") == 0 || strcasecmp(typeStr, "tinyint") == 0) {
|
||||
if (!IS_VALID_TINYINT(value->valuedouble)) {
|
||||
uError("OTD:JSON value(%f) cannot fit in type(tinyint)", value->valuedouble);
|
||||
uError("SML:JSON value(%f) cannot fit in type(tinyint)", value->valuedouble);
|
||||
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
|
||||
}
|
||||
pVal->type = TSDB_DATA_TYPE_TINYINT;
|
||||
|
@ -327,7 +78,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
|
|||
// smallint
|
||||
if (strcasecmp(typeStr, "i16") == 0 || strcasecmp(typeStr, "smallint") == 0) {
|
||||
if (!IS_VALID_SMALLINT(value->valuedouble)) {
|
||||
uError("OTD:JSON value(%f) cannot fit in type(smallint)", value->valuedouble);
|
||||
uError("SML:JSON value(%f) cannot fit in type(smallint)", value->valuedouble);
|
||||
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
|
||||
}
|
||||
pVal->type = TSDB_DATA_TYPE_SMALLINT;
|
||||
|
@ -338,7 +89,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
|
|||
// int
|
||||
if (strcasecmp(typeStr, "i32") == 0 || strcasecmp(typeStr, "int") == 0) {
|
||||
if (!IS_VALID_INT(value->valuedouble)) {
|
||||
uError("OTD:JSON value(%f) cannot fit in type(int)", value->valuedouble);
|
||||
uError("SML:JSON value(%f) cannot fit in type(int)", value->valuedouble);
|
||||
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
|
||||
}
|
||||
pVal->type = TSDB_DATA_TYPE_INT;
|
||||
|
@ -362,7 +113,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
|
|||
// float
|
||||
if (strcasecmp(typeStr, "f32") == 0 || strcasecmp(typeStr, "float") == 0) {
|
||||
if (!IS_VALID_FLOAT(value->valuedouble)) {
|
||||
uError("OTD:JSON value(%f) cannot fit in type(float)", value->valuedouble);
|
||||
uError("SML:JSON value(%f) cannot fit in type(float)", value->valuedouble);
|
||||
return TSDB_CODE_TSC_VALUE_OUT_OF_RANGE;
|
||||
}
|
||||
pVal->type = TSDB_DATA_TYPE_FLOAT;
|
||||
|
@ -379,7 +130,7 @@ static int32_t smlConvertJSONNumber(SSmlKv *pVal, char *typeStr, cJSON *value) {
|
|||
}
|
||||
|
||||
// if reach here means type is unsupported
|
||||
uError("OTD:invalid type(%s) for JSON Number", typeStr);
|
||||
uError("SML:invalid type(%s) for JSON Number", typeStr);
|
||||
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
|
||||
}
|
||||
|
||||
|
@ -391,7 +142,7 @@ static int32_t smlConvertJSONString(SSmlKv *pVal, char *typeStr, cJSON *value) {
|
|||
} else if (strcasecmp(typeStr, "nchar") == 0) {
|
||||
pVal->type = TSDB_DATA_TYPE_NCHAR;
|
||||
} else {
|
||||
uError("OTD:invalid type(%s) for JSON String", typeStr);
|
||||
uError("SML:invalid type(%s) for JSON String", typeStr);
|
||||
return TSDB_CODE_TSC_INVALID_JSON_TYPE;
|
||||
}
|
||||
pVal->length = strlen(value->valuestring);
|
||||
|
@ -474,7 +225,7 @@ static int32_t smlParseValueFromJSON(cJSON *root, SSmlKv *kv) {
|
|||
case cJSON_String: {
|
||||
int32_t ret = smlConvertJSONString(kv, "binary", root);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
uError("OTD:Failed to parse binary value from JSON Obj");
|
||||
uError("SML:Failed to parse binary value from JSON Obj");
|
||||
return ret;
|
||||
}
|
||||
break;
|
||||
|
@ -482,7 +233,7 @@ static int32_t smlParseValueFromJSON(cJSON *root, SSmlKv *kv) {
|
|||
case cJSON_Object: {
|
||||
int32_t ret = smlParseValueFromJSONObj(root, kv);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
uError("OTD:Failed to parse value from JSON Obj");
|
||||
uError("SML:Failed to parse value from JSON Obj");
|
||||
return ret;
|
||||
}
|
||||
break;
|
||||
|
@ -511,7 +262,7 @@ static int32_t smlProcessTagJson(SSmlHandle *info, cJSON *tags){
|
|||
}
|
||||
size_t keyLen = strlen(tag->string);
|
||||
if (unlikely(IS_INVALID_COL_LEN(keyLen))) {
|
||||
uError("OTD:Tag key length is 0 or too large than 64");
|
||||
uError("SML:Tag key length is 0 or too large than 64");
|
||||
return TSDB_CODE_TSC_INVALID_COLUMN_LENGTH;
|
||||
}
|
||||
|
||||
|
@ -539,28 +290,24 @@ static int32_t smlProcessTagJson(SSmlHandle *info, cJSON *tags){
|
|||
}
|
||||
|
||||
static int32_t smlParseTagsFromJSON(SSmlHandle *info, cJSON *tags, SSmlLineInfo *elements) {
|
||||
int32_t ret = 0;
|
||||
if (is_same_child_table_telnet(elements, &info->preLine) == 0) {
|
||||
elements->measureTag = info->preLine.measureTag;
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
int32_t code = 0;
|
||||
int32_t lino = 0;
|
||||
if(info->dataFormat){
|
||||
ret = smlProcessSuperTable(info, elements);
|
||||
if(ret != 0){
|
||||
if(info->reRun){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
ret = smlProcessTagJson(info, tags);
|
||||
if(ret != 0){
|
||||
if(info->reRun){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
ret = smlJoinMeasureTag(elements);
|
||||
if(ret != 0){
|
||||
return ret;
|
||||
SML_CHECK_CODE(smlProcessSuperTable(info, elements));
|
||||
}
|
||||
SML_CHECK_CODE(smlProcessTagJson(info, tags));
|
||||
SML_CHECK_CODE(smlJoinMeasureTag(elements));
|
||||
return smlProcessChildTable(info, elements);
|
||||
|
||||
END:
|
||||
if(info->reRun){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
RETURN
|
||||
}
|
||||
|
||||
static int64_t smlParseTSFromJSONObj(SSmlHandle *info, cJSON *root, int32_t toPrecision) {
|
||||
|
@ -678,7 +425,8 @@ static int64_t smlParseTSFromJSON(SSmlHandle *info, cJSON *timestamp) {
|
|||
}
|
||||
|
||||
static int32_t smlParseJSONStringExt(SSmlHandle *info, cJSON *root, SSmlLineInfo *elements) {
|
||||
int32_t ret = TSDB_CODE_SUCCESS;
|
||||
int32_t code = TSDB_CODE_SUCCESS;
|
||||
int32_t lino = 0;
|
||||
|
||||
cJSON *metricJson = NULL;
|
||||
cJSON *tsJson = NULL;
|
||||
|
@ -688,57 +436,27 @@ static int32_t smlParseJSONStringExt(SSmlHandle *info, cJSON *root, SSmlLineInfo
|
|||
int32_t size = cJSON_GetArraySize(root);
|
||||
// outmost json fields has to be exactly 4
|
||||
if (size != OTD_JSON_FIELDS_NUM) {
|
||||
uError("OTD:0x%" PRIx64 " Invalid number of JSON fields in data point %d", info->id, size);
|
||||
uError("SML:0x%" PRIx64 " Invalid number of JSON fields in data point %d", info->id, size);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
|
||||
cJSON **marks[OTD_JSON_FIELDS_NUM] = {&metricJson, &tsJson, &valueJson, &tagsJson};
|
||||
ret = smlGetJsonElements(root, marks);
|
||||
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
|
||||
return ret;
|
||||
}
|
||||
|
||||
SML_CHECK_CODE(smlGetJsonElements(root, marks));
|
||||
// Parse metric
|
||||
ret = smlParseMetricFromJSON(info, metricJson, elements);
|
||||
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
|
||||
uError("OTD:0x%" PRIx64 " Unable to parse metric from JSON payload", info->id);
|
||||
return ret;
|
||||
}
|
||||
|
||||
SML_CHECK_CODE(smlParseMetricFromJSON(info, metricJson, elements));
|
||||
// Parse metric value
|
||||
SSmlKv kv = {.key = VALUE, .keyLen = VALUE_LEN};
|
||||
ret = smlParseValueFromJSON(valueJson, &kv);
|
||||
if (unlikely(ret)) {
|
||||
uError("OTD:0x%" PRIx64 " Unable to parse metric value from JSON payload", info->id);
|
||||
return ret;
|
||||
}
|
||||
SML_CHECK_CODE(smlParseValueFromJSON(valueJson, &kv));
|
||||
|
||||
// Parse tags
|
||||
bool needFree = info->dataFormat;
|
||||
elements->tags = cJSON_PrintUnformatted(tagsJson);
|
||||
if (elements->tags == NULL){
|
||||
return TSDB_CODE_OUT_OF_MEMORY;
|
||||
}
|
||||
elements->tagsLen = strlen(elements->tags);
|
||||
if (is_same_child_table_telnet(elements, &info->preLine) != 0) {
|
||||
ret = smlParseTagsFromJSON(info, tagsJson, elements);
|
||||
if (unlikely(ret)) {
|
||||
uError("OTD:0x%" PRIx64 " Unable to parse tags from JSON payload", info->id);
|
||||
taosMemoryFree(elements->tags);
|
||||
elements->tags = NULL;
|
||||
return ret;
|
||||
}
|
||||
} else {
|
||||
elements->measureTag = info->preLine.measureTag;
|
||||
}
|
||||
SML_CHECK_NULL(elements->tags);
|
||||
|
||||
if (needFree) {
|
||||
taosMemoryFree(elements->tags);
|
||||
elements->tags = NULL;
|
||||
}
|
||||
elements->tagsLen = strlen(elements->tags);
|
||||
SML_CHECK_CODE(smlParseTagsFromJSON(info, tagsJson, elements));
|
||||
|
||||
if (unlikely(info->reRun)) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
goto END;
|
||||
}
|
||||
|
||||
// Parse timestamp
|
||||
|
@ -747,29 +465,34 @@ static int32_t smlParseJSONStringExt(SSmlHandle *info, cJSON *root, SSmlLineInfo
|
|||
if (unlikely(ts < 0)) {
|
||||
char* tmp = cJSON_PrintUnformatted(tsJson);
|
||||
if (tmp == NULL) {
|
||||
uError("cJSON_PrintUnformatted failed since %s", tstrerror(TSDB_CODE_OUT_OF_MEMORY));
|
||||
uError("OTD:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %" PRId64, info->id, info->msgBuf.buf, ts);
|
||||
uError("SML:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %" PRId64, info->id, info->msgBuf.buf, ts);
|
||||
} else {
|
||||
uError("OTD:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %s %" PRId64, info->id, info->msgBuf.buf,tmp, ts);
|
||||
uError("SML:0x%" PRIx64 " Unable to parse timestamp from JSON payload %s %s %" PRId64, info->id, info->msgBuf.buf,tmp, ts);
|
||||
taosMemoryFree(tmp);
|
||||
}
|
||||
return TSDB_CODE_INVALID_TIMESTAMP;
|
||||
SML_CHECK_CODE(TSDB_CODE_INVALID_TIMESTAMP);
|
||||
}
|
||||
SSmlKv kvTs = {0};
|
||||
smlBuildTsKv(&kvTs, ts);
|
||||
if (info->dataFormat){
|
||||
code = smlParseEndTelnetJsonFormat(info, elements, &kvTs, &kv);
|
||||
} else {
|
||||
code = smlParseEndTelnetJsonUnFormat(info, elements, &kvTs, &kv);
|
||||
}
|
||||
SML_CHECK_CODE(code);
|
||||
taosMemoryFreeClear(info->preLine.tags);
|
||||
info->preLine = *elements;
|
||||
elements->tags = NULL;
|
||||
|
||||
return smlParseEndTelnetJson(info, elements, &kvTs, &kv);
|
||||
END:
|
||||
taosMemoryFree(elements->tags);
|
||||
RETURN
|
||||
}
|
||||
|
||||
static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
|
||||
int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
|
||||
int32_t payloadNum = 0;
|
||||
int32_t ret = TSDB_CODE_SUCCESS;
|
||||
|
||||
if (unlikely(payload == NULL)) {
|
||||
uError("SML:0x%" PRIx64 " empty JSON Payload", info->id);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
|
||||
info->root = cJSON_Parse(payload);
|
||||
if (unlikely(info->root == NULL)) {
|
||||
uError("SML:0x%" PRIx64 " parse json failed:%s", info->id, payload);
|
||||
|
@ -782,27 +505,11 @@ static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
|
|||
} else if (cJSON_IsObject(info->root)) {
|
||||
payloadNum = 1;
|
||||
} else {
|
||||
uError("SML:0x%" PRIx64 " Invalid JSON Payload 3:%s", info->id, payload);
|
||||
uError("SML:0x%" PRIx64 " Invalid JSON type:%s", info->id, payload);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
|
||||
if (unlikely(info->lines != NULL)) {
|
||||
for (int i = 0; i < info->lineNum; i++) {
|
||||
taosArrayDestroyEx(info->lines[i].colArray, freeSSmlKv);
|
||||
if (info->lines[i].measureTagsLen != 0) taosMemoryFree(info->lines[i].measureTag);
|
||||
}
|
||||
taosMemoryFree(info->lines);
|
||||
info->lines = NULL;
|
||||
}
|
||||
info->lineNum = payloadNum;
|
||||
info->dataFormat = true;
|
||||
|
||||
ret = smlClearForRerun(info);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
return ret;
|
||||
}
|
||||
|
||||
info->parseJsonByLib = true;
|
||||
cJSON *head = (payloadNum == 1 && cJSON_IsObject(info->root)) ? info->root : info->root->child;
|
||||
|
||||
int cnt = 0;
|
||||
|
@ -811,6 +518,7 @@ static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
|
|||
if (info->dataFormat) {
|
||||
SSmlLineInfo element = {0};
|
||||
ret = smlParseJSONStringExt(info, dataPoint, &element);
|
||||
if (element.measureTagsLen != 0) taosMemoryFree(element.measureTag);
|
||||
} else {
|
||||
ret = smlParseJSONStringExt(info, dataPoint, info->lines + cnt);
|
||||
}
|
||||
|
@ -836,164 +544,3 @@ static int32_t smlParseJSONExt(SSmlHandle *info, char *payload) {
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
static int32_t smlParseJSONString(SSmlHandle *info, char **start, SSmlLineInfo *elements) {
|
||||
int32_t ret = TSDB_CODE_SUCCESS;
|
||||
|
||||
if (info->offset[0] == 0) {
|
||||
ret = smlJsonParseObjFirst(start, elements, info->offset);
|
||||
} else {
|
||||
ret = smlJsonParseObj(start, elements, info->offset);
|
||||
}
|
||||
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
return ret;
|
||||
}
|
||||
|
||||
if (unlikely(**start == '\0' && elements->measure == NULL)) return TSDB_CODE_SUCCESS;
|
||||
|
||||
if (unlikely(IS_INVALID_TABLE_LEN(elements->measureLen))) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "measure is empty or too large than 192", NULL);
|
||||
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
|
||||
}
|
||||
|
||||
SSmlKv kv = {.key = VALUE, .keyLen = VALUE_LEN, .value = elements->cols, .length = (size_t)elements->colsLen};
|
||||
|
||||
if (unlikely(elements->colsLen == 0)) {
|
||||
uError("SML:colsLen == 0");
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
} else if (unlikely(elements->cols[0] == '{')) {
|
||||
char tmp = elements->cols[elements->colsLen];
|
||||
elements->cols[elements->colsLen] = '\0';
|
||||
cJSON *valueJson = cJSON_Parse(elements->cols);
|
||||
if (unlikely(valueJson == NULL)) {
|
||||
uError("SML:0x%" PRIx64 " parse json cols failed:%s", info->id, elements->cols);
|
||||
elements->cols[elements->colsLen] = tmp;
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
if (taosArrayPush(info->tagJsonArray, &valueJson) == NULL){
|
||||
cJSON_Delete(valueJson);
|
||||
elements->cols[elements->colsLen] = tmp;
|
||||
return terrno;
|
||||
}
|
||||
ret = smlParseValueFromJSONObj(valueJson, &kv);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
uError("SML:0x%" PRIx64 " Failed to parse value from JSON Obj:%s", info->id, elements->cols);
|
||||
elements->cols[elements->colsLen] = tmp;
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
}
|
||||
elements->cols[elements->colsLen] = tmp;
|
||||
} else if (smlParseValue(&kv, &info->msgBuf) != TSDB_CODE_SUCCESS) {
|
||||
uError("SML:0x%" PRIx64 " cols invalidate:%s", info->id, elements->cols);
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
}
|
||||
|
||||
// Parse tags
|
||||
if (is_same_child_table_telnet(elements, &info->preLine) != 0) {
|
||||
char tmp = *(elements->tags + elements->tagsLen);
|
||||
*(elements->tags + elements->tagsLen) = 0;
|
||||
cJSON *tagsJson = cJSON_Parse(elements->tags);
|
||||
*(elements->tags + elements->tagsLen) = tmp;
|
||||
if (unlikely(tagsJson == NULL)) {
|
||||
uError("SML:0x%" PRIx64 " parse json tag failed:%s", info->id, elements->tags);
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
}
|
||||
|
||||
if (taosArrayPush(info->tagJsonArray, &tagsJson) == NULL){
|
||||
cJSON_Delete(tagsJson);
|
||||
uError("SML:0x%" PRIx64 " taosArrayPush failed", info->id);
|
||||
return terrno;
|
||||
}
|
||||
ret = smlParseTagsFromJSON(info, tagsJson, elements);
|
||||
if (unlikely(ret)) {
|
||||
uError("OTD:0x%" PRIx64 " Unable to parse tags from JSON payload", info->id);
|
||||
return ret;
|
||||
}
|
||||
} else {
|
||||
elements->measureTag = info->preLine.measureTag;
|
||||
}
|
||||
|
||||
if (unlikely(info->reRun)) {
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
// Parse timestamp
|
||||
// notice!!! put ts back to tag to ensure get meta->precision
|
||||
int64_t ts = 0;
|
||||
if (unlikely(elements->timestampLen == 0)) {
|
||||
uError("OTD:0x%" PRIx64 " elements->timestampLen == 0", info->id);
|
||||
return TSDB_CODE_INVALID_TIMESTAMP;
|
||||
} else if (elements->timestamp[0] == '{') {
|
||||
char tmp = elements->timestamp[elements->timestampLen];
|
||||
elements->timestamp[elements->timestampLen] = '\0';
|
||||
cJSON *tsJson = cJSON_Parse(elements->timestamp);
|
||||
ts = smlParseTSFromJSON(info, tsJson);
|
||||
if (unlikely(ts < 0)) {
|
||||
uError("SML:0x%" PRIx64 " Unable to parse timestamp from JSON payload:%s", info->id, elements->timestamp);
|
||||
elements->timestamp[elements->timestampLen] = tmp;
|
||||
cJSON_Delete(tsJson);
|
||||
return TSDB_CODE_INVALID_TIMESTAMP;
|
||||
}
|
||||
elements->timestamp[elements->timestampLen] = tmp;
|
||||
cJSON_Delete(tsJson);
|
||||
} else {
|
||||
ts = smlParseOpenTsdbTime(info, elements->timestamp, elements->timestampLen);
|
||||
if (unlikely(ts < 0)) {
|
||||
uError("OTD:0x%" PRIx64 " Unable to parse timestamp from JSON payload", info->id);
|
||||
return TSDB_CODE_INVALID_TIMESTAMP;
|
||||
}
|
||||
}
|
||||
SSmlKv kvTs = {0};
|
||||
smlBuildTsKv(&kvTs, ts);
|
||||
|
||||
return smlParseEndTelnetJson(info, elements, &kvTs, &kv);
|
||||
}
|
||||
|
||||
int32_t smlParseJSON(SSmlHandle *info, char *payload) {
|
||||
int32_t payloadNum = 1 << 15;
|
||||
int32_t ret = TSDB_CODE_SUCCESS;
|
||||
|
||||
uDebug("SML:0x%" PRIx64 "json:%s", info->id, payload);
|
||||
int cnt = 0;
|
||||
char *dataPointStart = payload;
|
||||
while (1) {
|
||||
if (info->dataFormat) {
|
||||
SSmlLineInfo element = {0};
|
||||
ret = smlParseJSONString(info, &dataPointStart, &element);
|
||||
if (element.measureTagsLen != 0) taosMemoryFree(element.measureTag);
|
||||
} else {
|
||||
if (cnt >= payloadNum) {
|
||||
payloadNum = payloadNum << 1;
|
||||
void *tmp = taosMemoryRealloc(info->lines, payloadNum * sizeof(SSmlLineInfo));
|
||||
if (tmp == NULL) {
|
||||
ret = terrno;
|
||||
return ret;
|
||||
}
|
||||
info->lines = (SSmlLineInfo *)tmp;
|
||||
(void)memset(info->lines + cnt, 0, (payloadNum - cnt) * sizeof(SSmlLineInfo));
|
||||
}
|
||||
ret = smlParseJSONString(info, &dataPointStart, info->lines + cnt);
|
||||
if ((info->lines + cnt)->measure == NULL) break;
|
||||
}
|
||||
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
|
||||
uError("SML:0x%" PRIx64 " Invalid JSON Payload 1:%s", info->id, payload);
|
||||
return smlParseJSONExt(info, payload);
|
||||
}
|
||||
|
||||
if (unlikely(info->reRun)) {
|
||||
cnt = 0;
|
||||
dataPointStart = payload;
|
||||
info->lineNum = payloadNum;
|
||||
ret = smlClearForRerun(info);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
return ret;
|
||||
}
|
||||
continue;
|
||||
}
|
||||
|
||||
cnt++;
|
||||
if (*dataPointStart == '\0') break;
|
||||
}
|
||||
info->lineNum = cnt;
|
||||
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
|
|
@ -63,7 +63,7 @@ static int64_t smlParseInfluxTime(SSmlHandle *info, const char *data, int32_t le
|
|||
|
||||
int64_t ts = smlGetTimeValue(data, len, fromPrecision, toPrecision);
|
||||
if (unlikely(ts == -1)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid timestamp", data);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid timestamp", data);
|
||||
return TSDB_CODE_SML_INVALID_DATA;
|
||||
}
|
||||
return ts;
|
||||
|
@ -84,7 +84,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) {
|
|||
}
|
||||
|
||||
if (pVal->value[0] == 'l' || pVal->value[0] == 'L') { // nchar
|
||||
if (pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"' && pVal->length >= 3) {
|
||||
if (pVal->length >= NCHAR_ADD_LEN && pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"') {
|
||||
pVal->type = TSDB_DATA_TYPE_NCHAR;
|
||||
pVal->length -= NCHAR_ADD_LEN;
|
||||
if (pVal->length > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) {
|
||||
|
@ -97,7 +97,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) {
|
|||
}
|
||||
|
||||
if (pVal->value[0] == 'g' || pVal->value[0] == 'G') { // geometry
|
||||
if (pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"' && pVal->length >= sizeof("POINT")+3) {
|
||||
if (pVal->length >= NCHAR_ADD_LEN && pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"') {
|
||||
int32_t code = initCtxGeomFromText();
|
||||
if (code != TSDB_CODE_SUCCESS) {
|
||||
return code;
|
||||
|
@ -124,7 +124,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) {
|
|||
}
|
||||
|
||||
if (pVal->value[0] == 'b' || pVal->value[0] == 'B') { // varbinary
|
||||
if (pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"' && pVal->length >= 3) {
|
||||
if (pVal->length >= NCHAR_ADD_LEN && pVal->value[1] == '"' && pVal->value[pVal->length - 1] == '"') {
|
||||
pVal->type = TSDB_DATA_TYPE_VARBINARY;
|
||||
if(isHex(pVal->value + NCHAR_ADD_LEN - 1, pVal->length - NCHAR_ADD_LEN)){
|
||||
if(!isValidateHex(pVal->value + NCHAR_ADD_LEN - 1, pVal->length - NCHAR_ADD_LEN)){
|
||||
|
@ -298,7 +298,7 @@ static int32_t smlProcessTagLine(SSmlHandle *info, char **sql, char *sqlEnd){
|
|||
}
|
||||
|
||||
if (info->dataFormat && !isSmlTagAligned(info, cnt, &kv)) {
|
||||
return TSDB_CODE_TSC_INVALID_JSON;
|
||||
return TSDB_CODE_SML_INVALID_DATA;
|
||||
}
|
||||
|
||||
cnt++;
|
||||
|
@ -311,31 +311,24 @@ static int32_t smlProcessTagLine(SSmlHandle *info, char **sql, char *sqlEnd){
|
|||
}
|
||||
|
||||
static int32_t smlParseTagLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlLineInfo *elements) {
|
||||
int32_t code = 0;
|
||||
int32_t lino = 0;
|
||||
bool isSameCTable = IS_SAME_CHILD_TABLE;
|
||||
if(isSameCTable){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t ret = 0;
|
||||
if(info->dataFormat){
|
||||
ret = smlProcessSuperTable(info, elements);
|
||||
if(ret != 0){
|
||||
SML_CHECK_CODE(smlProcessSuperTable(info, elements));
|
||||
}
|
||||
SML_CHECK_CODE(smlProcessTagLine(info, sql, sqlEnd));
|
||||
return smlProcessChildTable(info, elements);
|
||||
|
||||
END:
|
||||
if(info->reRun){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
ret = smlProcessTagLine(info, sql, sqlEnd);
|
||||
if(ret != 0){
|
||||
if (info->reRun){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
return smlProcessChildTable(info, elements);
|
||||
RETURN
|
||||
}
|
||||
|
||||
static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlLineInfo *currElement) {
|
||||
|
@ -353,7 +346,7 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
|
|||
const char *escapeChar = NULL;
|
||||
while (*sql < sqlEnd) {
|
||||
if (unlikely(IS_SPACE(*sql,escapeChar) || IS_COMMA(*sql,escapeChar))) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid data", *sql);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid data", *sql);
|
||||
return TSDB_CODE_SML_INVALID_DATA;
|
||||
}
|
||||
if (unlikely(IS_EQUAL(*sql,escapeChar))) {
|
||||
|
@ -370,7 +363,7 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
|
|||
}
|
||||
|
||||
if (unlikely(IS_INVALID_COL_LEN(keyLen - keyLenEscaped))) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid key or key is too long than 64", key);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid key or key is too long than 64", key);
|
||||
return TSDB_CODE_TSC_INVALID_COLUMN_LENGTH;
|
||||
}
|
||||
|
||||
|
@ -404,18 +397,18 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
|
|||
valueLen = *sql - value;
|
||||
|
||||
if (unlikely(quoteNum != 0 && quoteNum != 2)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "unbalanced quotes", value);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML line unbalanced quotes", value);
|
||||
return TSDB_CODE_SML_INVALID_DATA;
|
||||
}
|
||||
if (unlikely(valueLen == 0)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid value", value);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML line invalid value", value);
|
||||
return TSDB_CODE_SML_INVALID_DATA;
|
||||
}
|
||||
|
||||
SSmlKv kv = {.key = key, .keyLen = keyLen, .value = value, .length = valueLen};
|
||||
int32_t ret = smlParseValue(&kv, &info->msgBuf);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "smlParseValue error", value);
|
||||
uError("SML:0x%" PRIx64 " %s parse value error:%d.", info->id, __FUNCTION__, ret);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -437,11 +430,6 @@ static int32_t smlParseColLine(SSmlHandle *info, char **sql, char *sqlEnd, SSmlL
|
|||
}
|
||||
(void)memcpy(tmp, kv.value, kv.length);
|
||||
PROCESS_SLASH_IN_FIELD_VALUE(tmp, kv.length);
|
||||
if(kv.type == TSDB_DATA_TYPE_GEOMETRY) {
|
||||
uError("SML:0x%" PRIx64 " smlParseColLine error, invalid GEOMETRY type.", info->id);
|
||||
taosMemoryFree((void*)kv.value);
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
}
|
||||
if(kv.type == TSDB_DATA_TYPE_VARBINARY){
|
||||
taosMemoryFree((void*)kv.value);
|
||||
}
|
||||
|
@ -510,7 +498,7 @@ int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
|
|||
}
|
||||
elements->measureLen = sql - elements->measure;
|
||||
if (unlikely(IS_INVALID_TABLE_LEN(elements->measureLen - measureLenEscaped))) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "measure is empty or too large than 192", NULL);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML line measure is empty or too large than 192", NULL);
|
||||
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
|
||||
}
|
||||
|
||||
|
@ -557,7 +545,7 @@ int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
|
|||
|
||||
elements->colsLen = sql - elements->cols;
|
||||
if (unlikely(elements->colsLen == 0)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "cols is empty", NULL);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML line cols is empty", NULL);
|
||||
return TSDB_CODE_SML_INVALID_DATA;
|
||||
}
|
||||
|
||||
|
@ -574,7 +562,7 @@ int32_t smlParseInfluxString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
|
|||
|
||||
int64_t ts = smlParseInfluxTime(info, elements->timestamp, elements->timestampLen);
|
||||
if (unlikely(ts <= 0)) {
|
||||
uError("SML:0x%" PRIx64 " smlParseTS error:%" PRId64, info->id, ts);
|
||||
uError("SML:0x%" PRIx64 " %s error:%" PRId64, info->id, __FUNCTION__, ts);
|
||||
return TSDB_CODE_INVALID_TIMESTAMP;
|
||||
}
|
||||
|
||||
|
|
|
@ -148,31 +148,21 @@ static int32_t smlParseTelnetTags(SSmlHandle *info, char *data, char *sqlEnd, SS
|
|||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
|
||||
int32_t ret = 0;
|
||||
int32_t code = 0;
|
||||
int32_t lino = 0;
|
||||
if(info->dataFormat){
|
||||
ret = smlProcessSuperTable(info, elements);
|
||||
if(ret != 0){
|
||||
SML_CHECK_CODE(smlProcessSuperTable(info, elements));
|
||||
}
|
||||
SML_CHECK_CODE(smlProcessTagTelnet(info, data, sqlEnd));
|
||||
SML_CHECK_CODE(smlJoinMeasureTag(elements));
|
||||
|
||||
code = smlProcessChildTable(info, elements);
|
||||
|
||||
END:
|
||||
if(info->reRun){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
ret = smlProcessTagTelnet(info, data, sqlEnd);
|
||||
if(ret != 0){
|
||||
if (info->reRun){
|
||||
return TSDB_CODE_SUCCESS;
|
||||
}
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = smlJoinMeasureTag(elements);
|
||||
if(ret != 0){
|
||||
return ret;
|
||||
}
|
||||
|
||||
return smlProcessChildTable(info, elements);
|
||||
RETURN
|
||||
}
|
||||
|
||||
// format: <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
|
||||
|
@ -182,14 +172,14 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
|
|||
// parse metric
|
||||
smlParseTelnetElement(&sql, sqlEnd, &elements->measure, &elements->measureLen);
|
||||
if (unlikely((!(elements->measure) || IS_INVALID_TABLE_LEN(elements->measureLen)))) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid data", sql);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid measure", sql);
|
||||
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
|
||||
}
|
||||
|
||||
// parse timestamp
|
||||
smlParseTelnetElement(&sql, sqlEnd, &elements->timestamp, &elements->timestampLen);
|
||||
if (unlikely(!elements->timestamp || elements->timestampLen == 0)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid timestamp", sql);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid timestamp", sql);
|
||||
return TSDB_CODE_SML_INVALID_DATA;
|
||||
}
|
||||
|
||||
|
@ -199,19 +189,21 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
|
|||
}
|
||||
int64_t ts = smlParseOpenTsdbTime(info, elements->timestamp, elements->timestampLen);
|
||||
if (unlikely(ts < 0)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid timestamp", sql);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet parse timestamp failed", sql);
|
||||
return TSDB_CODE_INVALID_TIMESTAMP;
|
||||
}
|
||||
|
||||
// parse value
|
||||
smlParseTelnetElement(&sql, sqlEnd, &elements->cols, &elements->colsLen);
|
||||
if (unlikely(!elements->cols || elements->colsLen == 0)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid value", sql);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid value", sql);
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
}
|
||||
|
||||
SSmlKv kv = {.key = VALUE, .keyLen = VALUE_LEN, .value = elements->cols, .length = (size_t)elements->colsLen};
|
||||
if (smlParseValue(&kv, &info->msgBuf) != TSDB_CODE_SUCCESS) {
|
||||
int ret = smlParseValue(&kv, &info->msgBuf);
|
||||
if (ret != TSDB_CODE_SUCCESS) {
|
||||
uError("SML:0x%" PRIx64 " %s parse value error:%d.", info->id, __FUNCTION__, ret);
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
}
|
||||
|
||||
|
@ -220,11 +212,11 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
|
|||
elements->tags = sql;
|
||||
elements->tagsLen = sqlEnd - sql;
|
||||
if (unlikely(!elements->tags || elements->tagsLen == 0)) {
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "invalid value", sql);
|
||||
smlBuildInvalidDataMsg(&info->msgBuf, "SML telnet invalid tag value", sql);
|
||||
return TSDB_CODE_TSC_INVALID_VALUE;
|
||||
}
|
||||
|
||||
int ret = smlParseTelnetTags(info, sql, sqlEnd, elements);
|
||||
ret = smlParseTelnetTags(info, sql, sqlEnd, elements);
|
||||
if (unlikely(ret != TSDB_CODE_SUCCESS)) {
|
||||
return ret;
|
||||
}
|
||||
|
@ -239,5 +231,12 @@ int32_t smlParseTelnetString(SSmlHandle *info, char *sql, char *sqlEnd, SSmlLine
|
|||
kvTs.i = convertTimePrecision(kvTs.i, TSDB_TIME_PRECISION_NANO, info->currSTableMeta->tableInfo.precision);
|
||||
}
|
||||
|
||||
return smlParseEndTelnetJson(info, elements, &kvTs, &kv);
|
||||
if (info->dataFormat){
|
||||
ret = smlParseEndTelnetJsonFormat(info, elements, &kvTs, &kv);
|
||||
} else {
|
||||
ret = smlParseEndTelnetJsonUnFormat(info, elements, &kvTs, &kv);
|
||||
}
|
||||
info->preLine = *elements;
|
||||
|
||||
return ret;
|
||||
}
|
|
@ -68,6 +68,15 @@ TEST(testCase, smlParseInfluxString_Test) {
taosArrayDestroy(elements.colArray);
elements.colArray = nullptr;

// case 0 false
tmp = "st,t1=3 c3=\"";
(void)memcpy(sql, tmp, strlen(tmp) + 1);
(void)memset(&elements, 0, sizeof(SSmlLineInfo));
ret = smlParseInfluxString(info, sql, sql + strlen(sql), &elements);
ASSERT_NE(ret, 0);
taosArrayDestroy(elements.colArray);
elements.colArray = nullptr;

// case 2 false
tmp = "st,t1=3,t2=4,t3=t3 c1=3i64,c3=\"passit hello,c1=2,c2=false,c4=4f64 1626006833639000000";
(void)memcpy(sql, tmp, strlen(tmp) + 1);

@ -591,6 +600,104 @@ TEST(testCase, smlParseTelnetLine_Test) {
// smlDestroyInfo(info);
//}

bool smlParseNumberOld(SSmlKv *kvVal, SSmlMsgBuf *msg) {
  const char *pVal = kvVal->value;
  int32_t len = kvVal->length;
  char *endptr = NULL;
  double result = taosStr2Double(pVal, &endptr);
  if (pVal == endptr) {
    smlBuildInvalidDataMsg(msg, "invalid data", pVal);
    return false;
  }

  int32_t left = len - (endptr - pVal);
  if (left == 0 || (left == 3 && strncasecmp(endptr, "f64", left) == 0)) {
    kvVal->type = TSDB_DATA_TYPE_DOUBLE;
    kvVal->d = result;
  } else if ((left == 3 && strncasecmp(endptr, "f32", left) == 0)) {
    if (!IS_VALID_FLOAT(result)) {
      smlBuildInvalidDataMsg(msg, "float out of range[-3.402823466e+38,3.402823466e+38]", pVal);
      return false;
    }
    kvVal->type = TSDB_DATA_TYPE_FLOAT;
    kvVal->f = (float)result;
  } else if ((left == 1 && *endptr == 'i') || (left == 3 && strncasecmp(endptr, "i64", left) == 0)) {
    if (smlDoubleToInt64OverFlow(result)) {
      errno = 0;
      int64_t tmp = taosStr2Int64(pVal, &endptr, 10);
      if (errno == ERANGE) {
        smlBuildInvalidDataMsg(msg, "big int out of range[-9223372036854775808,9223372036854775807]", pVal);
        return false;
      }
      kvVal->type = TSDB_DATA_TYPE_BIGINT;
      kvVal->i = tmp;
      return true;
    }
    kvVal->type = TSDB_DATA_TYPE_BIGINT;
    kvVal->i = (int64_t)result;
  } else if ((left == 1 && *endptr == 'u') || (left == 3 && strncasecmp(endptr, "u64", left) == 0)) {
    if (result >= (double)UINT64_MAX || result < 0) {
      errno = 0;
      uint64_t tmp = taosStr2UInt64(pVal, &endptr, 10);
      if (errno == ERANGE || result < 0) {
        smlBuildInvalidDataMsg(msg, "unsigned big int out of range[0,18446744073709551615]", pVal);
        return false;
      }
      kvVal->type = TSDB_DATA_TYPE_UBIGINT;
      kvVal->u = tmp;
      return true;
    }
    kvVal->type = TSDB_DATA_TYPE_UBIGINT;
    kvVal->u = result;
  } else if (left == 3 && strncasecmp(endptr, "i32", left) == 0) {
    if (!IS_VALID_INT(result)) {
      smlBuildInvalidDataMsg(msg, "int out of range[-2147483648,2147483647]", pVal);
      return false;
    }
    kvVal->type = TSDB_DATA_TYPE_INT;
    kvVal->i = result;
  } else if (left == 3 && strncasecmp(endptr, "u32", left) == 0) {
    if (!IS_VALID_UINT(result)) {
      smlBuildInvalidDataMsg(msg, "unsigned int out of range[0,4294967295]", pVal);
      return false;
    }
    kvVal->type = TSDB_DATA_TYPE_UINT;
    kvVal->u = result;
  } else if (left == 3 && strncasecmp(endptr, "i16", left) == 0) {
    if (!IS_VALID_SMALLINT(result)) {
      smlBuildInvalidDataMsg(msg, "small int our of range[-32768,32767]", pVal);
      return false;
    }
    kvVal->type = TSDB_DATA_TYPE_SMALLINT;
    kvVal->i = result;
  } else if (left == 3 && strncasecmp(endptr, "u16", left) == 0) {
    if (!IS_VALID_USMALLINT(result)) {
      smlBuildInvalidDataMsg(msg, "unsigned small int out of rang[0,65535]", pVal);
      return false;
    }
    kvVal->type = TSDB_DATA_TYPE_USMALLINT;
    kvVal->u = result;
  } else if (left == 2 && strncasecmp(endptr, "i8", left) == 0) {
    if (!IS_VALID_TINYINT(result)) {
      smlBuildInvalidDataMsg(msg, "tiny int out of range[-128,127]", pVal);
      return false;
    }
    kvVal->type = TSDB_DATA_TYPE_TINYINT;
    kvVal->i = result;
  } else if (left == 2 && strncasecmp(endptr, "u8", left) == 0) {
    if (!IS_VALID_UTINYINT(result)) {
      smlBuildInvalidDataMsg(msg, "unsigned tiny int out of range[0,255]", pVal);
      return false;
    }
    kvVal->type = TSDB_DATA_TYPE_UTINYINT;
    kvVal->u = result;
  } else {
    smlBuildInvalidDataMsg(msg, "invalid data", pVal);
    return false;
  }
  return true;
}

TEST(testCase, smlParseNumber_performance_Test) {
  char msg[256] = {0};
  SSmlMsgBuf msgBuf;
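The suffix handling exercised by smlParseNumberOld above follows the line-protocol convention that a numeric field value may carry a type suffix (f32/f64, i8/i16/i32/i64, u8/u16/u32/u64, or a bare i for bigint). The strings below are a small illustrative sketch only; the measurement and field names are made up.

// Illustrative line-protocol strings; each numeric field value ends with the
// type suffix that the parser maps to the corresponding TSDB data type.
static const char *sample_lines[] = {
    "st,t1=3 c_double=1.3,c_float=2.5f32,c_big=9i64,c_small=7i16 1626006833639000000",
    "st,t1=3 c_ubig=12u64,c_tiny=3i8,c_utiny=200u8,c_int=42i32 1626006833640000000",
};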
@ -38,7 +38,7 @@ int32_t vmGetAllVnodeListFromHash(SVnodeMgmt *pMgmt, int32_t *numOfVnodes, SVnod
SVnodeObj *pVnode = *ppVnode;
if (pVnode && num < size) {
  int32_t refCount = atomic_add_fetch_32(&pVnode->refCount, 1);
  // dTrace("vgId:%d, acquire vnode list, ref:%d", pVnode->vgId, refCount);
  dTrace("vgId:%d,acquire vnode, vnode:%p, ref:%d", pVnode->vgId, pVnode, refCount);
  pVnodes[num++] = (*ppVnode);
  pIter = taosHashIterate(pMgmt->hash, pIter);
} else {

@ -52,7 +52,7 @@ int32_t vmGetAllVnodeListFromHash(SVnodeMgmt *pMgmt, int32_t *numOfVnodes, SVnod
SVnodeObj *pVnode = *ppVnode;
if (pVnode && num < size) {
  int32_t refCount = atomic_add_fetch_32(&pVnode->refCount, 1);
  // dTrace("vgId:%d, acquire vnode list, ref:%d", pVnode->vgId, refCount);
  dTrace("vgId:%d, acquire vnode, vnode:%p, ref:%d", pVnode->vgId, pVnode, refCount);
  pVnodes[num++] = (*ppVnode);
  pIter = taosHashIterate(pMgmt->closedHash, pIter);
} else {

@ -84,7 +84,7 @@ int32_t vmGetVnodeListFromHash(SVnodeMgmt *pMgmt, int32_t *numOfVnodes, SVnodeOb
SVnodeObj *pVnode = *ppVnode;
if (pVnode && num < size) {
  int32_t refCount = atomic_add_fetch_32(&pVnode->refCount, 1);
  // dTrace("vgId:%d, acquire vnode list, ref:%d", pVnode->vgId, refCount);
  dTrace("vgId:%d, acquire vnode, vnode:%p, ref:%d", pVnode->vgId, pVnode, refCount);
  pVnodes[num++] = (*ppVnode);
  pIter = taosHashIterate(pMgmt->hash, pIter);
} else {
@ -103,7 +103,7 @@ SVnodeObj *vmAcquireVnodeImpl(SVnodeMgmt *pMgmt, int32_t vgId, bool strict) {
  pVnode = NULL;
} else {
  int32_t refCount = atomic_add_fetch_32(&pVnode->refCount, 1);
  // dTrace("vgId:%d, acquire vnode, ref:%d", pVnode->vgId, refCount);
  dTrace("vgId:%d, acquire vnode, vnode:%p, ref:%d", pVnode->vgId, pVnode, refCount);
}
(void)taosThreadRwlockUnlock(&pMgmt->lock);

@ -115,16 +115,24 @@ SVnodeObj *vmAcquireVnode(SVnodeMgmt *pMgmt, int32_t vgId) { return vmAcquireVno
void vmReleaseVnode(SVnodeMgmt *pMgmt, SVnodeObj *pVnode) {
  if (pVnode == NULL) return;

  (void)taosThreadRwlockRdlock(&pMgmt->lock);
  //(void)taosThreadRwlockRdlock(&pMgmt->lock);
  int32_t refCount = atomic_sub_fetch_32(&pVnode->refCount, 1);
  // dTrace("vgId:%d, release vnode, ref:%d", pVnode->vgId, refCount);
  (void)taosThreadRwlockUnlock(&pMgmt->lock);
  dTrace("vgId:%d, release vnode, vnode:%p, ref:%d", pVnode->vgId, pVnode, refCount);
  //(void)taosThreadRwlockUnlock(&pMgmt->lock);
}

static void vmFreeVnodeObj(SVnodeObj **ppVnode) {
  if (!ppVnode || !(*ppVnode)) return;

  SVnodeObj *pVnode = *ppVnode;

  int32_t refCount = atomic_load_32(&pVnode->refCount);
  while (refCount > 0) {
    dWarn("vgId:%d, vnode is refenced, retry to free in 200ms, vnode:%p, ref:%d", pVnode->vgId, pVnode, refCount);
    taosMsleep(200);
    refCount = atomic_load_32(&pVnode->refCount);
  }

  taosMemoryFree(pVnode->path);
  taosMemoryFree(pVnode);
  ppVnode[0] = NULL;
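The acquire/release paths above keep the reference count on the vnode object itself (atomic_add_fetch_32 / atomic_sub_fetch_32), so releasing no longer needs to hold the management rwlock, and vmFreeVnodeObj simply waits until the count drains before freeing. A condensed sketch of that pattern, with a hypothetical object type standing in for SVnodeObj and C11 atomics standing in for the TDengine wrappers, is shown below.

#include <stdatomic.h>
#include <stdio.h>

// Hypothetical stand-in for SVnodeObj: callers bump the count while they hold a
// pointer, drop it when done, and the owner frees only after the count drains.
typedef struct {
  atomic_int refCount;
} Obj;

static void acquire(Obj *o) { atomic_fetch_add(&o->refCount, 1); }
static void release(Obj *o) { atomic_fetch_sub(&o->refCount, 1); }

static void free_when_unreferenced(Obj *o) {
  while (atomic_load(&o->refCount) > 0) {
    // the real code logs a warning and sleeps 200ms between retries
  }
  // safe to release the object's resources here
}

int main(void) {
  Obj o = {.refCount = 0};
  acquire(&o);
  release(&o);
  free_when_unreferenced(&o);
  printf("object no longer referenced\n");
  return 0;
}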
@ -30,10 +30,19 @@ static inline void dmBuildMnodeRedirectRsp(SDnode *pDnode, SRpcMsg *pMsg) {
dmGetMnodeEpSetForRedirect(&pDnode->data, pMsg, &epSet);

if (epSet.numOfEps <= 1) {
  if (epSet.numOfEps == 0) {
    pMsg->pCont = NULL;
    pMsg->code = TSDB_CODE_MNODE_NOT_FOUND;
    return;
  }
  // dnode is not the mnode or mnode leader and This ensures that the function correctly handles cases where the
  // dnode cannot obtain a valid epSet and avoids returning an incorrect or misleading epSet.
  if (strcmp(epSet.eps[0].fqdn, tsLocalFqdn) == 0 && epSet.eps[0].port == tsServerPort) {
    pMsg->pCont = NULL;
    pMsg->code = TSDB_CODE_MNODE_NOT_FOUND;
    return;
  }
}

int32_t contLen = tSerializeSEpSet(NULL, 0, &epSet);
pMsg->pCont = rpcMallocCont(contLen);
@ -1928,7 +1928,7 @@ static int32_t mndDropSuperTableColumn(SMnode *pMnode, const SStbObj *pOld, SStb
}

if (pOld->numOfColumns == 2) {
  code = TSDB_CODE_MND_INVALID_STB_ALTER_OPTION;
  code = TSDB_CODE_PAR_INVALID_DROP_COL;
  TAOS_RETURN(code);
}

@ -4295,9 +4295,9 @@ static int32_t mndDropTbAdd(SMnode *pMnode, SHashObj *pVgHashMap, const SVgroupI
  return 0;
}

int vgInfoCmp(const void* lp, const void* rp) {
  SVgroupInfo* pLeft = (SVgroupInfo*)lp;
  SVgroupInfo* pRight = (SVgroupInfo*)rp;
int vgInfoCmp(const void *lp, const void *rp) {
  SVgroupInfo *pLeft = (SVgroupInfo *)lp;
  SVgroupInfo *pRight = (SVgroupInfo *)rp;
  if (pLeft->hashBegin < pRight->hashBegin) {
    return -1;
  } else if (pLeft->hashBegin > pRight->hashBegin) {
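vgInfoCmp above is an ordinary qsort/bsearch-style comparator over SVgroupInfo.hashBegin (negative, zero, or positive result). A minimal sketch of that convention with a hypothetical struct is shown below; it is not the actual TDengine call site, just the standard-library usage the comparator shape implies.

#include <stdio.h>
#include <stdlib.h>

// Hypothetical stand-in for SVgroupInfo, keyed by hashBegin.
typedef struct {
  unsigned int hashBegin;
} VgInfo;

static int vg_cmp(const void *lp, const void *rp) {
  const VgInfo *l = (const VgInfo *)lp;
  const VgInfo *r = (const VgInfo *)rp;
  if (l->hashBegin < r->hashBegin) return -1;
  if (l->hashBegin > r->hashBegin) return 1;
  return 0;
}

int main(void) {
  VgInfo vgs[] = {{300}, {100}, {200}};
  qsort(vgs, 3, sizeof(VgInfo), vg_cmp);                        // sort by hashBegin
  VgInfo key = {200};
  VgInfo *hit = bsearch(&key, vgs, 3, sizeof(VgInfo), vg_cmp);  // binary search with the same comparator
  printf("found hashBegin=%u\n", hit ? hit->hashBegin : 0);
  return 0;
}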
@ -782,7 +782,7 @@ TEST_F(MndTestStb, 07_Alter_Stb_DropColumn) {
{
  void* pReq = BuildAlterStbDropColumnReq(stbname, "col1", &contLen);
  SRpcMsg* pRsp = test.SendReq(TDMT_MND_ALTER_STB, pReq, contLen);
  ASSERT_EQ(pRsp->code, TSDB_CODE_MND_INVALID_STB_ALTER_OPTION);
  ASSERT_EQ(pRsp->code, TSDB_CODE_PAR_INVALID_DROP_COL);
  rpcFreeCont(pRsp->pCont);
}
@ -2018,7 +2018,12 @@ static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
cleanupAfterGroupResultGen(pMiaInfo, pRes);
code = doFilter(pRes, pOperator->exprSupp.pFilterInfo, NULL);
QUERY_CHECK_CODE(code, lino, _end);
if (pRes->info.rows == 0) {
  // After filtering for last group, the result is empty, so we need to continue to process next group
  continue;
} else {
  break;
}
} else {
  // continue
  pRes->info.id.groupId = pMiaInfo->groupId;
@ -468,35 +468,38 @@ end:
int32_t smlInitHandle(SQuery** query) {
  *query = NULL;
  SQuery* pQuery = NULL;
  SVnodeModifyOpStmt* stmt = NULL;

  int32_t code = nodesMakeNode(QUERY_NODE_QUERY, (SNode**)&pQuery);
  if (NULL == pQuery) {
    uError("create pQuery error");
    return code;
  if (code != 0) {
    uError("SML create pQuery error");
    goto END;
  }
  pQuery->execMode = QUERY_EXEC_MODE_SCHEDULE;
  pQuery->haveResultSet = false;
  pQuery->msgType = TDMT_VND_SUBMIT;
  SVnodeModifyOpStmt* stmt = NULL;
  code = nodesMakeNode(QUERY_NODE_VNODE_MODIFY_STMT, (SNode**)&stmt);
  if (NULL == stmt) {
    uError("create SVnodeModifyOpStmt error");
    qDestroyQuery(pQuery);
    return code;
  if (code != 0) {
    uError("SML create SVnodeModifyOpStmt error");
    goto END;
  }
  stmt->pTableBlockHashObj = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
  if (stmt->pTableBlockHashObj == NULL){
    uError("create pTableBlockHashObj error");
    qDestroyQuery(pQuery);
    nodesDestroyNode((SNode*)stmt);
    return terrno;
    uError("SML create pTableBlockHashObj error");
    code = terrno;
    goto END;
  }
  stmt->freeHashFunc = insDestroyTableDataCxtHashMap;
  stmt->freeArrayFunc = insDestroyVgroupDataCxtList;

  pQuery->pRoot = (SNode*)stmt;
  *query = pQuery;
  return code;

  return TSDB_CODE_SUCCESS;
END:
  nodesDestroyNode((SNode*)stmt);
  qDestroyQuery(pQuery);
  return code;
}

int32_t smlBuildOutput(SQuery* handle, SHashObj* pVgHash) {
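The rewritten smlInitHandle above funnels every failure path to a single END label so that partially built nodes are released in one place instead of before each early return. A generic sketch of that error-handling shape, with hypothetical allocation helpers standing in for nodesMakeNode and friends, looks like this:

#include <stdlib.h>

typedef struct {
  void *root;  // hypothetical stand-in for SQuery::pRoot
} Handle;

// Generic goto-cleanup shape: every failure jumps to END, which releases
// whatever has been built so far; success links the parts and hands them over.
int init_handle(Handle **out) {
  *out = NULL;
  Handle *h = NULL;
  void *stmt = NULL;
  int code = -1;

  h = calloc(1, sizeof(Handle));   // stands in for nodesMakeNode(QUERY_NODE_QUERY, ...)
  if (h == NULL) goto END;

  stmt = calloc(1, 64);            // stands in for nodesMakeNode(QUERY_NODE_VNODE_MODIFY_STMT, ...)
  if (stmt == NULL) goto END;

  h->root = stmt;                  // like pQuery->pRoot = (SNode*)stmt
  *out = h;
  return 0;

END:
  free(stmt);                      // free(NULL) is a no-op, so this is safe on any path
  free(h);
  return code;
}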
@ -1607,26 +1607,6 @@ static EDealRes translateColumnUseAlias(STranslateContext* pCxt, SColumnNode** p
  }
}
if (*pFound) {
  if (QUERY_NODE_FUNCTION == nodeType(pFoundNode) &&
      (SQL_CLAUSE_GROUP_BY == pCxt->currClause || SQL_CLAUSE_PARTITION_BY == pCxt->currClause)) {
    pCxt->errCode = getFuncInfo(pCxt, (SFunctionNode*)pFoundNode);
    if (TSDB_CODE_SUCCESS == pCxt->errCode) {
      if (fmIsVectorFunc(((SFunctionNode*)pFoundNode)->funcId)) {
        pCxt->errCode = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_ILLEGAL_USE_AGG_FUNCTION, (*pCol)->colName);
        return DEAL_RES_ERROR;
      } else if (fmIsPseudoColumnFunc(((SFunctionNode*)pFoundNode)->funcId)) {
        if ('\0' != (*pCol)->tableAlias[0]) {
          return translateColumnWithPrefix(pCxt, pCol);
        } else {
          return translateColumnWithoutPrefix(pCxt, pCol);
        }
      } else {
        /* Do nothing and replace old node with found node. */
      }
    } else {
      return DEAL_RES_ERROR;
    }
  }
  SNode* pNew = NULL;
  int32_t code = nodesCloneNode(pFoundNode, &pNew);
  if (NULL == pNew) {

@ -1635,14 +1615,6 @@ static EDealRes translateColumnUseAlias(STranslateContext* pCxt, SColumnNode** p
  }
  nodesDestroyNode(*(SNode**)pCol);
  *(SNode**)pCol = (SNode*)pNew;
  if (QUERY_NODE_COLUMN == nodeType(pFoundNode)) {
    pCxt->errCode = TSDB_CODE_SUCCESS;
    if ('\0' != (*pCol)->tableAlias[0]) {
      return translateColumnWithPrefix(pCxt, pCol);
    } else {
      return translateColumnWithoutPrefix(pCxt, pCol);
    }
  }
}
return DEAL_RES_CONTINUE;
}

@ -1881,6 +1853,39 @@ static bool clauseSupportAlias(ESqlClause clause) {
  return SQL_CLAUSE_GROUP_BY == clause || SQL_CLAUSE_PARTITION_BY == clause || SQL_CLAUSE_ORDER_BY == clause;
}

static EDealRes translateColumnInGroupByClause(STranslateContext* pCxt, SColumnNode** pCol, bool *translateAsAlias) {
  *translateAsAlias = false;
  // count(*)/first(*)/last(*) and so on
  if (0 == strcmp((*pCol)->colName, "*")) {
    return DEAL_RES_CONTINUE;
  }

  if (pCxt->pParseCxt->biMode) {
    SNode** ppNode = (SNode**)pCol;
    bool ret;
    pCxt->errCode = biRewriteToTbnameFunc(pCxt, ppNode, &ret);
    if (TSDB_CODE_SUCCESS != pCxt->errCode) return DEAL_RES_ERROR;
    if (ret) {
      return translateFunction(pCxt, (SFunctionNode**)ppNode);
    }
  }

  EDealRes res = DEAL_RES_CONTINUE;
  if ('\0' != (*pCol)->tableAlias[0]) {
    res = translateColumnWithPrefix(pCxt, pCol);
  } else {
    bool found = false;
    res = translateColumnWithoutPrefix(pCxt, pCol);
    if (!(*pCol)->node.asParam &&
        res != DEAL_RES_CONTINUE &&
        res != DEAL_RES_END && pCxt->errCode != TSDB_CODE_PAR_AMBIGUOUS_COLUMN) {
      res = translateColumnUseAlias(pCxt, pCol, &found);
      *translateAsAlias = true;
    }
  }
  return res;
}

static EDealRes translateColumn(STranslateContext* pCxt, SColumnNode** pCol) {
  if (NULL == pCxt->pCurrStmt ||
      (isSelectStmt(pCxt->pCurrStmt) && NULL == ((SSelectStmt*)pCxt->pCurrStmt)->pFromTable)) {

@ -3752,7 +3757,7 @@ static EDealRes doCheckExprForGroupBy(SNode** pNode, void* pContext) {
bool partionByTbname = hasTbnameFunction(pSelect->pPartitionByList);
FOREACH(pPartKey, pSelect->pPartitionByList) {
  if (nodesEqualNode(pPartKey, *pNode)) {
    return pCxt->currClause == SQL_CLAUSE_HAVING ? DEAL_RES_IGNORE_CHILD : rewriteExprToGroupKeyFunc(pCxt, pNode);
    return pSelect->hasAggFuncs ? rewriteExprToGroupKeyFunc(pCxt, pNode) : DEAL_RES_IGNORE_CHILD;
  }
  if ((partionByTbname) && QUERY_NODE_COLUMN == nodeType(*pNode) &&
      ((SColumnNode*)*pNode)->colType == COLUMN_TYPE_TAG) {

@ -5464,13 +5469,14 @@ typedef struct SReplaceGroupByAliasCxt {
  SNodeList* pProjectionList;
} SReplaceGroupByAliasCxt;

static EDealRes replaceGroupByAliasImpl(SNode** pNode, void* pContext) {
static EDealRes translateGroupPartitionByImpl(SNode** pNode, void* pContext) {
  SReplaceGroupByAliasCxt* pCxt = pContext;
  SNodeList* pProjectionList = pCxt->pProjectionList;
  SNode* pProject = NULL;
  if (QUERY_NODE_VALUE == nodeType(*pNode)) {
  int32_t code = TSDB_CODE_SUCCESS;
  STranslateContext* pTransCxt = pCxt->pTranslateCxt;
  SValueNode* pVal = (SValueNode*)*pNode;
  if (QUERY_NODE_VALUE == nodeType(*pNode)) {
    SValueNode* pVal = (SValueNode*) *pNode;
    if (DEAL_RES_ERROR == translateValue(pTransCxt, pVal)) {
      return DEAL_RES_CONTINUE;
    }

@ -5480,41 +5486,59 @@ static EDealRes replaceGroupByAliasImpl(SNode** pNode, void* pContext) {
    int32_t pos = getPositionValue(pVal);
    if (0 < pos && pos <= LIST_LENGTH(pProjectionList)) {
      SNode* pNew = NULL;
      int32_t code = nodesCloneNode(nodesListGetNode(pProjectionList, pos - 1), (SNode**)&pNew);
      code = nodesCloneNode(nodesListGetNode(pProjectionList, pos - 1), (SNode**)&pNew);
      if (TSDB_CODE_SUCCESS != code) {
        pCxt->pTranslateCxt->errCode = code;
        return DEAL_RES_ERROR;
      }
      nodesDestroyNode(*pNode);
      *pNode = pNew;
      return DEAL_RES_CONTINUE;
    } else {
      return DEAL_RES_CONTINUE;
    }
    code = translateExpr(pTransCxt, pNode);
    if (TSDB_CODE_SUCCESS != code) {
      pTransCxt->errCode = code;
      return DEAL_RES_ERROR;
    }
    return DEAL_RES_CONTINUE;
  } else if (QUERY_NODE_COLUMN == nodeType(*pNode)) {
    STranslateContext* pTransCxt = pCxt->pTranslateCxt;
    return translateColumn(pTransCxt, (SColumnNode**)pNode);
    bool asAlias = false;
    EDealRes res = translateColumnInGroupByClause(pTransCxt, (SColumnNode**)pNode, &asAlias);
    if (DEAL_RES_ERROR == res) {
      return DEAL_RES_ERROR;
    }

    pTransCxt->errCode = TSDB_CODE_SUCCESS;
    if (nodeType(*pNode) == QUERY_NODE_COLUMN && !asAlias) {
      return DEAL_RES_CONTINUE;
    }
    code = translateExpr(pTransCxt, pNode);
    if (TSDB_CODE_SUCCESS != code) {
      pTransCxt->errCode = code;
      return DEAL_RES_ERROR;
    }
    return DEAL_RES_CONTINUE;
  }
  return doTranslateExpr(pNode, pTransCxt);
}

static int32_t replaceGroupByAlias(STranslateContext* pCxt, SSelectStmt* pSelect) {
static int32_t translateGroupByList(STranslateContext* pCxt, SSelectStmt* pSelect) {
  if (NULL == pSelect->pGroupByList) {
    return TSDB_CODE_SUCCESS;
  }
  SReplaceGroupByAliasCxt cxt = {.pTranslateCxt = pCxt, .pProjectionList = pSelect->pProjectionList};
  nodesRewriteExprsPostOrder(pSelect->pGroupByList, replaceGroupByAliasImpl, &cxt);
  SReplaceGroupByAliasCxt cxt = {
      .pTranslateCxt = pCxt, .pProjectionList = pSelect->pProjectionList};
  nodesRewriteExprsPostOrder(pSelect->pGroupByList, translateGroupPartitionByImpl, &cxt);

  return pCxt->errCode;
}

static int32_t replacePartitionByAlias(STranslateContext* pCxt, SSelectStmt* pSelect) {
static int32_t translatePartitionByList(STranslateContext* pCxt, SSelectStmt* pSelect) {
  if (NULL == pSelect->pPartitionByList) {
    return TSDB_CODE_SUCCESS;
  }
  SReplaceGroupByAliasCxt cxt = {.pTranslateCxt = pCxt, .pProjectionList = pSelect->pProjectionList};
  nodesRewriteExprsPostOrder(pSelect->pPartitionByList, replaceGroupByAliasImpl, &cxt);

  SReplaceGroupByAliasCxt cxt = {
      .pTranslateCxt = pCxt, .pProjectionList = pSelect->pProjectionList};
  nodesRewriteExprsPostOrder(pSelect->pPartitionByList, translateGroupPartitionByImpl, &cxt);

  return pCxt->errCode;
}

@ -5578,11 +5602,8 @@ static int32_t translateGroupBy(STranslateContext* pCxt, SSelectStmt* pSelect) {
  NODES_DESTORY_LIST(pSelect->pGroupByList);
  return TSDB_CODE_SUCCESS;
}
code = replaceGroupByAlias(pCxt, pSelect);
}
if (TSDB_CODE_SUCCESS == code) {
  pSelect->timeLineResMode = TIME_LINE_NONE;
  code = translateExprList(pCxt, pSelect->pGroupByList);
  code = translateGroupByList(pCxt, pSelect);
}
return code;
}

@ -6277,10 +6298,7 @@ static int32_t translatePartitionBy(STranslateContext* pCxt, SSelectStmt* pSelec
      (QUERY_NODE_FUNCTION == nodeType(pPar) && FUNCTION_TYPE_TBNAME == ((SFunctionNode*)pPar)->funcType))) {
    pSelect->timeLineResMode = TIME_LINE_MULTI;
  }
  code = replacePartitionByAlias(pCxt, pSelect);
  if (TSDB_CODE_SUCCESS == code) {
    code = translateExprList(pCxt, pSelect->pPartitionByList);
  }
  code = translatePartitionByList(pCxt, pSelect);
}
if (TSDB_CODE_SUCCESS == code) {
  code = translateExprList(pCxt, pSelect->pTags);
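The translation changes above let a GROUP BY or PARTITION BY item reference a projection alias or a 1-based projection position, which is the behavior the test_TS5567 cases further down exercise. A minimal sketch of issuing such statements through the C client, with an assumed database and table name, looks like this:

#include <stdio.h>
#include "taos.h"

// Assumes a database "test" with a table m1(ts TIMESTAMP, c1 INT) already exists;
// both statements group by the projection alias / position rather than a raw column.
static void group_by_alias_demo(void) {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", "test", 0);
  if (taos == NULL) return;

  const char *sqls[] = {
      "select c1 + 1 as a, count(*) from m1 group by a",  // group by alias
      "select c1 + 1, count(*) from m1 group by 1",       // group by position
  };
  for (int i = 0; i < 2; i++) {
    TAOS_RES *res = taos_query(taos, sqls[i]);
    printf("%s => %s\n", sqls[i], taos_errno(res) == 0 ? "ok" : taos_errstr(res));
    taos_free_result(res);
  }
  taos_close(taos);
}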
@ -664,7 +664,7 @@ static FORCE_INLINE int32_t walWriteImpl(SWal *pWal, int64_t index, tmsg_t msgTy

// set status
if (pWal->vers.firstVer == -1) {
  pWal->vers.firstVer = 0;
  pWal->vers.firstVer = index;
}
pWal->vers.lastVer = index;
pWal->totSize += sizeof(SWalCkHead) + cyptedBodyLen;
@ -344,9 +344,72 @@ class TDTestCase:
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(55)

        # TODO Fix Me!
        sql = "explain SELECT count(*), timediff(_wend, last(ts)), timediff('2018-09-20 01:00:00', _wstart) FROM meters WHERE ts >= '2018-09-20 00:00:00.000' AND ts < '2018-09-20 01:00:00.000' PARTITION BY concat(tbname, 'asd') INTERVAL(5m) having(concat(tbname, 'asd') like '%asd');"
        tdSql.error(sql, -2147473664) # Error: Planner internal error
        sql = "SELECT count(*), timediff(_wend, last(ts)), timediff('2018-09-20 01:00:00', _wstart) FROM meters WHERE ts >= '2018-09-20 00:00:00.000' AND ts < '2018-09-20 01:00:00.000' PARTITION BY concat(tbname, 'asd') INTERVAL(5m) having(concat(tbname, 'asd') like '%asd');"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(60)

        sql = "SELECT count(*), timediff(_wend, last(ts)), timediff('2018-09-20 01:00:00', _wstart) FROM meters WHERE ts >= '2018-09-20 00:00:00.000' AND ts < '2018-09-20 01:00:00.000' PARTITION BY concat(tbname, 'asd') INTERVAL(5m) having(concat(tbname, 'asd') like 'asd%');"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(0)

        sql = "SELECT c1 FROM meters PARTITION BY c1 HAVING c1 > 0 slimit 2 limit 10"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(20)

        sql = "SELECT t1 FROM meters PARTITION BY t1 HAVING(t1 = 1)"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(20000)

        sql = "SELECT concat(t2, 'asd') FROM meters PARTITION BY t2 HAVING(t2 like '%5')"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(10000)
        tdSql.checkData(0, 0, 'tb5asd')

        sql = "SELECT concat(t2, 'asd') FROM meters PARTITION BY concat(t2, 'asd') HAVING(concat(t2, 'asd')like '%5%')"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(10000)
        tdSql.checkData(0, 0, 'tb5asd')

        sql = "SELECT avg(c1) FROM meters PARTITION BY tbname, t1 HAVING(t1 = 1)"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(2)

        sql = "SELECT count(*) FROM meters PARTITION BY concat(tbname, 'asd') HAVING(concat(tbname, 'asd') like '%asd')"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(10)

        sql = "SELECT count(*), concat(tbname, 'asd') FROM meters PARTITION BY concat(tbname, 'asd') HAVING(concat(tbname, 'asd') like '%asd')"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(10)

        sql = "SELECT count(*) FROM meters PARTITION BY t1 HAVING(t1 < 4) order by t1 +1"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(4)

        sql = "SELECT count(*), t1 + 100 FROM meters PARTITION BY t1 HAVING(t1 < 4) order by t1 +1"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(4)

        sql = "SELECT count(*), t1 + 100 FROM meters PARTITION BY t1 INTERVAL(1d) HAVING(t1 < 4) order by t1 +1 desc"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(280)

        sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd' and count(*) = 118)"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(1)

        sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd' and count(*) != 118)"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(69)

        sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd') order by count(*) asc limit 10"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(10)

        sql = "SELECT count(*), concat(t3, 'asd') FROM meters PARTITION BY concat(t3, 'asd') INTERVAL(1d) HAVING(concat(t3, 'asd') like '%5asd' or concat(t3, 'asd') like '%3asd') order by count(*) asc limit 10000"
        tdSql.query(sql, queryTimes=1)
        tdSql.checkRows(140)

    def run(self):
        self.prepareTestEnv()
@ -420,7 +420,23 @@ class TDTestCase:
        tdSql.error(f"select t2, count(*) from {self.dbname}.{self.stable} group by t2 where t2 = 1")
        tdSql.error(f"select t2, count(*) from {self.dbname}.{self.stable} group by t2 interval(1d)")

    def test_TS5567(self):
        tdSql.query(f"select const_col from (select 1 as const_col from {self.dbname}.{self.stable}) t group by const_col")
        tdSql.checkRows(50)
        tdSql.query(f"select const_col from (select 1 as const_col from {self.dbname}.{self.stable}) t partition by const_col")
        tdSql.checkRows(50)
        tdSql.query(f"select const_col from (select 1 as const_col, count(c1) from {self.dbname}.{self.stable} t group by c1) group by const_col")
        tdSql.checkRows(10)
        tdSql.query(f"select const_col from (select 1 as const_col, count(c1) from {self.dbname}.{self.stable} t group by c1) partition by const_col")
        tdSql.checkRows(10)
        tdSql.query(f"select const_col as c_c from (select 1 as const_col from {self.dbname}.{self.stable}) t group by c_c")
        tdSql.checkRows(50)
        tdSql.query(f"select const_col as c_c from (select 1 as const_col from {self.dbname}.{self.stable}) t partition by c_c")
        tdSql.checkRows(50)
        tdSql.query(f"select const_col from (select 1 as const_col, count(c1) from {self.dbname}.{self.stable} t group by c1) group by 1")
        tdSql.checkRows(10)
        tdSql.query(f"select const_col from (select 1 as const_col, count(c1) from {self.dbname}.{self.stable} t group by c1) partition by 1")
        tdSql.checkRows(10)

    def run(self):
        tdSql.prepare()
        self.prepare_db()

@ -453,6 +469,7 @@ class TDTestCase:
        self.test_window(nonempty_tb_num)
        self.test_event_window(nonempty_tb_num)

        self.test_TS5567()

        ## test old version before changed
        # self.test_groupby('group', 0, 0)
@ -17,8 +17,6 @@ sys.path.append("./7-tmq")
from tmqCommon import *

class TDTestCase:

    updatecfgDict = {'sDebugFlag':143, 'wDebugFlag':143}
    def __init__(self):
        self.vgroups = 1
        self.ctbNum = 10
@ -105,6 +105,113 @@ int smlProcess_telnet_Test() {
  return code;
}

int smlProcess_telnet0_Test() {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);

  TAOS_RES *pRes = taos_query(taos, "create database if not exists sml_db schemaless 1");
  taos_free_result(pRes);

  pRes = taos_query(taos, "use sml_db");
  taos_free_result(pRes);

  const char *sql1[] = {"sysif.bytes.out 1479496100 1.3E0 host=web01 interface=eth0"};
  pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / sizeof(sql1[0]), TSDB_SML_TELNET_PROTOCOL,
                                TSDB_SML_TIMESTAMP_NANO_SECONDS);
  printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
  int code = taos_errno(pRes);
  ASSERT(code == 0);
  taos_free_result(pRes);

  const char *sql2[] = {"sysif.bytes.out 1479496700 1.6E0 host=web01 interface=eth0"};
  pRes = taos_schemaless_insert(taos, (char **)sql2, sizeof(sql2) / sizeof(sql2[0]), TSDB_SML_TELNET_PROTOCOL,
                                TSDB_SML_TIMESTAMP_NANO_SECONDS);
  printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
  code = taos_errno(pRes);
  ASSERT(code == 0);
  taos_free_result(pRes);

  const char *sql3[] = {"sysif.bytes.out 1479496300 1.1E0 interface=eth0 host=web01"};
  pRes = taos_schemaless_insert(taos, (char **)sql3, sizeof(sql3) / sizeof(sql3[0]), TSDB_SML_TELNET_PROTOCOL,
                                TSDB_SML_TIMESTAMP_NANO_SECONDS);
  printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
  code = taos_errno(pRes);
  ASSERT(code == 0);
  taos_free_result(pRes);

  taos_close(taos);

  return code;
}

int smlProcess_json0_Test() {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);

  TAOS_RES *pRes = taos_query(taos, "create database if not exists sml_db");
  taos_free_result(pRes);

  pRes = taos_query(taos, "use sml_db");
  taos_free_result(pRes);

  const char *sql[] = {
      "[{\"metric\":\"syscpu.nice\",\"timestamp\":1662344045,\"value\":9,\"tags\":{\"host\":\"web02\",\"dc\":4}}]"};

  char *sql1[1] = {0};
  for (int i = 0; i < 1; i++) {
    sql1[i] = taosMemoryCalloc(1, 1024);
    ASSERT(sql1[i] != NULL);
    (void)strncpy(sql1[i], sql[i], 1023);
  }

  pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / sizeof(sql1[0]), TSDB_SML_JSON_PROTOCOL,
                                TSDB_SML_TIMESTAMP_NANO_SECONDS);
  int code = taos_errno(pRes);
  if (code != 0) {
    printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
  } else {
    printf("%s result:success\n", __FUNCTION__);
  }
  taos_free_result(pRes);

  for (int i = 0; i < 1; i++) {
    taosMemoryFree(sql1[i]);
  }
  ASSERT(code == 0);

  const char *sql2[] = {
      "[{\"metric\":\"syscpu.nice\",\"timestamp\":1662344041,\"value\":13,\"tags\":{\"host\":\"web01\",\"dc\":1}"
      "},{\"metric\":\"syscpu.nice\",\"timestamp\":1662344042,\"value\":9,\"tags\":{\"host\":\"web02\",\"dc\":4}"
      "}]",
  };

  char *sql3[1] = {0};
  for (int i = 0; i < 1; i++) {
    sql3[i] = taosMemoryCalloc(1, 1024);
    ASSERT(sql3[i] != NULL);
    (void)strncpy(sql3[i], sql2[i], 1023);
  }

  pRes = taos_schemaless_insert(taos, (char **)sql3, sizeof(sql3) / sizeof(sql3[0]), TSDB_SML_JSON_PROTOCOL,
                                TSDB_SML_TIMESTAMP_NANO_SECONDS);
  code = taos_errno(pRes);
  if (code != 0) {
    printf("%s result:%s\n", __FUNCTION__, taos_errstr(pRes));
  } else {
    printf("%s result:success\n", __FUNCTION__);
  }
  taos_free_result(pRes);

  for (int i = 0; i < 1; i++) {
    taosMemoryFree(sql3[i]);
  }

  ASSERT(code == 0);

  taos_close(taos);

  return code;
}

int smlProcess_json1_Test() {
  TAOS *taos = taos_connect("localhost", "root", "taosdata", NULL, 0);

@ -1775,6 +1882,21 @@ int sml_td24559_Test() {
  pRes = taos_query(taos, "create database if not exists td24559");
  taos_free_result(pRes);

  const char *sql1[] = {
      "sttb,t1=1 f1=283i32,f2=g\"\" 1632299372000",
      "sttb,t1=1 f2=G\"Point(4.343 89.342)\",f1=106i32 1632299373000",
  };

  pRes = taos_query(taos, "use td24559");
  taos_free_result(pRes);

  pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / sizeof(sql1[0]), TSDB_SML_LINE_PROTOCOL,
                                TSDB_SML_TIMESTAMP_MILLI_SECONDS);
  int code = taos_errno(pRes);
  printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes));
  ASSERT(code);
  taos_free_result(pRes);

  const char *sql[] = {
      "stb,t1=1 f1=283i32,f2=g\"Point(4.343 89.342)\" 1632299372000",
      "stb,t1=1 f2=G\"Point(4.343 89.342)\",f1=106i32 1632299373000",

@ -1788,7 +1910,7 @@ int sml_td24559_Test() {
  pRes = taos_schemaless_insert(taos, (char **)sql, sizeof(sql) / sizeof(sql[0]), TSDB_SML_LINE_PROTOCOL,
                                TSDB_SML_TIMESTAMP_MILLI_SECONDS);

  int code = taos_errno(pRes);
  code = taos_errno(pRes);
  printf("%s result0:%s\n", __FUNCTION__, taos_errstr(pRes));
  taos_free_result(pRes);

@ -2136,7 +2258,8 @@ int main(int argc, char *argv[]) {
    taos_options(TSDB_OPTION_CONFIGDIR, argv[1]);
  }

  int ret = 0;
  int ret = smlProcess_json0_Test();
  ASSERT(!ret);
  ret = sml_ts5528_test();
  ASSERT(!ret);
  ret = sml_td29691_Test();

@ -2173,6 +2296,8 @@ int main(int argc, char *argv[]) {
  ASSERT(!ret);
  ret = smlProcess_telnet_Test();
  ASSERT(!ret);
  ret = smlProcess_telnet0_Test();
  ASSERT(!ret);
  ret = smlProcess_json1_Test();
  ASSERT(!ret);
  ret = smlProcess_json2_Test();