Merge from develop
This commit is contained in:
commit e043b0bb51

@@ -1,4 +1,5 @@
build/
.ycm_extra_conf.py
.vscode/
.idea/
cmake-build-debug/
@@ -2,18 +2,18 @@
## <a class="anchor" id="intro"></a>TDengine 简介

-TDengine是涛思数据面对高速增长的物联网大数据市场和技术挑战推出的创新性的大数据处理产品,它不依赖任何第三方软件,也不是优化或包装了一个开源的数据库或流式计算产品,而是在吸取众多传统关系型数据库、NoSQL数据库、流式计算引擎、消息队列等软件的优点之后自主开发的产品,在时序空间大数据处理上,有着自己独到的优势。
+TDengine 是涛思数据面对高速增长的物联网大数据市场和技术挑战推出的创新性的大数据处理产品,它不依赖任何第三方软件,也不是优化或包装了一个开源的数据库或流式计算产品,而是在吸取众多传统关系型数据库、NoSQL 数据库、流式计算引擎、消息队列等软件的优点之后自主开发的产品,在时序空间大数据处理上,有着自己独到的优势。

-TDengine的模块之一是时序数据库。但除此之外,为减少研发的复杂度、系统维护的难度,TDengine还提供缓存、消息队列、订阅、流式计算等功能,为物联网、工业互联网大数据的处理提供全栈的技术方案,是一个高效易用的物联网大数据平台。与Hadoop等典型的大数据平台相比,它具有如下鲜明的特点:
+TDengine 的模块之一是时序数据库。但除此之外,为减少研发的复杂度、系统维护的难度,TDengine 还提供缓存、消息队列、订阅、流式计算等功能,为物联网、工业互联网大数据的处理提供全栈的技术方案,是一个高效易用的物联网大数据平台。与 Hadoop 等典型的大数据平台相比,它具有如下鲜明的特点:

-* __10倍以上的性能提升__:定义了创新的数据存储结构,单核每秒能处理至少2万次请求,插入数百万个数据点,读出一千万以上数据点,比现有通用数据库快十倍以上。
-* __硬件或云服务成本降至1/5__:由于超强性能,计算资源不到通用大数据方案的1/5;通过列式存储和先进的压缩算法,存储空间不到通用数据库的1/10。
-* __全栈时序数据处理引擎__:将数据库、消息队列、缓存、流式计算等功能融为一体,应用无需再集成Kafka/Redis/HBase/Spark/HDFS等软件,大幅降低应用开发和维护的复杂度成本。
-* __强大的分析功能__:无论是十年前还是一秒钟前的数据,指定时间范围即可查询。数据可在时间轴上或多个设备上进行聚合。即席查询可通过Shell, Python, R, MATLAB随时进行。
-* __与第三方工具无缝连接__:不用一行代码,即可与Telegraf, Grafana, EMQ, HiveMQ, Prometheus, MATLAB, R等集成。后续将支持OPC, Hadoop, Spark等,BI工具也将无缝连接。
-* __零运维成本、零学习成本__:安装集群简单快捷,无需分库分表,实时备份。类标准SQL,支持RESTful,支持Python/Java/C/C++/C#/Go/Node.js, 与MySQL相似,零学习成本。
+* __10 倍以上的性能提升__:定义了创新的数据存储结构,单核每秒能处理至少 2 万次请求,插入数百万个数据点,读出一千万以上数据点,比现有通用数据库快十倍以上。
+* __硬件或云服务成本降至 1/5__:由于超强性能,计算资源不到通用大数据方案的 1/5;通过列式存储和先进的压缩算法,存储空间不到通用数据库的 1/10。
+* __全栈时序数据处理引擎__:将数据库、消息队列、缓存、流式计算等功能融为一体,应用无需再集成 Kafka/Redis/HBase/Spark/HDFS 等软件,大幅降低应用开发和维护的复杂度成本。
+* __强大的分析功能__:无论是十年前还是一秒钟前的数据,指定时间范围即可查询。数据可在时间轴上或多个设备上进行聚合。即席查询可通过 Shell, Python, R, MATLAB 随时进行。
+* __与第三方工具无缝连接__:不用一行代码,即可与 Telegraf, Grafana, EMQ, HiveMQ, Prometheus, MATLAB, R 等集成。后续将支持 OPC, Hadoop, Spark 等,BI 工具也将无缝连接。
+* __零运维成本、零学习成本__:安装集群简单快捷,无需分库分表,实时备份。类标准 SQL,支持 RESTful,支持 Python/Java/C/C++/C#/Go/Node.js, 与 MySQL 相似,零学习成本。

-采用TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。但需要指出的是,因充分利用了物联网时序数据的特点,它无法用来处理网络爬虫、微博、微信、电商、ERP、CRM等通用型数据。
+采用 TDengine,可将典型的物联网、车联网、工业互联网大数据平台的总拥有成本大幅降低。但需要指出的是,因充分利用了物联网时序数据的特点,它无法用来处理网络爬虫、微博、微信、电商、ERP、CRM 等通用型数据。


<center>图 1. TDengine技术生态图</center>
@@ -21,42 +21,47 @@ TDengine的模块之一是时序数据库。但除此之外,为减少研发的
## <a class="anchor" id="scenes"></a>TDengine 总体适用场景

-作为一个IOT大数据平台,TDengine的典型适用场景是在IOT范畴,而且用户有一定的数据量。本文后续的介绍主要针对这个范畴里面的系统。范畴之外的系统,比如CRM,ERP等,不在本文讨论范围内。
+作为一个 IoT 大数据平台,TDengine 的典型适用场景是在 IoT 范畴,而且用户有一定的数据量。本文后续的介绍主要针对这个范畴里面的系统。范畴之外的系统,比如 CRM,ERP 等,不在本文讨论范围内。

### 数据源特点和需求

-从数据源角度,设计人员可以从下面几个角度分析TDengine在目标应用系统里面的适用性。
+从数据源角度,设计人员可以从下面几个角度分析 TDengine 在目标应用系统里面的适用性。

|数据源特点和需求|不适用|可能适用|非常适用|简单说明|
|---|---|---|---|---|
-|总体数据量巨大| | | √ |TDengine在容量方面提供出色的水平扩展功能,并且具备匹配高压缩的存储结构,达到业界最优的存储效率。|
-|数据输入速度偶尔或者持续巨大| | | √ | TDengine的性能大大超过同类产品,可以在同样的硬件环境下持续处理大量的输入数据,并且提供很容易在用户环境里面运行的性能评估工具。|
-|数据源数目巨大| | | √ |TDengine设计中包含专门针对大量数据源的优化,包括数据的写入和查询,尤其适合高效处理海量(千万或者更多量级)的数据源。|
+|总体数据量巨大| | | √ |TDengine 在容量方面提供出色的水平扩展功能,并且具备匹配高压缩的存储结构,达到业界最优的存储效率。|
+|数据输入速度偶尔或者持续巨大| | | √ | TDengine 的性能大大超过同类产品,可以在同样的硬件环境下持续处理大量的输入数据,并且提供很容易在用户环境里面运行的性能评估工具。|
+|数据源数目巨大| | | √ | TDengine 设计中包含专门针对大量数据源的优化,包括数据的写入和查询,尤其适合高效处理海量(千万或者更多量级)的数据源。|

### 系统架构要求

|系统架构要求|不适用|可能适用|非常适用|简单说明|
|---|---|---|---|---|
-|要求简单可靠的系统架构| | | √ |TDengine的系统架构非常简单可靠,自带消息队列,缓存,流式计算,监控等功能,无需集成额外的第三方产品。|
-|要求容错和高可靠| | | √ |TDengine的集群功能,自动提供容错灾备等高可靠功能。|
-|标准化规范| | | √ |TDengine使用标准的SQL语言提供主要功能,遵守标准化规范。|
+|要求简单可靠的系统架构| | | √ | TDengine 的系统架构非常简单可靠,自带消息队列,缓存,流式计算,监控等功能,无需集成额外的第三方产品。|
+|要求容错和高可靠| | | √ | TDengine 的集群功能,自动提供容错灾备等高可靠功能。|
+|标准化规范| | | √ | TDengine 使用标准的 SQL 语言提供主要功能,遵守标准化规范。|

### 系统功能需求

|系统功能需求|不适用|可能适用|非常适用|简单说明|
|---|---|---|---|---|
-|要求完整的内置数据处理算法| | √ | |TDengine的实现了通用的数据处理算法,但是还没有做到妥善处理各行各业的所有要求,因此特殊类型的处理还需要应用层面处理。|
-|需要大量的交叉查询处理| | √ | |这种类型的处理更多应该用关系型数据系统处理,或者应该考虑TDengine和关系型数据系统配合实现系统功能。|
+|要求完整的内置数据处理算法| | √ | | TDengine 实现了通用的数据处理算法,但是还没有做到妥善处理各行各业的所有要求,因此特殊类型的处理还需要在应用层面处理。|
+|需要大量的交叉查询处理| | √ | |这种类型的处理更多应该用关系型数据系统处理,或者应该考虑 TDengine 和关系型数据系统配合实现系统功能。|

### 系统性能需求

|系统性能需求|不适用|可能适用|非常适用|简单说明|
|---|---|---|---|---|
-|要求较大的总体处理能力| | | √ |TDengine的集群功能可以轻松地让多服务器配合达成处理能力的提升。|
-|要求高速处理数据 | | | √ |TDengine的专门为IOT优化的存储和数据处理的设计,一般可以让系统得到超出同类产品多倍数的处理速度提升。|
-|要求快速处理小粒度数据| | | √ |这方面TDengine性能可以完全对标关系型和NoSQL型数据处理系统。|
+|要求较大的总体处理能力| | | √ | TDengine 的集群功能可以轻松地让多服务器配合达成处理能力的提升。|
+|要求高速处理数据 | | | √ | TDengine 专门为 IoT 优化的存储和数据处理设计,一般可以让系统得到超出同类产品多倍的处理速度提升。|
+|要求快速处理小粒度数据| | | √ |这方面 TDengine 性能可以完全对标关系型和 NoSQL 型数据处理系统。|

### 系统维护需求

|系统维护需求|不适用|可能适用|非常适用|简单说明|
|---|---|---|---|---|
-|要求系统可靠运行| | | √ |TDengine的系统架构非常稳定可靠,日常维护也简单便捷,对维护人员的要求简洁明了,最大程度上杜绝人为错误和事故。|
+|要求系统可靠运行| | | √ | TDengine 的系统架构非常稳定可靠,日常维护也简单便捷,对维护人员的要求简洁明了,最大程度上杜绝人为错误和事故。|
|要求运维学习成本可控| | | √ |同上。|
-|要求市场有大量人才储备| √ | | |TDengine作为新一代产品,目前人才市场里面有经验的人员还有限。但是学习成本低,我们作为厂家也提供运维的培训和辅助服务。|
+|要求市场有大量人才储备| √ | | | TDengine 作为新一代产品,目前人才市场里面有经验的人员还有限。但是学习成本低,我们作为厂家也提供运维的培训和辅助服务。|
@@ -191,7 +191,7 @@ cdf548465318
1,通过端口映射(-p),将容器内部开放的网络端口映射到宿主机的指定端口上。通过挂载本地目录(-v),可以实现宿主机与容器内部的数据同步,防止容器删除后,数据丢失。

```bash
$ docker run -d -v /etc/taos:/etc/taos -p 6041:6041 tdengine/tdengine
526aa188da767ae94b244226a2b2eec2b5f17dd8eff592893d9ec0cd0f3a1ccd

$ curl -u root:taosdata -d 'show databases' 127.0.0.1:6041/rest/sql
```
@@ -2,7 +2,7 @@
# TDengine数据建模

-TDengine采用关系型数据模型,需要建库、建表。因此对于一个具体的应用场景,需要考虑库的设计,超级表和普通表的设计。本节不讨论细致的语法规则,只介绍概念。
+TDengine采用关系型数据模型,需要建库、建表。因此对于一个具体的应用场景,需要考虑库、超级表和普通表的设计。本节不讨论细致的语法规则,只介绍概念。

关于数据建模请参考[视频教程](https://www.taosdata.com/blog/2020/11/11/1945.html)。

@@ -13,7 +13,7 @@ TDengine采用关系型数据模型,需要建库、建表。因此对于一个
```mysql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
```
-上述语句将创建一个名为power的库,这个库的数据将保留365天(超过365天将被自动删除),每10天一个数据文件,内存块数为4,允许更新数据。详细的语法及参数请见 [TAOS SQL 的数据管理](https://www.taosdata.com/cn/documentation/taos-sql#management) 章节。
+上述语句将创建一个名为power的库,这个库的数据将保留365天(超过365天将被自动删除),每10天一个数据文件,内存块数为6,允许更新数据。详细的语法及参数请见 [TAOS SQL 的数据管理](https://www.taosdata.com/cn/documentation/taos-sql#management) 章节。

创建库之后,需要使用SQL命令USE将当前库切换过来,例如:
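衔接上文的建库语句,一个最小的切库示例如下(库名沿用上文创建的 power,仅为示意):

```mysql
USE power;
```

此后未显式指定库名的表操作均作用于 power 库。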
@@ -1,5 +1,87 @@
# Java Connector

## 总体介绍

TDengine 提供了遵循 JDBC 标准(3.0)API 规范的 `taos-jdbcdriver` 实现,可在 maven 的中央仓库 [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) 搜索下载。

`taos-jdbcdriver` 的实现包括 2 种形式:JDBC-JNI 和 JDBC-RESTful(taos-jdbcdriver-2.0.18 开始支持 JDBC-RESTful)。JDBC-JNI 通过调用客户端 libtaos.so(或 taos.dll)的本地方法实现,JDBC-RESTful 则在内部封装了 RESTful 接口实现。



上图显示了 3 种 Java 应用使用连接器访问 TDengine 的方式:

* JDBC-JNI:Java 应用在物理节点 1(pnode1)上使用 JDBC-JNI 的 API,直接调用客户端 API(libtaos.so 或 taos.dll)将写入和查询请求发送到位于物理节点 2(pnode2)上的 taosd 实例。
* RESTful:应用将 SQL 发送给位于物理节点 2(pnode2)上的 RESTful 连接器,再调用客户端 API(libtaos.so)。
* JDBC-RESTful:Java 应用通过 JDBC-RESTful 的 API,将 SQL 封装成一个 RESTful 请求,发送给物理节点 2 的 RESTful 连接器。

TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致,但时序空间数据库与关系对象型数据库服务的对象和技术特征存在差异,导致 `taos-jdbcdriver` 与传统的 JDBC driver 也存在一定差异。在使用时需要注意以下几点:

* TDengine 目前不支持针对单条数据记录的删除操作。
* 目前不支持事务操作。
* 目前不支持嵌套查询(nested query)。
* 对每个 Connection 的实例,至多只能有一个打开的 ResultSet 实例;如果在 ResultSet 还没关闭的情况下执行了新的查询,taos-jdbcdriver 会自动关闭上一个 ResultSet。

### JDBC-JNI 和 JDBC-RESTful 的对比

<table>
<tr align="center"><th>对比项</th><th>JDBC-JNI</th><th>JDBC-RESTful</th></tr>
<tr align="center">
<td>支持的操作系统</td>
<td>linux、windows</td>
<td>全平台</td>
</tr>
<tr align="center">
<td>是否需要安装 client</td>
<td>需要</td>
<td>不需要</td>
</tr>
<tr align="center">
<td>server 升级后是否需要升级 client</td>
<td>需要</td>
<td>不需要</td>
</tr>
<tr align="center">
<td>写入性能</td>
<td colspan="2">JDBC-RESTful 是 JDBC-JNI 的 50%~90%</td>
</tr>
<tr align="center">
<td>查询性能</td>
<td colspan="2">JDBC-RESTful 与 JDBC-JNI 没有差别</td>
</tr>
</table>

注意:与 JNI 方式不同,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,RESTful 下所有对表名、超级表名的引用都需要指定数据库名前缀。
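针对上面的注意事项,一个带数据库名前缀的写入示例如下(与本文档后文英文版中的示例一致,库名、表名仅为示意):

```sql
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(now, 24.6);
```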
### <a class="anchor" id="version"></a>TAOS-JDBCDriver 版本以及支持的 TDengine 版本和 JDK 版本

| taos-jdbcdriver 版本 | TDengine 版本 | JDK 版本 |
| -------------------- | ----------------- | -------- |
| 2.0.33 - 2.0.34 | 2.0.3.0 及以上 | 1.8.x |
| 2.0.31 - 2.0.32 | 2.1.3.0 及以上 | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x 及以上 | 1.8.x |
| 1.0.2 | 1.6.1.x 及以上 | 1.8.x |
| 1.0.1 | 1.6.1.x 及以上 | 1.8.x |

### TDengine DataType 和 Java DataType

TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下:

| TDengine DataType | Java DataType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |

## 安装

Java连接器支持的系统有: Linux 64/Windows x64/Windows x86。

@@ -9,7 +91,7 @@ Java连接器支持的系统有: Linux 64/Windows x64/Windows x86。
- 已安装TDengine服务器端
- 已安装好TDengine应用驱动,具体请参照 [安装连接器驱动步骤](https://www.taosdata.com/cn/documentation/connector#driver) 章节

-TDengine 为了方便 Java 应用使用,提供了遵循 JDBC 标准(3.0)API 规范的 `taos-jdbcdriver` 实现。目前可以通过 [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) 搜索并下载。
+TDengine 为了方便 Java 应用使用,遵循 JDBC 标准(3.0)API 规范提供了 `taos-jdbcdriver` 实现。可以通过 [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver) 搜索并下载。

由于 TDengine 的应用驱动是使用 C 语言开发的,使用 taos-jdbcdriver 驱动包时需要依赖系统对应的本地函数库。
@@ -66,83 +148,6 @@ java -jar JDBCConnectorChecker.jar -host <fqdn>

## Java连接器的使用

`taos-jdbcdriver` 的实现包括 2 种形式:JDBC-JNI 和 JDBC-RESTful(taos-jdbcdriver-2.0.18 开始支持 JDBC-RESTful)。JDBC-JNI 通过调用客户端 libtaos.so(或 taos.dll)的本地方法实现,JDBC-RESTful 则在内部封装了 RESTful 接口实现。



上图显示了 3 种 Java 应用使用连接器访问 TDengine 的方式:

* JDBC-JNI:Java 应用在物理节点 1(pnode1)上使用 JDBC-JNI 的 API,直接调用客户端 API(libtaos.so 或 taos.dll)将写入和查询请求发送到位于物理节点 2(pnode2)上的 taosd 实例。
* RESTful:应用将 SQL 发送给位于物理节点 2(pnode2)上的 RESTful 连接器,再调用客户端 API(libtaos.so)。
* JDBC-RESTful:Java 应用通过 JDBC-RESTful 的 API,将 SQL 封装成一个 RESTful 请求,发送给物理节点 2 的 RESTful 连接器。

TDengine 的 JDBC 驱动实现尽可能与关系型数据库驱动保持一致,但时序空间数据库与关系对象型数据库服务的对象和技术特征存在差异,导致 `taos-jdbcdriver` 与传统的 JDBC driver 也存在一定差异。在使用时需要注意以下几点:

* TDengine 目前不支持针对单条数据记录的删除操作。
* 目前不支持事务操作。
* 目前不支持嵌套查询(nested query)。
* 对每个 Connection 的实例,至多只能有一个打开的 ResultSet 实例;如果在 ResultSet 还没关闭的情况下执行了新的查询,taos-jdbcdriver 会自动关闭上一个 ResultSet。

### JDBC-JNI 和 JDBC-RESTful 的对比

<table>
<tr align="center"><th>对比项</th><th>JDBC-JNI</th><th>JDBC-RESTful</th></tr>
<tr align="center">
<td>支持的操作系统</td>
<td>linux、windows</td>
<td>全平台</td>
</tr>
<tr align="center">
<td>是否需要安装 client</td>
<td>需要</td>
<td>不需要</td>
</tr>
<tr align="center">
<td>server 升级后是否需要升级 client</td>
<td>需要</td>
<td>不需要</td>
</tr>
<tr align="center">
<td>写入性能</td>
<td colspan="2">JDBC-RESTful 是 JDBC-JNI 的 50%~90%</td>
</tr>
<tr align="center">
<td>查询性能</td>
<td colspan="2">JDBC-RESTful 与 JDBC-JNI 没有差别</td>
</tr>
</table>

注意:与 JNI 方式不同,RESTful 接口是无状态的,因此 `USE db_name` 指令没有效果,RESTful 下所有对表名、超级表名的引用都需要指定数据库名前缀。

### <a class="anchor" id="version"></a>TAOS-JDBCDriver 版本以及支持的 TDengine 版本和 JDK 版本

| taos-jdbcdriver 版本 | TDengine 版本 | JDK 版本 |
| -------------------- | ----------------- | -------- |
| 2.0.31 | 2.1.3.0 及以上 | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x 及以上 | 1.8.x |
| 1.0.2 | 1.6.1.x 及以上 | 1.8.x |
| 1.0.1 | 1.6.1.x 及以上 | 1.8.x |

### TDengine DataType 和 Java DataType

TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下:

| TDengine DataType | Java DataType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte array |
| NCHAR | java.lang.String |

### 获取连接

#### 指定URL获取连接
@@ -173,15 +178,9 @@ Connection conn = DriverManager.getConnection(jdbcUrl);

以上示例,使用了 JDBC-JNI 的 driver,建立了到 hostname 为 taosdemo.com,端口为 6030(TDengine 的默认端口),数据库名为 test 的连接。这个 URL 中指定用户名(user)为 root,密码(password)为 taosdata。

-**注意**:使用 JDBC-JNI 的 driver,taos-jdbcdriver 驱动包时需要依赖系统对应的本地函数库。
+**注意**:使用 JDBC-JNI 的 driver 时,taos-jdbcdriver 驱动包需要依赖系统对应的本地函数库(Linux 下是 libtaos.so;Windows 下是 taos.dll)。

* libtaos.so
在 Linux 系统中成功安装 TDengine 后,依赖的本地函数库 libtaos.so 文件会被自动拷贝至 /usr/lib/libtaos.so,该目录包含在 Linux 自动扫描路径上,无需单独指定。

* taos.dll
在 Windows 系统中安装完客户端之后,驱动包依赖的 taos.dll 文件会自动拷贝到系统默认搜索路径 C:/Windows/System32 下,同样无需单独指定。

-> 在 Windows 环境开发时需要安装 TDengine 对应的 [windows 客户端][14],Linux 服务器安装完 TDengine 之后默认已安装 client,也可以单独安装 [Linux 客户端][15] 连接远程 TDengine Server。
+> 在 Windows 环境开发时需要安装 TDengine 对应的 [windows 客户端](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client),Linux 服务器安装完 TDengine 之后默认已安装 client,也可以单独安装 [Linux 客户端](https://www.taosdata.com/cn/getting-started/#%E5%AE%A2%E6%88%B7%E7%AB%AF) 连接远程 TDengine Server。

JDBC-JNI 的使用请参见[视频教程](https://www.taosdata.com/blog/2020/11/11/1955.html)。

@@ -189,9 +188,9 @@ TDengine 的 JDBC URL 规范格式为:
`jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]`

url 中的配置参数如下:
-* user:登录 TDengine 用户名,默认值 root。
-* password:用户登录密码,默认值 taosdata。
-* cfgdir:客户端配置文件目录路径,Linux OS 上默认值 /etc/taos ,Windows OS 上默认值 C:/TDengine/cfg。
+* user:登录 TDengine 用户名,默认值 'root'。
+* password:用户登录密码,默认值 'taosdata'。
+* cfgdir:客户端配置文件目录路径,Linux OS 上默认值 `/etc/taos`,Windows OS 上默认值 `C:/TDengine/cfg`。
* charset:客户端使用的字符集,默认值为系统字符集。
* locale:客户端语言环境,默认值系统当前 locale。
* timezone:客户端使用的时区,默认值为系统当前时区。
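综合上述规范与参数,一个带可选参数的 URL 示例如下(主机名沿用前文的 taosdemo.com,取默认值的参数可省略,仅为示意):

```url
jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata&charset=UTF-8&locale=en_US.UTF-8&timezone=UTC-8
```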
@@ -217,9 +216,9 @@ public Connection getConn() throws Exception{
以上示例,建立一个到 hostname 为 taosdemo.com,端口为 6030,数据库名为 test 的连接。注释为使用 JDBC-RESTful 时的方法。这个连接在 url 中指定了用户名(user)为 root,密码(password)为 taosdata,并在 connProps 中指定了使用的字符集、语言环境、时区等信息。

properties 中的配置参数如下:
-* TSDBDriver.PROPERTY_KEY_USER:登录 TDengine 用户名,默认值 root。
-* TSDBDriver.PROPERTY_KEY_PASSWORD:用户登录密码,默认值 taosdata。
-* TSDBDriver.PROPERTY_KEY_CONFIG_DIR:客户端配置文件目录路径,Linux OS 上默认值 /etc/taos ,Windows OS 上默认值 C:/TDengine/cfg。
+* TSDBDriver.PROPERTY_KEY_USER:登录 TDengine 用户名,默认值 'root'。
+* TSDBDriver.PROPERTY_KEY_PASSWORD:用户登录密码,默认值 'taosdata'。
+* TSDBDriver.PROPERTY_KEY_CONFIG_DIR:客户端配置文件目录路径,Linux OS 上默认值 `/etc/taos`,Windows OS 上默认值 `C:/TDengine/cfg`。
* TSDBDriver.PROPERTY_KEY_CHARSET:客户端使用的字符集,默认值为系统字符集。
* TSDBDriver.PROPERTY_KEY_LOCALE:客户端语言环境,默认值系统当前 locale。
* TSDBDriver.PROPERTY_KEY_TIME_ZONE:客户端使用的时区,默认值为系统当前时区。
@@ -217,7 +217,7 @@ taosd -C
| 99 | queryBufferSize | | **S** | MB | 为所有并发查询占用保留的内存大小。 | | | 计算规则可以根据实际应用可能的最大并发数和表的数字相乘,再乘 170。(2.0.15 以前的版本中,此参数的单位是字节) |
| 100 | ratioOfQueryCores | | **S** | | 设置查询线程的最大数量。 | | | 最小值 0 表示只有 1 个查询线程;最大值 2 表示最大建立 2 倍 CPU 核数的查询线程。默认为 1,表示最大和 CPU 核数相等的查询线程。该值可以为小数,即 0.5 表示最大建立 CPU 核数一半的查询线程。 |
| 101 | update | | **S** | | 允许更新已存在的数据行 | 0 \| 1 | 0 | 从 2.0.8.0 版本开始 |
-| 102 | cacheLast | | **S** | | 是否在内存中缓存子表的最近数据 | 0:关闭;1:缓存子表最近一行数据;2:缓存子表每一列的最近的非 NULL 值;3:同时打开缓存最近行和列功能。 | 0 | 2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。 |
+| 102 | cacheLast | | **S** | | 是否在内存中缓存子表的最近数据 | 0:关闭;1:缓存子表最近一行数据;2:缓存子表每一列的最近的非 NULL 值;3:同时打开缓存最近行和列功能。(2.1.2.0 版本开始此参数支持 0~3 的取值范围,在此之前取值只能是 [0, 1]) | 0 | 2.1.2.0 版本之前、2.0.20.7 版本之前在 taos.cfg 文件中不支持此参数。 |
| 103 | numOfCommitThreads | YES | **S** | | 设置写入线程的最大数量 | | | |
| 104 | maxWildCardsLength | | **C** | bytes | 设定 LIKE 算子的通配符字符串允许的最大长度 | 0-16384 | 100 | 2.1.6.1 版本新增。 |
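上表中 queryBufferSize 的经验计算规则(最大并发查询数 × 涉及的表数 × 170)可以写成一个小函数来辅助估算(仅为示意,并非官方工具,具体取值仍需结合实际负载评估):

```java
public class QueryBufferSizeEstimator {
    // 按文档给出的经验规则估算 queryBufferSize(单位:MB):
    // 最大并发查询数 × 涉及的表数 × 170
    static long estimateMb(long maxConcurrentQueries, long numTables) {
        return maxConcurrentQueries * numTables * 170L;
    }

    public static void main(String[] args) {
        // 例如:最大并发 10 个查询,每个查询涉及 100 张表
        System.out.println(estimateMb(10, 100) + " MB"); // 输出:170000 MB
    }
}
```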
@@ -800,7 +800,7 @@ taos -n sync -P 6042 -h <fqdn of server>

`taos -n speed -h <fqdn of server> -P 6030 -N 10 -l 10000000 -S TCP`

-从 2.1.7.0 版本开始,taos 工具新提供了一个网络速度诊断的模式,可以对一个正在运行中的 taosd 实例或者 `taos -n server` 方式模拟的一个服务端实例,以非压缩传输的方式进行网络测速。这个模式下可供调整的参数如下:
+从 2.1.8.0 版本开始,taos 工具新提供了一个网络速度诊断的模式,可以对一个正在运行中的 taosd 实例或者 `taos -n server` 方式模拟的一个服务端实例,以非压缩传输的方式进行网络测速。这个模式下可供调整的参数如下:

-n:设为“speed”时,表示对网络速度进行诊断。
-h:所要连接的服务端的 FQDN 或 ip 地址。如果不设置这一项,会使用本机 taos.cfg 文件中 FQDN 参数的设置作为默认值。

@@ -809,6 +809,15 @@ taos -n sync -P 6042 -h <fqdn of server>
-l:单个网络包的大小(单位:字节)。最小值是 1024、最大值是 1024*1024*1024,默认值为 1000。
-S:网络封包的类型。可以是 TCP 或 UDP,默认值为 TCP。

#### FQDN 解析速度诊断

`taos -n fqdn -h <fqdn of server>`

从 2.1.8.0 版本开始,taos 工具新提供了一个 FQDN 解析速度的诊断模式,可以对一个目标 FQDN 地址尝试解析,并记录解析过程中所消耗的时间。这个模式下可供调整的参数如下:

-n:设为“fqdn”时,表示对 FQDN 解析进行诊断。
-h:所要解析的目标 FQDN 地址。如果不设置这一项,会使用本机 taos.cfg 文件中 FQDN 参数的设置作为默认值。

#### 服务端日志

taosd 服务端日志文件标志位 debugflag 默认为 131,在 debug 时往往需要将其提升到 135 或 143。
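除修改 taos.cfg 并重启外,也可以尝试在 taos shell 中用 ALTER 命令在线调整日志级别(示意写法,dnode 编号 1 为假设,实际编号以 `SHOW DNODES` 的结果为准):

```mysql
ALTER DNODE 1 debugFlag 135;
```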
@@ -1197,8 +1197,6 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数

适用于:**表、超级表**。

-说明:与LAST函数不同,LAST_ROW不支持时间范围限制,强制返回最后一条记录。
-
-限制:LAST_ROW()不能与INTERVAL一起使用。

示例:

@@ -1285,6 +1283,19 @@ TDengine支持针对数据的聚合查询。提供支持的聚合和选择函数

说明:(从 2.1.3.0 版本开始新增此函数)输出结果行数是范围内总行数减一,第一行没有结果输出。DERIVATIVE 函数可以在由 GROUP BY 划分出单独时间线的情况下用于超级表(也即 GROUP BY tbname)。

示例:
```mysql
taos> select derivative(current, 10m, 0) from t1;
           ts            | derivative(current, 10m, 0) |
========================================================
 2021-08-20 10:11:22.790 |                 0.500000000 |
 2021-08-20 11:11:22.791 |                 0.166666620 |
 2021-08-20 12:11:22.791 |                 0.000000000 |
 2021-08-20 13:11:22.792 |                 0.166666620 |
 2021-08-20 14:11:22.792 |                -0.666666667 |
Query OK, 5 row(s) in set (0.004883s)
```

- **SPREAD**
```mysql
SELECT SPREAD(field_name) FROM { tb_name | stb_name } [WHERE clause];
```
@@ -71,7 +71,7 @@ TDengine is a highly efficient platform to store, query, and analyze time-series
## [Connector](/connector)

- [C/C++ Connector](/connector#c-cpp): primary method to connect to TDengine server through libtaos client library
-- [Java Connector(JDBC)]: driver for connecting to the server from Java applications using the JDBC API
+- [Java Connector(JDBC)](/connector/java): driver for connecting to the server from Java applications using the JDBC API
- [Python Connector](/connector#python): driver for connecting to TDengine server from Python applications
- [RESTful Connector](/connector#restful): a simple way to interact with TDengine via HTTP
- [Go Connector](/connector#go): driver for connecting to TDengine server from Go applications
@@ -16,9 +16,7 @@ Please visit our [TDengine Official Docker Image: Distribution, Downloading, and

TDengine is extremely easy to install: it takes only a few seconds from download to successful installation. The server installation package includes clients and connectors. We provide 3 installation packages, which you can choose from according to actual needs:

-Click [here](https://www.taosdata.com/cn/getting-started/#%E9%80%9A%E8%BF%87%E5%AE%89%E8%A3%85%E5%8C%85%E5%AE%89%E8%A3%85) to download the install package.
-
-For more about installation process, please refer [TDengine Installation Packages: Install and Uninstall](https://www.taosdata.com/blog/2019/08/09/566.html), and [Video Tutorials](https://www.taosdata.com/blog/2020/11/11/1941.html).
+Click [here](https://www.taosdata.com/en/getting-started/#Install-from-Package) to download the install package.

## <a class="anchor" id="start"></a>Quick Launch
@@ -0,0 +1,517 @@
# Java connector

## Introduction

The taos-jdbcdriver is implemented in two forms: JDBC-JNI and JDBC-RESTful (supported from taos-jdbcdriver-2.0.18). JDBC-JNI is implemented by calling the local methods of libtaos.so (or taos.dll) on the client, while JDBC-RESTful encapsulates the RESTful interface implementation internally.



The figure above shows the three ways Java applications can access TDengine:

* JDBC-JNI: The Java application uses JDBC-JNI's API on physical node1 (pnode1) and directly calls the client API (libtaos.so or taos.dll) to send write or query requests to the taosd instance on physical node2 (pnode2).
* RESTful: The Java application sends the SQL to the RESTful connector on physical node2 (pnode2), which then calls the client API (libtaos.so).
* JDBC-RESTful: The Java application uses the JDBC-RESTful API to encapsulate SQL into a RESTful request and send it to the RESTful connector of physical node2 (pnode2).

In terms of implementation, the JDBC driver of TDengine is as consistent as possible with the behavior of a relational database driver. However, due to the differences between TDengine and relational databases in the objects and technical characteristics of their services, there are some differences between taos-jdbcdriver and a traditional relational database JDBC driver. The following points should be watched:

* Deleting a record is not supported in TDengine.
* Transactions are not supported in TDengine.

### Difference between JDBC-JNI and JDBC-RESTful

<table>
<tr align="center"><th>Difference</th><th>JDBC-JNI</th><th>JDBC-RESTful</th></tr>
<tr align="center">
<td>Supported OS</td>
<td>Linux, Windows</td>
<td>all platforms</td>
</tr>
<tr align="center">
<td>Whether the client needs to be installed</td>
<td>needed</td>
<td>not needed</td>
</tr>
<tr align="center">
<td>Whether the client needs to be upgraded after the server is upgraded</td>
<td>needed</td>
<td>not needed</td>
</tr>
<tr align="center">
<td>Write performance</td>
<td colspan="2">JDBC-RESTful is 50% to 90% of JDBC-JNI</td>
</tr>
<tr align="center">
<td>Read performance</td>
<td colspan="2">JDBC-RESTful is no different from JDBC-JNI</td>
</tr>
</table>

**Note**: RESTful interfaces are stateless. Therefore, when using JDBC-RESTful, you should specify the database name before all table names and super table names in SQL, for example:

```sql
INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('beijing') VALUES(now, 24.6);
```
## JDBC driver version and supported TDengine and JDK versions

| taos-jdbcdriver | TDengine | JDK |
| -------------------- | ----------------- | -------- |
| 2.0.33 - 2.0.34 | 2.0.3.0 and above | 1.8.x |
| 2.0.31 - 2.0.32 | 2.1.3.0 and above | 1.8.x |
| 2.0.22 - 2.0.30 | 2.0.18.0 - 2.1.2.x | 1.8.x |
| 2.0.12 - 2.0.21 | 2.0.8.0 - 2.0.17.x | 1.8.x |
| 2.0.4 - 2.0.11 | 2.0.0.0 - 2.0.7.x | 1.8.x |
| 1.0.3 | 1.6.1.x and above | 1.8.x |
| 1.0.2 | 1.6.1.x and above | 1.8.x |
| 1.0.1 | 1.6.1.x and above | 1.8.x |

## DataType in TDengine and Java connector

TDengine supports the following data types; the corresponding Java data types are:

| TDengine DataType | Java DataType |
| ----------------- | ------------------ |
| TIMESTAMP | java.sql.Timestamp |
| INT | java.lang.Integer |
| BIGINT | java.lang.Long |
| FLOAT | java.lang.Float |
| DOUBLE | java.lang.Double |
| SMALLINT | java.lang.Short |
| TINYINT | java.lang.Byte |
| BOOL | java.lang.Boolean |
| BINARY | byte[] |
| NCHAR | java.lang.String |
## Install Java connector

### Runtime Requirements

To run TDengine's Java connector, the following requirements shall be met:

1. A Linux or Windows system

2. Java Runtime Environment 1.8 or later

3. TDengine client (required for JDBC-JNI, not required for JDBC-RESTful)

**Note**:

* After the TDengine client is successfully installed on Linux, the libtaos.so file is automatically copied to /usr/lib/libtaos.so, which is included in the Linux automatic scan path and does not need to be specified separately.
* After the TDengine client is installed on Windows, the taos.dll file that the driver package depends on is automatically copied to the default search path C:/Windows/System32. You do not need to specify it separately.

### Obtain JDBC driver by maven

For Java developers, TDengine provides `taos-jdbcdriver` according to the JDBC (3.0) API. Users can find and download it through the [Sonatype Repository](https://search.maven.org/artifact/com.taosdata.jdbc/taos-jdbcdriver). Add the following dependency in pom.xml for your maven project.

```xml
<dependencies>
  <dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>2.0.34</version>
  </dependency>
</dependencies>
```

### Obtain JDBC driver by compiling source code

You can download the TDengine source code and compile the latest version of the JDBC connector.

```shell
git clone https://github.com/taosdata/TDengine.git
cd TDengine/src/connector/jdbc
mvn clean package -Dmaven.test.skip=true
```

A taos-jdbcdriver-2.0.xx-dist.jar file will be generated in the target directory.
## Usage of java connector
|
||||
|
||||
### Establishing a Connection
|
||||
|
||||
#### Establishing a connection with URL
|
||||
|
||||
Establish the connection by specifying the URL, as shown below:
|
||||
|
||||
```java
|
||||
String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
|
||||
Connection conn = DriverManager.getConnection(jdbcUrl);
|
||||
```
|
||||
|
||||
In the example above, the JDBC-RESTful driver is used to establish a connection to the hostname of 'taosdemo.com', port of 6041, and database name of 'test'. This URL specifies the user name as 'root' and the password as 'taosdata'.
|
||||
|
||||
The JDBC-RESTful does not depend on the local function library. Compared with JDBC-JNI, only the following is required:
|
||||
|
||||
* DriverClass designated as "com.taosdata.jdbc.rs.RestfulDriver"
|
||||
* JdbcUrl starts with "JDBC:TAOS-RS://"
|
||||
* Use port 6041 as the connection port
|
||||
|
||||
For better write and query performance, Java applications can use the JDBC-JNI driver, as shown below:
|
||||
|
||||
```java
|
||||
String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
|
||||
Connection conn = DriverManager.getConnection(jdbcUrl);
|
||||
```
|
||||
|
||||
In the example above, The JDBC-JNI driver is used to establish a connection to the hostname of 'taosdemo.com', port 6030 (TDengine's default port), and database name of 'test'. This URL specifies the user name as 'root' and the password as 'taosdata'.
|
||||
|
||||
<!-- You can also see the JDBC-JNI video tutorial: [JDBC connector of TDengine](https://www.taosdata.com/blog/2020/11/11/1955.html) -->
|
||||
|
||||
The format of JDBC URL is:
|
||||
|
||||
```url
|
||||
jdbc:[TAOS|TAOS-RS]://[host_name]:[port]/[database_name]?[user={user}|&password={password}|&charset={charset}|&cfgdir={config_dir}|&locale={locale}|&timezone={timezone}]
|
||||
```
|
||||
|
||||
The configuration parameters in the URL are as follows:
|
||||
|
||||
* user: user name for logging in to the TDengine. The default value is 'root'.
|
||||
* password: the user login password. The default value is 'taosdata'.
|
||||
* cfgdir: directory of the client configuration file. It is valid only for JDBC-JNI. The default value is `/etc/taos` on Linux and `C:/TDengine/cfg` on Windows.
|
||||
* charset: character set used by the client. The default value is the system character set.
|
||||
* locale: client locale. The default value is the current system locale.
|
||||
* timezone: timezone used by the client. The default value is the current timezone of the system.
* batchfetch: only valid for JDBC-JNI. True if batch ResultSet fetching is enabled; false if row-by-row ResultSet fetching is enabled. The default value is false.
* timestampFormat: only valid for JDBC-RESTful. 'TIMESTAMP' if you want to get a long value in a ResultSet; 'UTC' if you want to get a string in UTC date-time format in a ResultSet; 'STRING' if you want to get a local date-time format string in a ResultSet. The default value is 'STRING'.
* batchErrorIgnore: true if you want to continue executing the remaining SQL statements when an error occurs during Statement.executeBatch; false if the remaining SQL statements should not be executed. The default value is false.

#### Establishing a connection with URL and Properties

In addition to establishing the connection with the specified URL, you can also use Properties to specify parameters when setting up the connection, as shown below:

```java
public Connection getConn() throws Exception{
    String jdbcUrl = "jdbc:TAOS://taosdemo.com:6030/test?user=root&password=taosdata";
    // String jdbcUrl = "jdbc:TAOS-RS://taosdemo.com:6041/test?user=root&password=taosdata";
    Properties connProps = new Properties();
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
    Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
    return conn;
}
```

In the example above, JDBC-JNI is used to establish a connection to the host 'taosdemo.com', port 6030, and database 'test'. The commented-out line shows the URL for JDBC-RESTful. The connection specifies the user name 'root' and the password 'taosdata' in the URL, and sets the character set, locale, time zone, and so on in connProps.

The configuration parameters in Properties are as follows:

* TSDBDriver.PROPERTY_KEY_USER: user name for logging in to TDengine. The default value is 'root'.
* TSDBDriver.PROPERTY_KEY_PASSWORD: the user login password. The default value is 'taosdata'.
* TSDBDriver.PROPERTY_KEY_CONFIG_DIR: directory of the client configuration file. It is valid only for JDBC-JNI. The default value is `/etc/taos` on Linux and `C:/TDengine/cfg` on Windows.
* TSDBDriver.PROPERTY_KEY_CHARSET: character set used by the client. The default value is the system character set.
* TSDBDriver.PROPERTY_KEY_LOCALE: client locale. The default value is the current system locale.
* TSDBDriver.PROPERTY_KEY_TIME_ZONE: timezone used by the client. The default value is the current timezone of the system.
* TSDBDriver.PROPERTY_KEY_BATCH_LOAD: only valid for JDBC-JNI. True if batch ResultSet fetching is enabled; false if row-by-row ResultSet fetching is enabled. The default value is false.
* TSDBDriver.PROPERTY_KEY_TIMESTAMP_FORMAT: only valid for JDBC-RESTful. 'TIMESTAMP' if you want to get a long value in a ResultSet; 'UTC' if you want to get a string in UTC date-time format in a ResultSet; 'STRING' if you want to get a local date-time format string in a ResultSet. The default value is 'STRING'.
* TSDBDriver.PROPERTY_KEY_BATCH_ERROR_IGNORE: true if you want to continue executing the remaining SQL statements when an error occurs during Statement.executeBatch; false if the remaining SQL statements should not be executed. The default value is false.

#### Establishing a connection with configuration file

When JDBC-JNI is used to connect to a TDengine cluster, you can specify the firstEp and secondEp parameters of the cluster in the client configuration file, as follows:

1. Do not specify the hostname and port in the Java application:

```java
public Connection getConn() throws Exception{
    String jdbcUrl = "jdbc:TAOS://:/test?user=root&password=taosdata";
    Properties connProps = new Properties();
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_CHARSET, "UTF-8");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_LOCALE, "en_US.UTF-8");
    connProps.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "UTC-8");
    Connection conn = DriverManager.getConnection(jdbcUrl, connProps);
    return conn;
}
```

2. Specify firstEp and secondEp in the configuration file:

```txt
# first fully qualified domain name (FQDN) for TDengine system
firstEp    cluster_node1:6030

# second fully qualified domain name (FQDN) for TDengine system, for cluster only
secondEp   cluster_node2:6030
```

In the example above, the JDBC driver uses the client configuration file to establish a connection to the host 'cluster_node1', port 6030, and database 'test'. When the firstEp node in the cluster fails, JDBC tries to connect to the cluster using secondEp. As long as either firstEp or secondEp is valid, the connection to the cluster can be established.

**Note**: In this case, the configuration file belongs to the TDengine client running inside the Java application. The default file path is '/etc/taos/taos.cfg' on Linux and 'C:/TDengine/cfg/taos.cfg' on Windows.

#### Priority of the parameters

If the same parameter is set in the URL, the Properties, and the client configuration file, the priorities in descending order are as follows:

1. URL parameters
2. Properties
3. Client configuration file taos.cfg

For example, if you specify the password as 'taosdata' in the URL and as 'taosdemo' in the Properties, JDBC establishes the connection using the password in the URL.

For details, see Client Configuration: [client configuration](https://www.taosdata.com/en/documentation/administrator#client)
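
The priority rule above can be sketched in plain Java. This helper is purely illustrative (`resolve` is not part of the TDengine driver): it picks the first value found in URL parameters, then Properties, then the configuration file.

```java
import java.util.Properties;

public class ParamPriority {
    // Hypothetical helper: resolve one parameter the way the priority rule
    // describes — URL parameters first, then Properties, then taos.cfg.
    static String resolve(String key, Properties urlParams, Properties props, Properties cfgFile) {
        if (urlParams.getProperty(key) != null) return urlParams.getProperty(key);
        if (props.getProperty(key) != null) return props.getProperty(key);
        return cfgFile.getProperty(key);
    }

    public static void main(String[] args) {
        Properties url = new Properties();
        url.setProperty("password", "taosdata");   // from the JDBC URL
        Properties props = new Properties();
        props.setProperty("password", "taosdemo"); // from connProps
        Properties cfg = new Properties();         // from taos.cfg (empty here)
        // The URL value wins, matching the example in the text.
        System.out.println(resolve("password", url, props, cfg)); // prints "taosdata"
    }
}
```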

### Create database and table

```java
Statement stmt = conn.createStatement();

// create database
stmt.executeUpdate("create database if not exists db");

// use database
stmt.executeUpdate("use db");

// create table
stmt.executeUpdate("create table if not exists tb (ts timestamp, temperature int, humidity float)");
```

### Insert

```java
// insert data
int affectedRows = stmt.executeUpdate("insert into tb values(now, 23, 10.3) (now + 1s, 20, 9.3)");
System.out.println("insert " + affectedRows + " rows.");
```

**Note**: 'now' is an internal system function whose default value is the current time of the machine where the client resides. 'now + 1s' means one second after the current client time. The available time units are a (millisecond), s (second), m (minute), h (hour), d (day), w (week), n (month), and y (year).
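
The fixed-length units can be converted to milliseconds mechanically. The sketch below is illustrative only (it is not driver code, and it omits n and y, which depend on the calendar):

```java
public class TimeUnitOffset {
    // Illustrative: convert an offset such as "1s" or "500a" to milliseconds,
    // using the unit letters listed in the note above.
    static long toMillis(String offset) {
        long value = Long.parseLong(offset.substring(0, offset.length() - 1));
        char unit = offset.charAt(offset.length() - 1);
        switch (unit) {
            case 'a': return value;                    // millisecond
            case 's': return value * 1000L;            // second
            case 'm': return value * 60_000L;          // minute
            case 'h': return value * 3_600_000L;       // hour
            case 'd': return value * 86_400_000L;      // day
            case 'w': return value * 7L * 86_400_000L; // week
            default: throw new IllegalArgumentException("unsupported unit: " + unit);
        }
    }

    public static void main(String[] args) {
        System.out.println(toMillis("1s")); // 1000
        System.out.println(toMillis("2h")); // 7200000
    }
}
```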

### Query

```java
// query data
ResultSet resultSet = stmt.executeQuery("select * from tb");
Timestamp ts = null;
int temperature = 0;
float humidity = 0;
while(resultSet.next()){
    ts = resultSet.getTimestamp(1);
    temperature = resultSet.getInt(2);
    humidity = resultSet.getFloat("humidity");
    System.out.printf("%s, %d, %s\n", ts, temperature, humidity);
}
```

**Note**: Querying works the same as in a relational database; column indexes in ResultSet start from 1.

### Handle exceptions

```java
try (Statement statement = connection.createStatement()) {
    // executeQuery
    ResultSet resultSet = statement.executeQuery(sql);
    // print result
    printResult(resultSet);
} catch (SQLException e) {
    System.out.println("ERROR Message: " + e.getMessage());
    System.out.println("ERROR Code: " + e.getErrorCode());
    e.printStackTrace();
}
```

The Java connector may report three types of error codes: JDBC driver errors (codes ranging from 0x2301 to 0x2350), JNI method errors (codes ranging from 0x2351 to 0x2400), and TDengine errors. For details about the error codes, see:

- https://github.com/taosdata/TDengine/blob/develop/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java
- https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h
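
The ranges above can be used to tell where an error originated. The helper below is a sketch, not part of the driver; it only applies the two ranges quoted in the text to a code obtained from SQLException.getErrorCode():

```java
public class ErrorCodeRange {
    // Illustrative: map an error code to its source using the ranges above.
    static String classify(int code) {
        if (code >= 0x2301 && code <= 0x2350) return "JDBC driver";
        if (code >= 0x2351 && code <= 0x2400) return "JNI method";
        return "TDengine";
    }

    public static void main(String[] args) {
        System.out.println(classify(0x2310)); // JDBC driver
        System.out.println(classify(0x2360)); // JNI method
    }
}
```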

### Write data through parameter binding

Since version 2.1.2.0, TDengine's JDBC-JNI implementation has significantly improved parameter binding support for data write (INSERT) scenarios. Data can be written in the following way, avoiding SQL parsing and significantly improving write performance. (**Note**: parameter binding is not supported in JDBC-RESTful.)

```java
Statement stmt = conn.createStatement();
Random r = new Random();

TSDBPreparedStatement s = (TSDBPreparedStatement) conn.prepareStatement("insert into ? using weather_test tags (?, ?) (ts, c1, c2) values(?, ?, ?)");

s.setTableName("w1");

s.setTagInt(0, r.nextInt(10));
s.setTagString(1, "Beijing");

int numOfRows = 10;

ArrayList<Long> ts = new ArrayList<>();
for (int i = 0; i < numOfRows; i++){
    ts.add(System.currentTimeMillis() + i);
}
s.setTimestamp(0, ts);

ArrayList<Integer> s1 = new ArrayList<>();
for (int i = 0; i < numOfRows; i++){
    s1.add(r.nextInt(100));
}
s.setInt(1, s1);

ArrayList<String> s2 = new ArrayList<>();
for (int i = 0; i < numOfRows; i++){
    s2.add("test" + r.nextInt(100));
}
s.setString(2, s2, 10);

s.columnDataAddBatch();
s.columnDataExecuteBatch();

s.columnDataClearBatch();
s.columnDataCloseBatch();
```

The methods used to set tags are:

```java
public void setTagNull(int index, int type)
public void setTagBoolean(int index, boolean value)
public void setTagInt(int index, int value)
public void setTagByte(int index, byte value)
public void setTagShort(int index, short value)
public void setTagLong(int index, long value)
public void setTagTimestamp(int index, long value)
public void setTagFloat(int index, float value)
public void setTagDouble(int index, double value)
public void setTagString(int index, String value)
public void setTagNString(int index, String value)
```

The methods used to set columns are:

```java
public void setInt(int columnIndex, ArrayList<Integer> list) throws SQLException
public void setFloat(int columnIndex, ArrayList<Float> list) throws SQLException
public void setTimestamp(int columnIndex, ArrayList<Long> list) throws SQLException
public void setLong(int columnIndex, ArrayList<Long> list) throws SQLException
public void setDouble(int columnIndex, ArrayList<Double> list) throws SQLException
public void setBoolean(int columnIndex, ArrayList<Boolean> list) throws SQLException
public void setByte(int columnIndex, ArrayList<Byte> list) throws SQLException
public void setShort(int columnIndex, ArrayList<Short> list) throws SQLException
public void setString(int columnIndex, ArrayList<String> list, int size) throws SQLException
public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
```

**Note**: Both setString and setNString require the user to pass the declared column width of the corresponding column in the table definition as the size parameter.

### Data Subscription

#### Subscribe

```java
TSDBSubscribe sub = ((TSDBConnection)conn).subscribe("topic", "select * from meters", false);
```

Parameters:

* topic: the unique topic name of the subscription.
* sql: a select statement.
* restart: true to restart a subscription that already exists; false to continue the previous subscription.

In the example above, a subscription named 'topic' is created using the SQL statement 'select * from meters'. If the subscription already exists, it continues from the previous query progress rather than consuming all the data from scratch.

#### Consume

```java
int total = 0;
while(true) {
    TSDBResultSet rs = sub.consume();
    int count = 0;
    while(rs.next()) {
        count++;
    }
    total += count;
    System.out.printf("%d rows consumed, total %d\n", count, total);
    Thread.sleep(1000);
}
```

The consume method returns a result set containing all the new data since the last consume. Make sure to call consume only as often as you need (note the Thread.sleep(1000) in the example); otherwise you will put unnecessary stress on the server.
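
The consume loop above can be generalized as a rate-limited polling pattern. The sketch below is a self-contained illustration, not driver code: the Supplier stands in for sub.consume() and returns the number of new rows per call, and the loop sleeps between polls exactly as in the example.

```java
import java.util.function.Supplier;

public class PollLoop {
    // Illustrative: poll a source at a fixed interval and stop once a target
    // number of rows has been consumed.
    static int pollUntil(Supplier<Integer> consume, int target, long intervalMillis)
            throws InterruptedException {
        int total = 0;
        while (total < target) {
            total += consume.get();          // stands in for counting sub.consume() rows
            Thread.sleep(intervalMillis);    // throttle, as in the example above
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        // A fake source that yields 3 rows per poll; stops after reaching 10.
        System.out.println(pollUntil(() -> 3, 10, 10)); // 12
    }
}
```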

#### Close

```java
sub.close(true);
// release resources
resultSet.close();
stmt.close();
conn.close();
```

The close method closes a subscription. If the parameter is true, the subscription progress information is retained, and a subscription with the same name can be created later to continue consuming data. If false, the subscription progress is not retained.

**Note**: the connection must be closed; otherwise, a connection leak may occur.

## Connection Pool

### HikariCP example

```java
public static void main(String[] args) throws SQLException {
    HikariConfig config = new HikariConfig();
    // jdbc properties
    config.setJdbcUrl("jdbc:TAOS://127.0.0.1:6030/log");
    config.setUsername("root");
    config.setPassword("taosdata");
    // connection pool configurations
    config.setMinimumIdle(10);          // minimum number of idle connections
    config.setMaximumPoolSize(10);      // maximum number of connections in the pool
    config.setConnectionTimeout(30000); // maximum wait in milliseconds to get a connection from the pool
    config.setMaxLifetime(0);           // maximum lifetime of each connection
    config.setIdleTimeout(0);           // maximum idle time before an idle connection is recycled
    config.setConnectionTestQuery("select server_status()"); // validation query

    HikariDataSource ds = new HikariDataSource(config); // create datasource
    Connection connection = ds.getConnection();         // get connection
    Statement statement = connection.createStatement(); // get statement
    // query or insert
    // ...
    connection.close(); // put back to connection pool
}
```

### Druid example

```java
public static void main(String[] args) throws Exception {
    DruidDataSource dataSource = new DruidDataSource();
    // jdbc properties
    dataSource.setDriverClassName("com.taosdata.jdbc.TSDBDriver");
    dataSource.setUrl(url);
    dataSource.setUsername("root");
    dataSource.setPassword("taosdata");
    // pool configurations
    dataSource.setInitialSize(10);
    dataSource.setMinIdle(10);
    dataSource.setMaxActive(10);
    dataSource.setMaxWait(30000);
    dataSource.setValidationQuery("select server_status()");

    Connection connection = dataSource.getConnection(); // get connection
    Statement statement = connection.createStatement(); // get statement
    // query or insert
    // ...
    connection.close(); // put back to connection pool
}
```

**Note**:

As of TDengine V1.6.4.1, the function select server_status() is supported specifically for heartbeat detection, so it is recommended to use select server_status() as the validation query when using connection pools.

select server_status() returns 1 on success, as shown below.

```sql
taos> select server_status();
server_status()|
================
1              |
Query OK, 1 row(s) in set (0.000141s)
```

## Integrated with framework

- Please refer to [SpringJdbcTemplate](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/SpringJdbcTemplate) if using taos-jdbcdriver with Spring JdbcTemplate.
- Please refer to [springbootdemo](https://github.com/taosdata/TDengine/tree/develop/tests/examples/JDBC/springbootdemo) if using taos-jdbcdriver with Spring Boot.

## FAQ

- java.lang.UnsatisfiedLinkError: no taos in java.library.path

  **Cause**: The application cannot find the native library *taos*.

  **Answer**: On Windows, copy `C:\TDengine\driver\taos.dll` to `C:\Windows\System32\`; on Linux, create a soft link with `ln -s /usr/local/taos/driver/libtaos.so.x.x.x.x /usr/lib/libtaos.so`.

- java.lang.UnsatisfiedLinkError: taos.dll Can't load AMD 64 bit on a IA 32-bit platform

  **Cause**: Currently TDengine only supports 64-bit JDK.

  **Answer**: Reinstall a 64-bit JDK.

- For other questions, please refer to [Issues](https://github.com/taosdata/TDengine/issues)

@@ -284,3 +284,5 @@ keepColumnName 1
# 0 no query allowed, queries are disabled
# queryBufferSize -1

# percent of redundant data in tsdb meta will compact meta data,0 means donot compact
# tsdbMetaCompactRatio 0

@@ -142,6 +142,7 @@ function install_bin() {
if [ "$osType" != "Darwin" ]; then
${csudo} rm -f ${bin_link_dir}/taosd || :
${csudo} rm -f ${bin_link_dir}/taosdemo || :
${csudo} rm -f ${bin_link_dir}/perfMonitor || :
${csudo} rm -f ${bin_link_dir}/taosdump || :
${csudo} rm -f ${bin_link_dir}/set_core || :
fi

@@ -167,6 +168,7 @@ function install_bin() {
[ -x ${install_main_dir}/bin/taosd ] && ${csudo} ln -s ${install_main_dir}/bin/taosd ${bin_link_dir}/taosd || :
[ -x ${install_main_dir}/bin/taosdump ] && ${csudo} ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
[ -x ${install_main_dir}/bin/taosdemo ] && ${csudo} ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
[ -x ${install_main_dir}/bin/perfMonitor ] && ${csudo} ln -s ${install_main_dir}/bin/perfMonitor ${bin_link_dir}/perfMonitor || :
[ -x ${install_main_dir}/set_core.sh ] && ${csudo} ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
fi

@@ -4,6 +4,8 @@ PROJECT(TDengine)
INCLUDE_DIRECTORIES(inc)
INCLUDE_DIRECTORIES(jni)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/query/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/deps/zlib-1.2.11/inc)
INCLUDE_DIRECTORIES(${TD_COMMUNITY_DIR}/src/plugins/http/inc)
AUX_SOURCE_DIRECTORY(src SRC)

IF (TD_LINUX)

@@ -215,6 +215,7 @@ SExprInfo* tscExprUpdate(SQueryInfo* pQueryInfo, int32_t index, int16_t function
int16_t size);

size_t tscNumOfExprs(SQueryInfo* pQueryInfo);
int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo);
SExprInfo *tscExprGet(SQueryInfo* pQueryInfo, int32_t index);
int32_t tscExprCopy(SArray* dst, const SArray* src, uint64_t uid, bool deepcopy);
int32_t tscExprCopyAll(SArray* dst, const SArray* src, bool deepcopy);

@@ -38,6 +38,11 @@ extern "C" {
#include "qUtil.h"
#include "tcmdtype.h"

typedef enum {
  TAOS_REQ_FROM_SHELL,
  TAOS_REQ_FROM_HTTP
} SReqOrigin;

// forward declaration
struct SSqlInfo;

@@ -340,6 +345,7 @@ typedef struct STscObj {
  SRpcCorEpSet *tscCorMgmtEpSet;
  pthread_mutex_t mutex;
  int32_t numOfObj; // number of sqlObj from this tscObj
  SReqOrigin from;
} STscObj;

typedef struct SSubqueryState {

@@ -643,7 +643,7 @@ static void doExecuteFinalMerge(SOperatorInfo* pOperator, int32_t numOfExpr, SSD
for(int32_t j = 0; j < numOfExpr; ++j) {
  pCtx[j].pOutput += (pCtx[j].outputBytes * numOfRows);
  if (pCtx[j].functionId == TSDB_FUNC_TOP || pCtx[j].functionId == TSDB_FUNC_BOTTOM) {
    pCtx[j].ptsOutputBuf = pCtx[0].pOutput;
    if(j > 0)pCtx[j].ptsOutputBuf = pCtx[j - 1].pOutput;
  }
}

@@ -40,6 +40,7 @@
#include "qScript.h"
#include "ttype.h"
#include "qFilter.h"
#include "httpInt.h"

#define DEFAULT_PRIMARY_TIMESTAMP_COL_NAME "_c0"

@@ -1687,8 +1688,28 @@ static bool has(SArray* pFieldList, int32_t startIdx, const char* name) {
static char* getAccountId(SSqlObj* pSql) { return pSql->pTscObj->acctId; }

static char* cloneCurrentDBName(SSqlObj* pSql) {
  char *p = NULL;
  HttpContext *pCtx = NULL;

  pthread_mutex_lock(&pSql->pTscObj->mutex);
  char *p = strdup(pSql->pTscObj->db);
  STscObj *pTscObj = pSql->pTscObj;
  switch (pTscObj->from) {
    case TAOS_REQ_FROM_HTTP:
      pCtx = pSql->param;
      if (pCtx && pCtx->db[0] != '\0') {
        char db[TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN] = {0};
        int32_t len = sprintf(db, "%s%s%s", pTscObj->acctId, TS_PATH_DELIMITER, pCtx->db);
        assert(len <= sizeof(db));

        p = strdup(db);
      }
      break;
    default:
      break;
  }
  if (p == NULL) {
    p = strdup(pSql->pTscObj->db);
  }
  pthread_mutex_unlock(&pSql->pTscObj->mutex);

  return p;

@@ -2607,13 +2628,12 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col

// set the first column ts for diff query
if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
  colIndex += 1;
  SColumnIndex indexTS = {.tableIndex = index.tableIndex, .columnIndex = 0};
  SExprInfo* pExpr = tscExprAppend(pQueryInfo, TSDB_FUNC_TS_DUMMY, &indexTS, TSDB_DATA_TYPE_TIMESTAMP,
                                   TSDB_KEYSIZE, getNewResColId(pCmd), TSDB_KEYSIZE, false);

  SColumnList ids = createColumnList(1, 0, 0);
  insertResultField(pQueryInfo, 0, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].name, pExpr);
  insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP, aAggs[TSDB_FUNC_TS_DUMMY].name, pExpr);
}

SExprInfo* pExpr = tscExprAppend(pQueryInfo, functionId, &index, resultType, resultSize, getNewResColId(pCmd), intermediateResSize, false);

@@ -2886,7 +2906,7 @@ int32_t addExprAndResultField(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, int32_t col

const int32_t TS_COLUMN_INDEX = PRIMARYKEY_TIMESTAMP_COL_INDEX;
SColumnList ids = createColumnList(1, index.tableIndex, TS_COLUMN_INDEX);
insertResultField(pQueryInfo, TS_COLUMN_INDEX, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP,
insertResultField(pQueryInfo, colIndex, &ids, TSDB_KEYSIZE, TSDB_DATA_TYPE_TIMESTAMP,
                  aAggs[TSDB_FUNC_TS].name, pExpr);

colIndex += 1; // the first column is ts

@@ -5883,13 +5903,15 @@ int32_t validateOrderbyNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SSqlNode* pSq
  pQueryInfo->groupbyExpr.orderType = p1->sortOrder;
  pQueryInfo->order.orderColId = pSchema[index.columnIndex].colId;
} else if (isTopBottomQuery(pQueryInfo)) {
  int32_t topBotIndex = tscGetTopBotQueryExprIndex(pQueryInfo);
  assert(topBotIndex >= 1);
  /* order of top/bottom query in interval is not valid */
  SExprInfo* pExpr = tscExprGet(pQueryInfo, topBotIndex-1);

  int32_t pos = tscExprTopBottomIndex(pQueryInfo);
  assert(pos > 0);
  SExprInfo* pExpr = tscExprGet(pQueryInfo, pos - 1);
  assert(pExpr->base.functionId == TSDB_FUNC_TS);

  pExpr = tscExprGet(pQueryInfo, topBotIndex);
  pExpr = tscExprGet(pQueryInfo, pos);

  if (pExpr->base.colInfo.colIndex != index.columnIndex && index.columnIndex != PRIMARYKEY_TIMESTAMP_COL_INDEX) {
    return invalidOperationMsg(pMsgBuf, msg5);
  }

@@ -5980,13 +6002,13 @@ int32_t validateOrderbyNode(SSqlCmd* pCmd, SQueryInfo* pQueryInfo, SSqlNode* pSq
    return invalidOperationMsg(pMsgBuf, msg8);
  }
} else {
  int32_t topBotIndex = tscGetTopBotQueryExprIndex(pQueryInfo);
  assert(topBotIndex >= 1);
  /* order of top/bottom query in interval is not valid */
  SExprInfo* pExpr = tscExprGet(pQueryInfo, topBotIndex-1);
  int32_t pos = tscExprTopBottomIndex(pQueryInfo);
  assert(pos > 0);
  SExprInfo* pExpr = tscExprGet(pQueryInfo, pos - 1);
  assert(pExpr->base.functionId == TSDB_FUNC_TS);

  pExpr = tscExprGet(pQueryInfo, topBotIndex);
  pExpr = tscExprGet(pQueryInfo, pos);

  if (pExpr->base.colInfo.colIndex != index.columnIndex && index.columnIndex != PRIMARYKEY_TIMESTAMP_COL_INDEX) {
    return invalidOperationMsg(pMsgBuf, msg5);
  }

@@ -2678,7 +2678,7 @@ int tscProcessQueryRsp(SSqlObj *pSql) {
  return 0;
}

static void decompressQueryColData(SSqlRes *pRes, SQueryInfo* pQueryInfo, char **data, int8_t compressed, int compLen) {
static void decompressQueryColData(SSqlObj *pSql, SSqlRes *pRes, SQueryInfo* pQueryInfo, char **data, int8_t compressed, int32_t compLen) {
  int32_t decompLen = 0;
  int32_t numOfCols = pQueryInfo->fieldsInfo.numOfOutput;
  int32_t *compSizes;

@@ -2715,6 +2715,9 @@ static void decompressQueryColData(SSqlRes *pRes, SQueryInfo* pQueryInfo, char *
    pData = *data + compLen + numOfCols * sizeof(int32_t);
  }

  tscDebug("0x%"PRIx64" decompress col data, compressed size:%d, decompressed size:%d",
      pSql->self, (int32_t)(compLen + numOfCols * sizeof(int32_t)), decompLen);

  int32_t tailLen = pRes->rspLen - sizeof(SRetrieveTableRsp) - decompLen;
  memmove(*data + decompLen, pData, tailLen);
  memmove(*data, outputBuf, decompLen);

@@ -2749,7 +2752,7 @@ int tscProcessRetrieveRspFromNode(SSqlObj *pSql) {
  //Decompress col data if compressed from server
  if (pRetrieve->compressed) {
    int32_t compLen = htonl(pRetrieve->compLen);
    decompressQueryColData(pRes, pQueryInfo, &pRes->data, pRetrieve->compressed, compLen);
    decompressQueryColData(pSql, pRes, pQueryInfo, &pRes->data, pRetrieve->compressed, compLen);
  }

  STableMetaInfo *pTableMetaInfo = tscGetMetaInfo(pQueryInfo, 0);

@@ -2434,6 +2434,26 @@ int32_t tscHandleFirstRoundStableQuery(SSqlObj *pSql) {
  return terrno;
}

typedef struct SPair {
  int32_t first;
  int32_t second;
} SPair;

static void doSendQueryReqs(SSchedMsg* pSchedMsg) {
  SSqlObj* pSql = pSchedMsg->ahandle;
  SPair* p = pSchedMsg->msg;

  for(int32_t i = p->first; i < p->second; ++i) {
    SSqlObj* pSub = pSql->pSubs[i];
    SRetrieveSupport* pSupport = pSub->param;

    tscDebug("0x%"PRIx64" sub:0x%"PRIx64" launch subquery, orderOfSub:%d.", pSql->self, pSub->self, pSupport->subqueryIndex);
    tscBuildAndSendRequest(pSub, NULL);
  }

  tfree(p);
}

int32_t tscHandleMasterSTableQuery(SSqlObj *pSql) {
  SSqlRes *pRes = &pSql->res;
  SSqlCmd *pCmd = &pSql->cmd;

@@ -2556,13 +2576,33 @@ int32_t tscHandleMasterSTableQuery(SSqlObj *pSql) {
    doCleanupSubqueries(pSql, i);
    return pRes->code;
  }

  for(int32_t j = 0; j < pState->numOfSub; ++j) {
    SSqlObj* pSub = pSql->pSubs[j];
    SRetrieveSupport* pSupport = pSub->param;

    tscDebug("0x%"PRIx64" sub:0x%"PRIx64" launch subquery, orderOfSub:%d.", pSql->self, pSub->self, pSupport->subqueryIndex);
    tscBuildAndSendRequest(pSub, NULL);

  // concurrently sent the query requests.
  const int32_t MAX_REQUEST_PER_TASK = 8;

  int32_t numOfTasks = (pState->numOfSub + MAX_REQUEST_PER_TASK - 1)/MAX_REQUEST_PER_TASK;
  assert(numOfTasks >= 1);

  int32_t num = (pState->numOfSub/numOfTasks) + 1;
  tscDebug("0x%"PRIx64 " query will be sent by %d threads", pSql->self, numOfTasks);

  for(int32_t j = 0; j < numOfTasks; ++j) {
    SSchedMsg schedMsg = {0};
    schedMsg.fp = doSendQueryReqs;
    schedMsg.ahandle = (void*)pSql;

    schedMsg.thandle = NULL;
    SPair* p = calloc(1, sizeof(SPair));
    p->first = j * num;

    if (j == numOfTasks - 1) {
      p->second = pState->numOfSub;
    } else {
      p->second = (j + 1) * num;
    }

    schedMsg.msg = p;
    taosScheduleTask(tscQhandle, &schedMsg);
  }

  return TSDB_CODE_SUCCESS;

@@ -2443,6 +2443,19 @@ size_t tscNumOfExprs(SQueryInfo* pQueryInfo) {
  return taosArrayGetSize(pQueryInfo->exprList);
}

int32_t tscExprTopBottomIndex(SQueryInfo* pQueryInfo){
  size_t numOfExprs = tscNumOfExprs(pQueryInfo);
  for(int32_t i = 0; i < numOfExprs; ++i) {
    SExprInfo* pExpr = tscExprGet(pQueryInfo, i);
    if (pExpr == NULL)
      continue;
    if (pExpr->base.functionId == TSDB_FUNC_TOP || pExpr->base.functionId == TSDB_FUNC_BOTTOM) {
      return i;
    }
  }
  return -1;
}

// todo REFACTOR
void tscExprAddParams(SSqlExpr* pExpr, char* argument, int32_t type, int32_t bytes) {
  assert (pExpr != NULL || argument != NULL || bytes != 0);
File diff suppressed because it is too large

@@ -518,6 +518,7 @@ void tdAppendMemRowToDataCol(SMemRow row, STSchema *pSchema, SDataCols *pCols, b
  }
}

//TODO: refactor this function to eliminate additional memory copy
int tdMergeDataCols(SDataCols *target, SDataCols *source, int rowsToMerge, int *pOffset, bool forceSetNull) {
  ASSERT(rowsToMerge > 0 && rowsToMerge <= source->numOfRows);
  ASSERT(target->numOfCols == source->numOfCols);
|
|||
int32_t tsCompressMsgSize = -1;
|
||||
|
||||
/* denote if server needs to compress the retrieved column data before adding to the rpc response message body.
|
||||
* 0: disable column data compression
|
||||
* 1: enable column data compression
|
||||
* This option is default to disabled. Once enabled, compression will be conducted if any column has size more
|
||||
* than QUERY_COMP_THRESHOLD. Otherwise, no further compression is needed.
|
||||
* 0: all data are compressed
|
||||
* -1: all data are not compressed
|
||||
* other values: if any retrieved column size is greater than the tsCompressColData, all data will be compressed.
|
||||
*/
|
||||
int32_t tsCompressColData = 0;
|
||||
int32_t tsCompressColData = -1;
|
||||
|
||||
// client
|
||||
int32_t tsMaxSQLStringLen = TSDB_MAX_ALLOWED_SQL_LEN;
|
||||
|
@ -150,6 +149,7 @@ int32_t tsMaxVgroupsPerDb = 0;
|
|||
int32_t tsMinTablePerVnode = TSDB_TABLES_STEP;
|
||||
int32_t tsMaxTablePerVnode = TSDB_DEFAULT_TABLES;
|
||||
int32_t tsTableIncStepPerVnode = TSDB_TABLES_STEP;
|
||||
int32_t tsTsdbMetaCompactRatio = TSDB_META_COMPACT_RATIO;
|
||||
|
||||
// tsdb config
|
||||
// For backward compatibility
|
||||
|
@ -1005,10 +1005,10 @@ static void doInitGlobalConfig(void) {
|
|||
|
||||
cfg.option = "compressColData";
|
||||
cfg.ptr = &tsCompressColData;
|
||||
cfg.valType = TAOS_CFG_VTYPE_INT8;
|
||||
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW;
|
||||
cfg.minValue = 0;
|
||||
cfg.maxValue = 1;
|
||||
cfg.valType = TAOS_CFG_VTYPE_INT32;
|
||||
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_CLIENT | TSDB_CFG_CTYPE_B_SHOW;
|
||||
cfg.minValue = -1;
|
||||
cfg.maxValue = 100000000.0f;
|
||||
cfg.ptrLength = 0;
|
||||
cfg.unitType = TAOS_CFG_UTYPE_NONE;
|
||||
taosInitConfigOption(cfg);
|
||||
|
@ -1590,6 +1590,16 @@ static void doInitGlobalConfig(void) {
|
|||
cfg.unitType = TAOS_CFG_UTYPE_NONE;
|
||||
taosInitConfigOption(cfg);
|
||||
|
||||
cfg.option = "tsdbMetaCompactRatio";
|
||||
cfg.ptr = &tsTsdbMetaCompactRatio;
|
||||
cfg.valType = TAOS_CFG_VTYPE_INT32;
|
||||
cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG;
|
||||
cfg.minValue = 0;
|
||||
cfg.maxValue = 100;
|
||||
cfg.ptrLength = 0;
|
||||
cfg.unitType = TAOS_CFG_UTYPE_NONE;
|
||||
taosInitConfigOption(cfg);
|
||||
|
||||
assert(tsGlobalConfigNum <= TSDB_CFG_MAX_NUM);
|
||||
#ifdef TD_TSZ
|
||||
// lossy compress
|
||||
|
|
|

@@ -38,12 +38,12 @@ void tVariantCreate(tVariant *pVar, SStrToken *token) {

  switch (token->type) {
    case TSDB_DATA_TYPE_BOOL: {
      int32_t k = strncasecmp(token->z, "true", 4);
      if (k == 0) {
      if (strncasecmp(token->z, "true", 4) == 0) {
        pVar->i64 = TSDB_TRUE;
      } else {
        assert(strncasecmp(token->z, "false", 5) == 0);
      } else if (strncasecmp(token->z, "false", 5) == 0) {
        pVar->i64 = TSDB_FALSE;
      } else {
        return;
      }

      break;
@ -10,7 +10,8 @@ import sys
|
|||
_datetime_epoch = datetime.utcfromtimestamp(0)
|
||||
|
||||
def _is_not_none(obj):
|
||||
obj != None
|
||||
return obj != None
|
||||
|
||||
class TaosBind(ctypes.Structure):
|
||||
_fields_ = [
|
||||
("buffer_type", c_int),
|
||||
|
@@ -299,27 +300,14 @@ class TaosMultiBind(ctypes.Structure):
        self.buffer = cast(buffer, c_void_p)
        self.num = len(values)

    def binary(self, values):
    def _str_to_buffer(self, values):
        self.num = len(values)
        self.buffer = cast(c_char_p("".join(filter(_is_not_none, values)).encode("utf-8")), c_void_p)
        self.length = (c_int * len(values))(*[len(value) if value is not None else 0 for value in values])
        self.buffer_type = FieldType.C_BINARY
        self.is_null = cast((c_byte * self.num)(*[1 if v == None else 0 for v in values]), c_char_p)

    def timestamp(self, values, precision=PrecisionEnum.Milliseconds):
        try:
            buffer = cast(values, c_void_p)
        except:
            buffer_type = c_int64 * len(values)
            buffer = buffer_type(*[_datetime_to_timestamp(value, precision) for value in values])

        self.buffer_type = FieldType.C_TIMESTAMP
        self.buffer = cast(buffer, c_void_p)
        self.buffer_length = sizeof(c_int64)
        self.num = len(values)

    def nchar(self, values):
        # type: (list[str]) -> None
        is_null = [1 if v == None else 0 for v in values]
        self.is_null = cast((c_byte * self.num)(*is_null), c_char_p)

        if sum(is_null) == self.num:
            self.length = (c_int32 * len(values))(0 * self.num)
            return
        if sys.version_info < (3, 0):
            _bytes = [bytes(value) if value is not None else None for value in values]
            buffer_length = max(len(b) + 1 for b in _bytes if b is not None)
@@ -347,9 +335,26 @@ class TaosMultiBind(ctypes.Structure):
        )
        self.length = (c_int32 * len(values))(*[len(b) if b is not None else 0 for b in _bytes])
        self.buffer_length = buffer_length

    def binary(self, values):
        self.buffer_type = FieldType.C_BINARY
        self._str_to_buffer(values)

    def timestamp(self, values, precision=PrecisionEnum.Milliseconds):
        try:
            buffer = cast(values, c_void_p)
        except:
            buffer_type = c_int64 * len(values)
            buffer = buffer_type(*[_datetime_to_timestamp(value, precision) for value in values])

        self.buffer_type = FieldType.C_TIMESTAMP
        self.buffer = cast(buffer, c_void_p)
        self.buffer_length = sizeof(c_int64)
        self.num = len(values)
        self.is_null = cast((c_byte * self.num)(*[1 if v == None else 0 for v in values]), c_char_p)

    def nchar(self, values):
        # type: (list[str]) -> None
        self.buffer_type = FieldType.C_NCHAR
        self._str_to_buffer(values)

    def tinyint_unsigned(self, values):
        self.buffer_type = FieldType.C_TINYINT_UNSIGNED
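The refactor above routes both `binary` and `nchar` through one `_str_to_buffer` helper, which packs all non-`None` strings into a single contiguous buffer and records a per-value length array and null-flag array. A pure-Python sketch of what that helper computes (without the ctypes plumbing; note it uses UTF-8 byte lengths, an assumption for the non-ASCII case):

```python
# Pure-Python sketch of the _str_to_buffer layout: one contiguous UTF-8
# buffer, a per-value length array, and a per-value null-flag array.
def pack_strings(values):
    buf = b"".join(v.encode("utf-8") for v in values if v is not None)
    lengths = [len(v.encode("utf-8")) if v is not None else 0 for v in values]
    is_null = [1 if v is None else 0 for v in values]
    return buf, lengths, is_null
```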
@@ -49,7 +49,7 @@ def _load_taos():
    try:
        return load_func[platform.system()]()
    except:
        sys.exit("unsupported platform to TDengine connector")
        raise InterfaceError('unsupported platform or failed to load taos client library')


_libtaos = _load_taos()
@@ -3,6 +3,9 @@
"""Constants in TDengine python
"""

import ctypes, struct


class FieldType(object):
    """TDengine Field Types"""

@@ -33,8 +36,8 @@ class FieldType(object):
    C_INT_UNSIGNED_NULL = 4294967295
    C_BIGINT_NULL = -9223372036854775808
    C_BIGINT_UNSIGNED_NULL = 18446744073709551615
    C_FLOAT_NULL = float("nan")
    C_DOUBLE_NULL = float("nan")
    C_FLOAT_NULL = ctypes.c_float(struct.unpack("<f", b"\x00\x00\xf0\x7f")[0])
    C_DOUBLE_NULL = ctypes.c_double(struct.unpack("<d", b"\x00\x00\x00\x00\x00\xff\xff\x7f")[0])
    C_BINARY_NULL = bytearray([int("0xff", 16)])
    # Timestamp precision definition
    C_TIMESTAMP_MILLI = 0
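The constants change pins exact NaN bit patterns instead of the platform's `float("nan")`, so the NULL sentinel round-trips through ctypes with precisely the bytes the server expects. Both patterns (taken from the diff above) decode to NaN:

```python
import struct
import math

# Exact little-endian NULL sentinel bytes from the patched constants.
FLOAT_NULL_BYTES = b"\x00\x00\xf0\x7f"
DOUBLE_NULL_BYTES = b"\x00\x00\x00\x00\x00\xff\xff\x7f"

# Both bit patterns have an all-ones exponent and a nonzero mantissa,
# i.e. they are NaN values with a fixed payload.
float_null = struct.unpack("<f", FLOAT_NULL_BYTES)[0]
double_null = struct.unpack("<d", DOUBLE_NULL_BYTES)[0]
```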
@@ -0,0 +1,50 @@
from taos import *

conn = connect()

dbname = "pytest_taos_stmt_multi"
conn.execute("drop database if exists %s" % dbname)
conn.execute("create database if not exists %s" % dbname)
conn.select_db(dbname)

conn.execute(
    "create table if not exists log(ts timestamp, bo bool, nil tinyint, \
        ti tinyint, si smallint, ii int, bi bigint, tu tinyint unsigned, \
        su smallint unsigned, iu int unsigned, bu bigint unsigned, \
        ff float, dd double, bb binary(100), nn nchar(100), tt timestamp)",
)

stmt = conn.statement("insert into log values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")

params = new_multi_binds(16)
params[0].timestamp((1626861392589, 1626861392590, 1626861392591))
params[1].bool((True, None, False))
params[2].tinyint([-128, -128, None])  # -128 is tinyint null
params[3].tinyint([0, 127, None])
params[4].smallint([3, None, 2])
params[5].int([3, 4, None])
params[6].bigint([3, 4, None])
params[7].tinyint_unsigned([3, 4, None])
params[8].smallint_unsigned([3, 4, None])
params[9].int_unsigned([3, 4, None])
params[10].bigint_unsigned([3, 4, None])
params[11].float([3, None, 1])
params[12].double([3, None, 1.2])
params[13].binary(["abc", "dddafadfadfadfadfa", None])
# params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
params[14].nchar([None, None, None])
params[15].timestamp([None, None, 1626861392591])
stmt.bind_param_batch(params)
stmt.execute()

result = stmt.use_result()
assert result.affected_rows == 3
result.close()

result = conn.query("select * from log")
for row in result:
    print(row)
result.close()
stmt.close()
conn.close()
@@ -88,6 +88,8 @@ extern const int32_t TYPE_BYTES[15];
#define TSDB_DEFAULT_PASS "taosdata"
#endif

#define SHELL_MAX_PASSWORD_LEN 20

#define TSDB_TRUE 1
#define TSDB_FALSE 0
#define TSDB_OK 0

@@ -275,6 +277,7 @@ do { \
#define TSDB_MAX_TABLES 10000000
#define TSDB_DEFAULT_TABLES 1000000
#define TSDB_TABLES_STEP 1000
#define TSDB_META_COMPACT_RATIO 0  // disable tsdb meta compact by default

#define TSDB_MIN_DAYS_PER_FILE 1
#define TSDB_MAX_DAYS_PER_FILE 3650
@@ -25,7 +25,6 @@
#define MAX_USERNAME_SIZE 64
#define MAX_DBNAME_SIZE   64
#define MAX_IP_SIZE       20
#define MAX_PASSWORD_SIZE 20
#define MAX_HISTORY_SIZE  1000
#define MAX_COMMAND_SIZE  1048586
#define HISTORY_FILE      ".taos_history"
@@ -66,7 +66,7 @@ void printHelp() {

char DARWINCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n"
                              "Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n";
char g_password[MAX_PASSWORD_SIZE];
char g_password[SHELL_MAX_PASSWORD_LEN];

void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
  wordexp_t full_path;
@@ -81,19 +81,25 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
      }
    }
    // for password
    else if (strncmp(argv[i], "-p", 2) == 0) {
    else if ((strncmp(argv[i], "-p", 2) == 0)
             || (strncmp(argv[i], "--password", 10) == 0)) {
      strcpy(tsOsName, "Darwin");
      printf(DARWINCLIENT_VERSION, tsOsName, taos_get_client_info());
      if (strlen(argv[i]) == 2) {
      if ((strlen(argv[i]) == 2)
          || (strncmp(argv[i], "--password", 10) == 0)) {
        printf("Enter password: ");
        taosSetConsoleEcho(false);
        if (scanf("%s", g_password) > 1) {
          fprintf(stderr, "password read error\n");
        }
        taosSetConsoleEcho(true);
        getchar();
      } else {
        tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
        tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
      }
      arguments->password = g_password;
      strcpy(argv[i], "");
      argc -= 1;
    }
    // for management port
    else if (strcmp(argv[i], "-P") == 0) {
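After this patch all three platform shells accept both `-p` and `--password`, use an inline value when one follows `-p`, and otherwise prompt with console echo disabled. A hedged Python sketch of that shared decision (function name hypothetical; `getpass` plays the role of `taosSetConsoleEcho(false)` + `scanf`):

```python
# Sketch of the shared -p/--password logic; the `prompt` parameter is an
# injection point for testing (real use falls back to getpass.getpass).
def resolve_password(arg, prompt=None):
    if arg == "-p" or arg == "--password":
        # No inline value: prompt with echo disabled, like the C shells do.
        import getpass
        return (prompt or getpass.getpass)("Enter password: ")
    if arg.startswith("-p"):
        # Inline value "-pSECRET"; the C code truncates via tstrncpy to
        # SHELL_MAX_PASSWORD_LEN, omitted here for brevity.
        return arg[2:]
    return None
```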
@@ -254,8 +254,12 @@ int32_t shellRunCommand(TAOS* con, char* command) {
    }

    if (c == '\\') {
      esc = true;
      continue;
      if (quote != 0 && (*command == '_' || *command == '\\')) {
        // DO nothing
      } else {
        esc = true;
        continue;
      }
    }

    if (quote == c) {
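The patched `shellRunCommand` keeps `\_` and `\\` verbatim while inside a quoted region (so SQL `LIKE` patterns survive) and only treats the backslash as an escape elsewhere. A standalone Python sketch of that quote-aware rule (a simplification of the C state machine, which also tracks `esc` across iterations):

```python
# Quote-aware unescaping, mirroring the patched rule: inside quotes,
# "\_" and "\\" are kept literally; otherwise the backslash escapes.
def unescape(cmd):
    out = []
    quote = ""
    i = 0
    while i < len(cmd):
        c = cmd[i]
        if c == "\\" and i + 1 < len(cmd):
            nxt = cmd[i + 1]
            if quote and nxt in ("_", "\\"):
                out.append(c)    # keep the backslash ("DO nothing" branch)
                out.append(nxt)
            else:
                out.append(nxt)  # plain escape: drop the backslash
            i += 2
            continue
        if c in ("'", '"'):
            if quote == c:
                quote = ""
            elif not quote:
                quote = c
        out.append(c)
        i += 1
    return "".join(out)
```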
@@ -47,7 +47,7 @@ static struct argp_option options[] = {
  {"thread", 'T', "THREADNUM", 0, "Number of threads when using multi-thread to import data."},
  {"check", 'k', "CHECK", 0, "Check tables."},
  {"database", 'd', "DATABASE", 0, "Database to use when connecting to the server."},
  {"timezone", 't', "TIMEZONE", 0, "Time zone of the shell, default is local."},
  {"timezone", 'z', "TIMEZONE", 0, "Time zone of the shell, default is local."},
  {"netrole", 'n', "NETROLE", 0, "Net role when network connectivity test, default is startup, options: client|server|rpc|startup|sync|speen|fqdn."},
  {"pktlen", 'l', "PKTLEN", 0, "Packet length used for net test, default is 1000 bytes."},
  {"pktnum", 'N', "PKTNUM", 0, "Packet numbers used for net test, default is 100."},

@@ -76,7 +76,7 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
    }

    break;
  case 't':
  case 'z':
    arguments->timezone = arg;
    break;
  case 'u':
@@ -173,22 +173,29 @@ static struct argp argp = {options, parse_opt, args_doc, doc};

char LINUXCLIENT_VERSION[] = "Welcome to the TDengine shell from %s, Client Version:%s\n"
                             "Copyright (c) 2020 by TAOS Data, Inc. All rights reserved.\n\n";
char g_password[MAX_PASSWORD_SIZE];
char g_password[SHELL_MAX_PASSWORD_LEN];

static void parse_password(
static void parse_args(
    int argc, char *argv[], SShellArguments *arguments) {
  for (int i = 1; i < argc; i++) {
    if (strncmp(argv[i], "-p", 2) == 0) {
    if ((strncmp(argv[i], "-p", 2) == 0)
        || (strncmp(argv[i], "--password", 10) == 0)) {
      strcpy(tsOsName, "Linux");
      printf(LINUXCLIENT_VERSION, tsOsName, taos_get_client_info());
      if (strlen(argv[i]) == 2) {
      if ((strlen(argv[i]) == 2)
          || (strncmp(argv[i], "--password", 10) == 0)) {
        printf("Enter password: ");
        taosSetConsoleEcho(false);
        if (scanf("%20s", g_password) > 1) {
          fprintf(stderr, "password reading error\n");
        }
        getchar();
        taosSetConsoleEcho(true);
        if (EOF == getchar()) {
          fprintf(stderr, "getchar() return EOF\n");
        }
      } else {
        tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
        tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
        strcpy(argv[i], "-p");
      }
      arguments->password = g_password;
      arguments->is_use_passwd = true;

@@ -203,7 +210,7 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
  argp_program_version = verType;

  if (argc > 1) {
    parse_password(argc, argv, arguments);
    parse_args(argc, argv, arguments);
  }

  argp_parse(&argp, argc, argv, 0, 0, arguments);
@@ -68,7 +68,7 @@ void printHelp() {
  exit(EXIT_SUCCESS);
}

char g_password[MAX_PASSWORD_SIZE];
char g_password[SHELL_MAX_PASSWORD_LEN];

void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
  for (int i = 1; i < argc; i++) {
@@ -82,20 +82,26 @@ void shellParseArgument(int argc, char *argv[], SShellArguments *arguments) {
      }
    }
    // for password
    else if (strncmp(argv[i], "-p", 2) == 0) {
    else if ((strncmp(argv[i], "-p", 2) == 0)
             || (strncmp(argv[i], "--password", 10) == 0)) {
      arguments->is_use_passwd = true;
      strcpy(tsOsName, "Windows");
      printf(WINCLIENT_VERSION, tsOsName, taos_get_client_info());
      if (strlen(argv[i]) == 2) {
      if ((strlen(argv[i]) == 2)
          || (strncmp(argv[i], "--password", 10) == 0)) {
        printf("Enter password: ");
        taosSetConsoleEcho(false);
        if (scanf("%s", g_password) > 1) {
          fprintf(stderr, "password read error!\n");
        }
        taosSetConsoleEcho(true);
        getchar();
      } else {
        tstrncpy(g_password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
        tstrncpy(g_password, (char *)(argv[i] + 2), SHELL_MAX_PASSWORD_LEN);
      }
      arguments->password = g_password;
      strcpy(argv[i], "");
      argc -= 1;
    }
    // for management port
    else if (strcmp(argv[i], "-P") == 0) {
File diff suppressed because it is too large
@@ -62,6 +62,20 @@ typedef struct {
#define errorPrint(fmt, ...) \
  do { fprintf(stderr, "\033[31m"); fprintf(stderr, "ERROR: "fmt, __VA_ARGS__); fprintf(stderr, "\033[0m"); } while(0)

static bool isStringNumber(char *input)
{
  int len = strlen(input);
  if (0 == len) {
    return false;
  }

  for (int i = 0; i < len; i++) {
    if (!isdigit(input[i]))
      return false;
  }

  return true;
}

// -------------------------- SHOW DATABASE INTERFACE-----------------------
enum _show_db_index {
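The new `isStringNumber` helper accepts only non-empty, all-digit strings before taosdump hands the value to `atoi`. The Python near-equivalent is a one-liner; an explicit range check is used because `str.isdigit` also accepts non-ASCII digit characters, which the C `isdigit` would reject:

```python
# Near-equivalent of taosdump's isStringNumber: non-empty, ASCII digits only.
def is_string_number(s):
    return len(s) > 0 and all("0" <= ch <= "9" for ch in s)
```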
@@ -243,19 +257,15 @@ static struct argp_option options[] = {
  {"table-batch", 't', "TABLE_BATCH", 0, "Number of table dumpout into one output file. Default is 1.", 3},
  {"thread_num", 'T', "THREAD_NUM", 0, "Number of thread for dump in file. Default is 5.", 3},
  {"debug", 'g', 0, 0, "Print debug info.", 8},
  {"verbose", 'b', 0, 0, "Print verbose debug info.", 9},
  {"performanceprint", 'm', 0, 0, "Print performance debug info.", 10},
  {0}
};

#define MAX_PASSWORD_SIZE 20

/* Used by main to communicate with parse_opt. */
typedef struct arguments {
  // connection option
  char *host;
  char *user;
  char password[MAX_PASSWORD_SIZE];
  char password[SHELL_MAX_PASSWORD_LEN];
  uint16_t port;
  char cversion[12];
  uint16_t mysqlFlag;
@@ -432,7 +442,6 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
      break;
    // dump unit option
    case 'A':
      g_args.all_databases = true;
      break;
    case 'D':
      g_args.databases = true;

@@ -477,6 +486,10 @@ static error_t parse_opt(int key, char *arg, struct argp_state *state) {
      g_args.table_batch = atoi(arg);
      break;
    case 'T':
      if (!isStringNumber(arg)) {
        errorPrint("%s", "\n\t-T need a number following!\n");
        exit(EXIT_FAILURE);
      }
      g_args.thread_num = atoi(arg);
      break;
    case OPT_ABORT:
@@ -555,11 +568,14 @@ static void parse_precision_first(
    }
  }

static void parse_password(
static void parse_args(
    int argc, char *argv[], SArguments *arguments) {

  for (int i = 1; i < argc; i++) {
    if (strncmp(argv[i], "-p", 2) == 0) {
      if (strlen(argv[i]) == 2) {
    if ((strncmp(argv[i], "-p", 2) == 0)
        || (strncmp(argv[i], "--password", 10) == 0)) {
      if ((strlen(argv[i]) == 2)
          || (strncmp(argv[i], "--password", 10) == 0)) {
        printf("Enter password: ");
        taosSetConsoleEcho(false);
        if(scanf("%20s", arguments->password) > 1) {

@@ -567,10 +583,22 @@ static void parse_args(
      }
      taosSetConsoleEcho(true);
    } else {
      tstrncpy(arguments->password, (char *)(argv[i] + 2), MAX_PASSWORD_SIZE);
      tstrncpy(arguments->password, (char *)(argv[i] + 2),
          SHELL_MAX_PASSWORD_LEN);
      strcpy(argv[i], "-p");
    }
    argv[i] = "";
    } else if (strcmp(argv[i], "-gg") == 0) {
      arguments->verbose_print = true;
      strcpy(argv[i], "");
    } else if (strcmp(argv[i], "-PP") == 0) {
      arguments->performance_print = true;
      strcpy(argv[i], "");
    } else if (strcmp(argv[i], "-A") == 0) {
      g_args.all_databases = true;
    } else {
      continue;
    }

  }
}


@@ -639,7 +667,7 @@ int main(int argc, char *argv[]) {
  if (argc > 1) {
    parse_precision_first(argc, argv, &g_args);
    parse_timestamp(argc, argv, &g_args);
    parse_password(argc, argv, &g_args);
    parse_args(argc, argv, &g_args);
  }

  argp_parse(&argp, argc, argv, 0, 0, &g_args);
@@ -63,12 +63,12 @@ int taosSetConsoleEcho(bool on)
  }

  if (on)
    term.c_lflag|=ECHOFLAGS;
    term.c_lflag |= ECHOFLAGS;
  else
    term.c_lflag &=~ECHOFLAGS;
    term.c_lflag &= ~ECHOFLAGS;

  err = tcsetattr(STDIN_FILENO,TCSAFLUSH,&term);
  if (err == -1 && err == EINTR) {
  err = tcsetattr(STDIN_FILENO, TCSAFLUSH, &term);
  if (err == -1 || err == EINTR) {
    perror("Cannot set the attribution of the terminal");
    return -1;
  }
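The real fix in this hunk is the error test: `err == -1 && err == EINTR` can never be true, so `tcsetattr` failures were silently ignored; `||` makes the check meaningful. The flag manipulation itself is plain OR / AND-NOT bit math, sketched here with Python's `termios` names (the composition of `ECHOFLAGS` is an assumption; the C source defines its own macro):

```python
import termios

# Assumed composition of ECHOFLAGS; the C file defines its own macro.
ECHOFLAGS = termios.ECHO | termios.ECHONL

def set_echo_flags(lflag, on):
    # Mirrors: term.c_lflag |= ECHOFLAGS  /  term.c_lflag &= ~ECHOFLAGS
    return lflag | ECHOFLAGS if on else lflag & ~ECHOFLAGS
```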
@@ -150,6 +150,7 @@ typedef struct HttpContext {
  char ipstr[22];
  char user[TSDB_USER_LEN];  // parsed from auth token or login message
  char pass[HTTP_PASSWORD_LEN];
  char db[/*TSDB_ACCT_ID_LEN + */TSDB_DB_NAME_LEN];
  TAOS *taos;
  void *ppContext;
  HttpSession *session;
@@ -22,12 +22,12 @@
#include "httpResp.h"
#include "httpSql.h"

#define REST_ROOT_URL_POS 0
#define REST_ACTION_URL_POS 1
#define REST_USER_URL_POS 2
#define REST_PASS_URL_POS 3
#define REST_ROOT_URL_POS 0
#define REST_ACTION_URL_POS 1
#define REST_USER_USEDB_URL_POS 2
#define REST_PASS_URL_POS 3

void restInitHandle(HttpServer* pServer);
bool restProcessRequest(struct HttpContext* pContext);

#endif
@@ -62,11 +62,11 @@ void restInitHandle(HttpServer* pServer) {

bool restGetUserFromUrl(HttpContext* pContext) {
  HttpParser* pParser = pContext->parser;
  if (pParser->path[REST_USER_URL_POS].pos >= TSDB_USER_LEN || pParser->path[REST_USER_URL_POS].pos <= 0) {
  if (pParser->path[REST_USER_USEDB_URL_POS].pos >= TSDB_USER_LEN || pParser->path[REST_USER_USEDB_URL_POS].pos <= 0) {
    return false;
  }

  tstrncpy(pContext->user, pParser->path[REST_USER_URL_POS].str, TSDB_USER_LEN);
  tstrncpy(pContext->user, pParser->path[REST_USER_USEDB_URL_POS].str, TSDB_USER_LEN);
  return true;
}
@@ -107,6 +107,16 @@ bool restProcessSqlRequest(HttpContext* pContext, int32_t timestampFmt) {
  HttpSqlCmd* cmd = &(pContext->singleCmd);
  cmd->nativSql = sql;

  /* find if there is db_name in url */
  pContext->db[0] = '\0';

  HttpString *path = &pContext->parser->path[REST_USER_USEDB_URL_POS];
  if (path->pos > 0 && !(strlen(sql) > 4 && (sql[0] == 'u' || sql[0] == 'U') &&
      (sql[1] == 's' || sql[1] == 'S') && (sql[2] == 'e' || sql[2] == 'E') && sql[3] == ' '))
  {
    snprintf(pContext->db, /*TSDB_ACCT_ID_LEN + */TSDB_DB_NAME_LEN, "%s", path->str);
  }

  pContext->reqType = HTTP_REQTYPE_SINGLE_SQL;
  if (timestampFmt == REST_TIMESTAMP_FMT_LOCAL_STRING) {
    pContext->encodeMethod = &restEncodeSqlLocalTimeStringMethod;
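The REST handler now treats the third URL segment as a default database, unless the statement itself already begins with a case-insensitive `use ` (in which case the URL segment is ignored). The character-by-character C test reduces to this prefix check:

```python
# Sketch of the new REST rule: apply the db segment from the URL unless the
# SQL already begins with a case-insensitive "use " statement.
def pick_db(url_db, sql):
    starts_with_use = len(sql) > 4 and sql[:3].lower() == "use" and sql[3] == " "
    if url_db and not starts_with_use:
        return url_db
    return ""
```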
@@ -419,6 +419,11 @@ void httpProcessRequest(HttpContext *pContext) {
        &(pContext->taos));
    httpDebug("context:%p, fd:%d, user:%s, try connect tdengine, taos:%p", pContext, pContext->fd, pContext->user,
              pContext->taos);

    if (pContext->taos != NULL) {
      STscObj *pObj = pContext->taos;
      pObj->from = TAOS_REQ_FROM_HTTP;
    }
  } else {
    httpExecCmd(pContext);
  }
@@ -43,9 +43,7 @@ typedef int32_t (*__block_search_fn_t)(char* data, int32_t num, int64_t key, int

#define GET_NUM_OF_RESULTS(_r) (((_r)->outputBuf) == NULL? 0:((_r)->outputBuf)->info.rows)

//TODO: may need to fine tune this threshold
#define QUERY_COMP_THRESHOLD (1024 * 512)
#define NEEDTO_COMPRESS_QUERY(size) ((size) > QUERY_COMP_THRESHOLD ? 1 : 0)
#define NEEDTO_COMPRESS_QUERY(size) ((size) > tsCompressColData? 1 : 0)

enum {
  // when query starts to execute, this status will set

@@ -614,6 +612,7 @@ int32_t getNumOfResult(SQueryRuntimeEnv *pRuntimeEnv, SQLFunctionCtx* pCtx, int3
void finalizeQueryResult(SOperatorInfo* pOperator, SQLFunctionCtx* pCtx, SResultRowInfo* pResultRowInfo, int32_t* rowCellInfoOffset);
void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOfInputRows);
void clearOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity);
void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput);

void freeParam(SQueryParam *param);
int32_t convertQueryMsg(SQueryTableMsg *pQueryMsg, SQueryParam* param);
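The macro change above replaces the hard-coded 512 KB threshold with the runtime `tsCompressColData` setting, so operators can tune when result blocks get compressed. The decision itself is trivial:

```python
# NEEDTO_COMPRESS_QUERY after the patch: compress only when the result block
# exceeds the configured threshold instead of a fixed 512 KB constant.
def need_to_compress(size, ts_compress_col_data):
    return 1 if size > ts_compress_col_data else 0
```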
@@ -3603,7 +3603,7 @@ void setDefaultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SOptrBasicInfo *pInfo, i
    // set the timestamp output buffer for top/bottom/diff query
    int32_t fid = pCtx[i].functionId;
    if (fid == TSDB_FUNC_TOP || fid == TSDB_FUNC_BOTTOM || fid == TSDB_FUNC_DIFF || fid == TSDB_FUNC_DERIVATIVE) {
      pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
      if (i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
    }
  }

@@ -3631,14 +3631,15 @@ void updateOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity, int32_t numOf
    }
  }

  for (int32_t i = 0; i < pDataBlock->info.numOfCols; ++i) {
    SColumnInfoData *pColInfo = taosArrayGet(pDataBlock->pDataBlock, i);
    pBInfo->pCtx[i].pOutput = pColInfo->pData + pColInfo->info.bytes * pDataBlock->info.rows;

    // re-estabilish output buffer pointer.
    int32_t functionId = pBInfo->pCtx[i].functionId;
    if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
      pBInfo->pCtx[i].ptsOutputBuf = pBInfo->pCtx[i-1].pOutput;
    if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE){
      if (i > 0) pBInfo->pCtx[i].ptsOutputBuf = pBInfo->pCtx[i-1].pOutput;
    }
  }
}

@@ -3656,7 +3657,35 @@ void clearOutputBuf(SOptrBasicInfo* pBInfo, int32_t *bufCapacity) {
  }
}

void copyTsColoum(SSDataBlock* pRes, SQLFunctionCtx* pCtx, int32_t numOfOutput) {
  bool needCopyTs = false;
  int32_t tsNum = 0;
  char *src = NULL;
  for (int32_t i = 0; i < numOfOutput; i++) {
    int32_t functionId = pCtx[i].functionId;
    if (functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
      needCopyTs = true;
      if (i > 0 && pCtx[i-1].functionId == TSDB_FUNC_TS_DUMMY){
        SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i - 1);  // find ts data
        src = pColRes->pData;
      }
    }else if(functionId == TSDB_FUNC_TS_DUMMY) {
      tsNum++;
    }
  }

  if (!needCopyTs) return;
  if (tsNum < 2) return;
  if (src == NULL) return;

  for (int32_t i = 0; i < numOfOutput; i++) {
    int32_t functionId = pCtx[i].functionId;
    if(functionId == TSDB_FUNC_TS_DUMMY) {
      SColumnInfoData* pColRes = taosArrayGet(pRes->pDataBlock, i);
      memcpy(pColRes->pData, src, pColRes->info.bytes * pRes->info.rows);
    }
  }
}

void initCtxOutputBuffer(SQLFunctionCtx* pCtx, int32_t size) {
  for (int32_t j = 0; j < size; ++j) {
@@ -3838,7 +3867,7 @@ void setResultRowOutputBufInitCtx(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pRe
  }

  if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF) {
    pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
    if(i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
  }

  if (!pResInfo->initialized) {

@@ -3899,7 +3928,7 @@ void setResultOutputBuf(SQueryRuntimeEnv *pRuntimeEnv, SResultRow *pResult, SQLF

  int32_t functionId = pCtx[i].functionId;
  if (functionId == TSDB_FUNC_TOP || functionId == TSDB_FUNC_BOTTOM || functionId == TSDB_FUNC_DIFF || functionId == TSDB_FUNC_DERIVATIVE) {
    pCtx[i].ptsOutputBuf = pCtx[0].pOutput;
    if(i > 0) pCtx[i].ptsOutputBuf = pCtx[i-1].pOutput;
  }

  /*

@@ -4277,6 +4306,7 @@ static void doCopyQueryResultToMsg(SQInfo *pQInfo, int32_t numOfRows, char *data
  }

  qDebug("QInfo:0x%"PRIx64" set %d subscribe info", pQInfo->qId, total);

  // Check if query is completed or not for stable query or normal table query respectively.
  if (Q_STATUS_EQUAL(pRuntimeEnv->status, QUERY_COMPLETED) && pRuntimeEnv->proot->status == OP_EXEC_DONE) {
    setQueryStatus(pRuntimeEnv, QUERY_OVER);

@@ -5719,6 +5749,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {

  pRes->info.rows = getNumOfResult(pRuntimeEnv, pInfo->pCtx, pOperator->numOfOutput);
  if (pRes->info.rows >= pRuntimeEnv->resultInfo.threshold) {
    copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
    clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
    return pRes;
  }

@@ -5744,8 +5775,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
  if (*newgroup) {
    if (pRes->info.rows > 0) {
      pProjectInfo->existDataBlock = pBlock;
      clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
      return pInfo->pRes;
      break;
    } else {  // init output buffer for a new group data
      for (int32_t j = 0; j < pOperator->numOfOutput; ++j) {
        aAggs[pInfo->pCtx[j].functionId].xFinalize(&pInfo->pCtx[j]);

@@ -5775,7 +5805,7 @@ static SSDataBlock* doProjectOperation(void* param, bool* newgroup) {
      break;
    }
  }

  copyTsColoum(pRes, pInfo->pCtx, pOperator->numOfOutput);
  clearNumOfRes(pInfo->pCtx, pOperator->numOfOutput);
  return (pInfo->pRes->info.rows > 0)? pInfo->pRes:NULL;
}
@@ -7091,6 +7121,10 @@ static SSDataBlock* doTagScan(void* param, bool* newgroup) {
    qDebug("QInfo:0x%"PRIx64" create tag values results completed, rows:%d", GET_QID(pRuntimeEnv), count);
  }

  if (pOperator->status == OP_EXEC_DONE) {
    setQueryStatus(pOperator->pRuntimeEnv, QUERY_COMPLETED);
  }

  pRes->info.rows = count;
  return (pRes->info.rows == 0)? NULL:pInfo->pRes;
}
@@ -357,7 +357,7 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co
  }

  (*pRsp)->precision = htons(pQueryAttr->precision);
  (*pRsp)->compressed = (int8_t)(tsCompressColData && checkNeedToCompressQueryCol(pQInfo));
  (*pRsp)->compressed = (int8_t)((tsCompressColData != -1) && checkNeedToCompressQueryCol(pQInfo));

  if (GET_NUM_OF_RESULTS(&(pQInfo->runtimeEnv)) > 0 && pQInfo->code == TSDB_CODE_SUCCESS) {
    doDumpQueryResult(pQInfo, (*pRsp)->data, (*pRsp)->compressed, &compLen);

@@ -367,8 +367,12 @@ int32_t qDumpRetrieveResult(qinfo_t qinfo, SRetrieveTableRsp **pRsp, int32_t *co

  if ((*pRsp)->compressed && compLen != 0) {
    int32_t numOfCols = pQueryAttr->pExpr2 ? pQueryAttr->numOfExpr2 : pQueryAttr->numOfOutput;
    *contLen = *contLen - pQueryAttr->resultRowSize * s + compLen + numOfCols * sizeof(int32_t);
    int32_t origSize = pQueryAttr->resultRowSize * s;
    int32_t compSize = compLen + numOfCols * sizeof(int32_t);
    *contLen = *contLen - origSize + compSize;
    *pRsp = (SRetrieveTableRsp *)rpcReallocCont(*pRsp, *contLen);
    qDebug("QInfo:0x%"PRIx64" compress col data, uncompressed size:%d, compressed size:%d, ratio:%.2f",
           pQInfo->qId, origSize, compSize, (float)origSize / (float)compSize);
  }
  (*pRsp)->compLen = htonl(compLen);
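The rewritten size bookkeeping above subtracts the uncompressed payload (`resultRowSize * rows`) from the message length and adds the compressed payload plus one 4-byte length word per column. Numerically:

```python
# contLen adjustment from qDumpRetrieveResult: swap the raw payload size for
# the compressed size plus a 4-byte per-column length header.
SIZEOF_INT32 = 4

def adjusted_cont_len(cont_len, result_row_size, rows, comp_len, num_cols):
    orig_size = result_row_size * rows
    comp_size = comp_len + num_cols * SIZEOF_INT32
    return cont_len - orig_size + comp_size
```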
@@ -45,8 +45,9 @@ typedef struct {
typedef struct {
  pthread_rwlock_t lock;

  SFSStatus* cstatus;  // current status
  SHashObj*  metaCache;  // meta cache
  SFSStatus* cstatus;        // current status
  SHashObj*  metaCache;      // meta cache
  SHashObj*  metaCacheComp;  // meta cache for compact
  bool intxn;
  SFSStatus* nstatus;  // new status
} STsdbFS;
@@ -14,6 +14,8 @@
 */
#include "tsdbint.h"

extern int32_t tsTsdbMetaCompactRatio;

#define TSDB_MAX_SUBBLOCKS 8
static FORCE_INLINE int TSDB_KEY_FID(TSKEY key, int32_t days, int8_t precision) {
  if (key < 0) {

@@ -55,8 +57,9 @@ typedef struct {
#define TSDB_COMMIT_TXN_VERSION(ch) FS_TXN_VERSION(REPO_FS(TSDB_COMMIT_REPO(ch)))

static int  tsdbCommitMeta(STsdbRepo *pRepo);
static int  tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen);
static int  tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact);
static int  tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid);
static int  tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile);
static int  tsdbCommitTSData(STsdbRepo *pRepo);
static void tsdbStartCommit(STsdbRepo *pRepo);
static void tsdbEndCommit(STsdbRepo *pRepo, int eno);

@@ -261,6 +264,35 @@ int tsdbWriteBlockIdx(SDFile *pHeadf, SArray *pIdxA, void **ppBuf) {


// =================== Commit Meta Data
static int tsdbInitCommitMetaFile(STsdbRepo *pRepo, SMFile* pMf, bool open) {
  STsdbFS * pfs = REPO_FS(pRepo);
  SMFile *  pOMFile = pfs->cstatus->pmf;
  SDiskID   did;

  // Create/Open a meta file or open the existing file
  if (pOMFile == NULL) {
    // Create a new meta file
    did.level = TFS_PRIMARY_LEVEL;
    did.id = TFS_PRIMARY_ID;
    tsdbInitMFile(pMf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));

    if (open && tsdbCreateMFile(pMf, true) < 0) {
      tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
      return -1;
    }

    tsdbInfo("vgId:%d meta file %s is created to commit", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMf));
  } else {
    tsdbInitMFileEx(pMf, pOMFile);
    if (open && tsdbOpenMFile(pMf, O_WRONLY) < 0) {
      tsdbError("vgId:%d failed to open META file since %s", REPO_ID(pRepo), tstrerror(terrno));
      return -1;
    }
  }

  return 0;
}

static int tsdbCommitMeta(STsdbRepo *pRepo) {
  STsdbFS *  pfs = REPO_FS(pRepo);
  SMemTable *pMem = pRepo->imem;

@@ -269,34 +301,25 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
  SActObj *  pAct = NULL;
  SActCont * pCont = NULL;
  SListNode *pNode = NULL;
  SDiskID    did;

  ASSERT(pOMFile != NULL || listNEles(pMem->actList) > 0);

  if (listNEles(pMem->actList) <= 0) {
    // no meta data to commit, just keep the old meta file
    tsdbUpdateMFile(pfs, pOMFile);
    if (tsTsdbMetaCompactRatio > 0) {
      if (tsdbInitCommitMetaFile(pRepo, &mf, false) < 0) {
        return -1;
      }
      int ret = tsdbCompactMetaFile(pRepo, pfs, &mf);
      if (ret < 0) tsdbError("compact meta file error");

      return ret;
    }
    return 0;
  } else {
    // Create/Open a meta file or open the existing file
    if (pOMFile == NULL) {
      // Create a new meta file
      did.level = TFS_PRIMARY_LEVEL;
      did.id = TFS_PRIMARY_ID;
      tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)));

      if (tsdbCreateMFile(&mf, true) < 0) {
        tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
        return -1;
      }

      tsdbInfo("vgId:%d meta file %s is created to commit", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf));
    } else {
      tsdbInitMFileEx(&mf, pOMFile);
      if (tsdbOpenMFile(&mf, O_WRONLY) < 0) {
        tsdbError("vgId:%d failed to open META file since %s", REPO_ID(pRepo), tstrerror(terrno));
        return -1;
      }
    if (tsdbInitCommitMetaFile(pRepo, &mf, true) < 0) {
      return -1;
    }
  }

@@ -305,7 +328,7 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
    pAct = (SActObj *)pNode->data;
    if (pAct->act == TSDB_UPDATE_META) {
      pCont = (SActCont *)POINTER_SHIFT(pAct, sizeof(SActObj));
      if (tsdbUpdateMetaRecord(pfs, &mf, pAct->uid, (void *)(pCont->cont), pCont->len) < 0) {
      if (tsdbUpdateMetaRecord(pfs, &mf, pAct->uid, (void *)(pCont->cont), pCont->len, false) < 0) {
        tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pAct->uid,
                  tstrerror(terrno));
        tsdbCloseMFile(&mf);

@@ -338,6 +361,10 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
  tsdbCloseMFile(&mf);
  tsdbUpdateMFile(pfs, &mf);

  if (tsTsdbMetaCompactRatio > 0 && tsdbCompactMetaFile(pRepo, pfs, &mf) < 0) {
    tsdbError("compact meta file error");
  }

  return 0;
}

@@ -375,7 +402,7 @@ void tsdbGetRtnSnap(STsdbRepo *pRepo, SRtn *pRtn) {
           pRtn->minFid, pRtn->midFid, pRtn->maxFid);
}

static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen) {
static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void *cont, int contLen, bool compact) {
  char      buf[64] = "\0";
  void *    pBuf = buf;
  SKVRecord rInfo;

@@ -401,13 +428,18 @@ static int tsdbUpdateMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid, void
|
|||
}
|
||||
|
||||
tsdbUpdateMFileMagic(pMFile, POINTER_SHIFT(cont, contLen - sizeof(TSCKSUM)));
|
||||
SKVRecord *pRecord = taosHashGet(pfs->metaCache, (void *)&uid, sizeof(uid));
|
||||
|
||||
SHashObj* cache = compact ? pfs->metaCacheComp : pfs->metaCache;
|
||||
|
||||
pMFile->info.nRecords++;
|
||||
|
||||
SKVRecord *pRecord = taosHashGet(cache, (void *)&uid, sizeof(uid));
|
||||
if (pRecord != NULL) {
|
||||
pMFile->info.tombSize += (pRecord->size + sizeof(SKVRecord));
|
||||
} else {
|
||||
pMFile->info.nRecords++;
|
||||
}
|
||||
taosHashPut(pfs->metaCache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo));
|
||||
taosHashPut(cache, (void *)(&uid), sizeof(uid), (void *)(&rInfo), sizeof(rInfo));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -442,6 +474,129 @@ static int tsdbDropMetaRecord(STsdbFS *pfs, SMFile *pMFile, uint64_t uid) {
|
|||
return 0;
|
||||
}
|
||||
|
||||
static int tsdbCompactMetaFile(STsdbRepo *pRepo, STsdbFS *pfs, SMFile *pMFile) {
|
||||
float delPercent = (float)(pMFile->info.nDels) / (float)(pMFile->info.nRecords);
|
||||
float tombPercent = (float)(pMFile->info.tombSize) / (float)(pMFile->info.size);
|
||||
float compactRatio = (float)(tsTsdbMetaCompactRatio)/100;
|
||||
|
||||
if (delPercent < compactRatio && tombPercent < compactRatio) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (tsdbOpenMFile(pMFile, O_RDONLY) < 0) {
|
||||
tsdbError("open meta file %s compact fail", pMFile->f.rname);
|
||||
return -1;
|
||||
}
|
||||
|
||||
tsdbInfo("begin compact tsdb meta file, ratio:%d, nDels:%" PRId64 ",nRecords:%" PRId64 ",tombSize:%" PRId64 ",size:%" PRId64,
|
||||
tsTsdbMetaCompactRatio, pMFile->info.nDels,pMFile->info.nRecords,pMFile->info.tombSize,pMFile->info.size);
|
||||
|
||||
SMFile mf;
|
||||
SDiskID did;
|
||||
|
||||
// first create tmp meta file
|
||||
did.level = TFS_PRIMARY_LEVEL;
|
||||
did.id = TFS_PRIMARY_ID;
|
||||
tsdbInitMFile(&mf, did, REPO_ID(pRepo), FS_TXN_VERSION(REPO_FS(pRepo)) + 1);
|
||||
|
||||
if (tsdbCreateMFile(&mf, true) < 0) {
|
||||
tsdbError("vgId:%d failed to create META file since %s", REPO_ID(pRepo), tstrerror(terrno));
|
||||
return -1;
|
||||
}
|
||||
|
||||
tsdbInfo("vgId:%d meta file %s is created to compact meta data", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf));
|
||||
|
||||
// second iterator metaCache
|
||||
int code = -1;
|
||||
int64_t maxBufSize = 1024;
|
||||
SKVRecord *pRecord;
|
||||
void *pBuf = NULL;
|
||||
|
||||
pBuf = malloc((size_t)maxBufSize);
|
||||
if (pBuf == NULL) {
|
||||
goto _err;
|
||||
}
|
||||
|
||||
// init Comp
|
||||
assert(pfs->metaCacheComp == NULL);
|
||||
pfs->metaCacheComp = taosHashInit(4096, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
|
||||
if (pfs->metaCacheComp == NULL) {
|
||||
goto _err;
|
||||
}
|
||||
|
||||
pRecord = taosHashIterate(pfs->metaCache, NULL);
|
||||
while (pRecord) {
|
||||
if (tsdbSeekMFile(pMFile, pRecord->offset + sizeof(SKVRecord), SEEK_SET) < 0) {
|
||||
tsdbError("vgId:%d failed to seek file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
|
||||
tstrerror(terrno));
|
||||
goto _err;
|
||||
}
|
||||
if (pRecord->size > maxBufSize) {
|
||||
maxBufSize = pRecord->size;
|
||||
void* tmp = realloc(pBuf, (size_t)maxBufSize);
|
||||
if (tmp == NULL) {
|
||||
goto _err;
|
||||
}
|
||||
pBuf = tmp;
|
||||
}
|
||||
int nread = (int)tsdbReadMFile(pMFile, pBuf, pRecord->size);
|
||||
if (nread < 0) {
|
||||
tsdbError("vgId:%d failed to read file %s since %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile),
|
||||
tstrerror(terrno));
|
||||
goto _err;
|
||||
}
|
||||
|
||||
if (nread < pRecord->size) {
|
||||
tsdbError("vgId:%d failed to read file %s since file corrupted, expected read:%" PRId64 " actual read:%d",
|
||||
REPO_ID(pRepo), TSDB_FILE_FULL_NAME(pMFile), pRecord->size, nread);
|
||||
goto _err;
|
||||
}
|
||||
|
||||
if (tsdbUpdateMetaRecord(pfs, &mf, pRecord->uid, pBuf, (int)pRecord->size, true) < 0) {
|
||||
tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pRecord->uid,
|
||||
tstrerror(terrno));
|
||||
goto _err;
|
||||
}
|
||||
|
||||
pRecord = taosHashIterate(pfs->metaCache, pRecord);
|
||||
}
|
||||
code = 0;
|
||||
|
||||
_err:
|
||||
if (code == 0) TSDB_FILE_FSYNC(&mf);
|
||||
tsdbCloseMFile(&mf);
|
||||
tsdbCloseMFile(pMFile);
|
||||
|
||||
if (code == 0) {
|
||||
// rename meta.tmp -> meta
|
||||
tsdbInfo("vgId:%d meta file rename %s -> %s", REPO_ID(pRepo), TSDB_FILE_FULL_NAME(&mf), TSDB_FILE_FULL_NAME(pMFile));
|
||||
taosRename(mf.f.aname,pMFile->f.aname);
|
||||
tstrncpy(mf.f.aname, pMFile->f.aname, TSDB_FILENAME_LEN);
|
||||
tstrncpy(mf.f.rname, pMFile->f.rname, TSDB_FILENAME_LEN);
|
||||
// update current meta file info
|
||||
pfs->nstatus->pmf = NULL;
|
||||
tsdbUpdateMFile(pfs, &mf);
|
||||
|
||||
taosHashCleanup(pfs->metaCache);
|
||||
pfs->metaCache = pfs->metaCacheComp;
|
||||
pfs->metaCacheComp = NULL;
|
||||
} else {
|
||||
// remove meta.tmp file
|
||||
remove(mf.f.aname);
|
||||
taosHashCleanup(pfs->metaCacheComp);
|
||||
pfs->metaCacheComp = NULL;
|
||||
}
|
||||
|
||||
tfree(pBuf);
|
||||
|
||||
ASSERT(mf.info.nDels == 0);
|
||||
ASSERT(mf.info.tombSize == 0);
|
||||
|
||||
tsdbInfo("end compact tsdb meta file,code:%d,nRecords:%" PRId64 ",size:%" PRId64,
|
||||
code,mf.info.nRecords,mf.info.size);
|
||||
return code;
|
||||
}
|
||||
|
||||
// =================== Commit Time-Series Data
|
||||
static int tsdbCommitTSData(STsdbRepo *pRepo) {
|
||||
SMemTable *pMem = pRepo->imem;
|
||||
|
|
|
@ -216,6 +216,7 @@ STsdbFS *tsdbNewFS(STsdbCfg *pCfg) {
  }

  pfs->intxn = false;
  pfs->metaCacheComp = NULL;

  pfs->nstatus = tsdbNewFSStatus(maxFSet);
  if (pfs->nstatus == NULL) {

@ -16,11 +16,11 @@
#include "tsdbint.h"

static const char *TSDB_FNAME_SUFFIX[] = {
    "head",  // TSDB_FILE_HEAD
    "data",  // TSDB_FILE_DATA
    "last",  // TSDB_FILE_LAST
    "",      // TSDB_FILE_MAX
    "meta"   // TSDB_FILE_META
    "head",  // TSDB_FILE_HEAD
    "data",  // TSDB_FILE_DATA
    "last",  // TSDB_FILE_LAST
    "",      // TSDB_FILE_MAX
    "meta",  // TSDB_FILE_META
};

static void tsdbGetFilename(int vid, int fid, uint32_t ver, TSDB_FILE_T ftype, char *fname);

@ -43,6 +43,7 @@ static int tsdbRemoveTableFromStore(STsdbRepo *pRepo, STable *pTable);
static int tsdbRmTableFromMeta(STsdbRepo *pRepo, STable *pTable);
static int tsdbAdjustMetaTables(STsdbRepo *pRepo, int tid);
static int tsdbCheckTableTagVal(SKVRow *pKVRow, STSchema *pSchema);
static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable);
static int tsdbAddSchema(STable *pTable, STSchema *pSchema);
static void tsdbFreeTableSchema(STable *pTable);

@ -128,21 +129,16 @@ int tsdbCreateTable(STsdbRepo *repo, STableCfg *pCfg) {
  tsdbUnlockRepoMeta(pRepo);

  // Write to memtable action
  // TODO: refactor duplicate codes
  int   tlen = 0;
  void *pBuf = NULL;
  if (newSuper || superChanged) {
    tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, super);
    pBuf = tsdbAllocBytes(pRepo, tlen);
    if (pBuf == NULL) goto _err;
    void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, super);
    ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);
    // add insert new super table action
    if (tsdbInsertNewTableAction(pRepo, super) != 0) {
      goto _err;
    }
  }
  // add insert new table action
  if (tsdbInsertNewTableAction(pRepo, table) != 0) {
    goto _err;
  }
  tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, table);
  pBuf = tsdbAllocBytes(pRepo, tlen);
  if (pBuf == NULL) goto _err;
  void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, table);
  ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);

  if (tsdbCheckCommit(pRepo) < 0) return -1;

@ -383,7 +379,7 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
    tdDestroyTSchemaBuilder(&schemaBuilder);
  }

  // Chage in memory
  // Change in memory
  if (pNewSchema != NULL) {  // change super table tag schema
    TSDB_WLOCK_TABLE(pTable->pSuper);
    STSchema *pOldSchema = pTable->pSuper->tagSchema;
@ -426,6 +422,21 @@ int tsdbUpdateTableTagValue(STsdbRepo *repo, SUpdateTableTagValMsg *pMsg) {
}

// ------------------ INTERNAL FUNCTIONS ------------------
static int tsdbInsertNewTableAction(STsdbRepo *pRepo, STable* pTable) {
  int   tlen = 0;
  void *pBuf = NULL;

  tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pTable);
  pBuf = tsdbAllocBytes(pRepo, tlen);
  if (pBuf == NULL) {
    return -1;
  }
  void *tBuf = tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, pBuf, pTable);
  ASSERT(POINTER_DISTANCE(tBuf, pBuf) == tlen);

  return 0;
}

STsdbMeta *tsdbNewMeta(STsdbCfg *pCfg) {
  STsdbMeta *pMeta = (STsdbMeta *)calloc(1, sizeof(*pMeta));
  if (pMeta == NULL) {
@ -617,6 +628,7 @@ int16_t tsdbGetLastColumnsIndexByColId(STable* pTable, int16_t colId) {
  if (pTable->lastCols == NULL) {
    return -1;
  }
  // TODO: use binary search instead
  for (int16_t i = 0; i < pTable->maxColNum; ++i) {
    if (pTable->lastCols[i].colId == colId) {
      return i;
@ -734,10 +746,10 @@ void tsdbUpdateTableSchema(STsdbRepo *pRepo, STable *pTable, STSchema *pSchema,
  TSDB_WUNLOCK_TABLE(pCTable);

  if (insertAct) {
    int tlen = tsdbGetTableEncodeSize(TSDB_UPDATE_META, pCTable);
    void *buf = tsdbAllocBytes(pRepo, tlen);
    ASSERT(buf != NULL);
    tsdbInsertTableAct(pRepo, TSDB_UPDATE_META, buf, pCTable);
    if (tsdbInsertNewTableAction(pRepo, pCTable) != 0) {
      tsdbError("vgId:%d table %s tid %d uid %" PRIu64 " tsdbInsertNewTableAction fail", REPO_ID(pRepo), TABLE_CHAR_NAME(pTable),
                TABLE_TID(pTable), TABLE_UID(pTable));
    }
  }
}

@ -1250,8 +1262,14 @@ static int tsdbEncodeTable(void **buf, STable *pTable) {
    tlen += taosEncodeFixedU64(buf, TABLE_SUID(pTable));
    tlen += tdEncodeKVRow(buf, pTable->tagVal);
  } else {
    tlen += taosEncodeFixedU8(buf, (uint8_t)taosArrayGetSize(pTable->schema));
    for (int i = 0; i < taosArrayGetSize(pTable->schema); i++) {
    uint32_t arraySize = (uint32_t)taosArrayGetSize(pTable->schema);
    if(arraySize > UINT8_MAX) {
      tlen += taosEncodeFixedU8(buf, 0);
      tlen += taosEncodeFixedU32(buf, arraySize);
    } else {
      tlen += taosEncodeFixedU8(buf, (uint8_t)arraySize);
    }
    for (uint32_t i = 0; i < arraySize; i++) {
      STSchema *pSchema = taosArrayGetP(pTable->schema, i);
      tlen += tdEncodeSchema(buf, pSchema);
    }
@ -1284,8 +1302,11 @@ static void *tsdbDecodeTable(void *buf, STable **pRTable) {
    buf = taosDecodeFixedU64(buf, &TABLE_SUID(pTable));
    buf = tdDecodeKVRow(buf, &(pTable->tagVal));
  } else {
    uint8_t nSchemas;
    buf = taosDecodeFixedU8(buf, &nSchemas);
    uint32_t nSchemas = 0;
    buf = taosDecodeFixedU8(buf, (uint8_t *)&nSchemas);
    if(nSchemas == 0) {
      buf = taosDecodeFixedU32(buf, &nSchemas);
    }
    for (int i = 0; i < nSchemas; i++) {
      STSchema *pSchema;
      buf = tdDecodeSchema(buf, &pSchema);
@ -1485,4 +1506,4 @@ static void tsdbFreeTableSchema(STable *pTable) {

    taosArrayDestroy(pTable->schema);
  }
}
}

@ -262,6 +262,7 @@ int patternMatch(const char *patterStr, const char *str, size_t size, const SPat
    c1 = str[j++];

    if (j <= size) {
      if (c == '\\' && patterStr[i] == '_' && c1 == '_') { i++; continue; }
      if (c == c1 || tolower(c) == tolower(c1) || (c == pInfo->matchOne && c1 != 0)) {
        continue;
      }

@ -14,23 +14,24 @@
 */

#include "tfunctional.h"
#include "tarray.h"


tGenericSavedFunc* genericSavedFuncInit(GenericVaFunc func, int numOfArgs) {
  tGenericSavedFunc* pSavedFunc = malloc(sizeof(tGenericSavedFunc) + numOfArgs * (sizeof(void*)));
  if(pSavedFunc == NULL) return NULL;
  pSavedFunc->func = func;
  return pSavedFunc;
}

tI32SavedFunc* i32SavedFuncInit(I32VaFunc func, int numOfArgs) {
  tI32SavedFunc* pSavedFunc = malloc(sizeof(tI32SavedFunc) + numOfArgs * sizeof(void *));
  if(pSavedFunc == NULL) return NULL;
  pSavedFunc->func = func;
  return pSavedFunc;
}

tVoidSavedFunc* voidSavedFuncInit(VoidVaFunc func, int numOfArgs) {
  tVoidSavedFunc* pSavedFunc = malloc(sizeof(tVoidSavedFunc) + numOfArgs * sizeof(void*));
  if(pSavedFunc == NULL) return NULL;
  pSavedFunc->func = func;
  return pSavedFunc;
}

@ -0,0 +1,429 @@
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <errno.h>
#include <signal.h>


#define RECV_MAX_LINE 2048
#define ITEM_MAX_LINE 128
#define REQ_MAX_LINE  2048
#define REQ_CLI_COUNT 100


typedef enum
{
    uninited,
    connecting,
    connected,
    datasent
} conn_stat;


typedef enum
{
    false,
    true
} bool;


typedef unsigned short u16_t;
typedef unsigned int   u32_t;


typedef struct
{
    int                sockfd;
    int                index;
    conn_stat          state;
    size_t             nsent;
    size_t             nrecv;
    size_t             nlen;
    bool               error;
    bool               success;
    struct sockaddr_in serv_addr;
} socket_ctx;


int set_nonblocking(int sockfd)
{
    int ret;

    ret = fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    if (ret == -1) {
        printf("failed to fcntl for %d\r\n", sockfd);
        return ret;
    }

    return ret;
}


int create_socket(const char *ip, const u16_t port, socket_ctx *pctx)
{
    int ret;

    if (ip == NULL || port == 0 || pctx == NULL) {
        printf("invalid parameter\r\n");
        return -1;
    }

    pctx->sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (pctx->sockfd == -1) {
        printf("failed to create socket\r\n");
        return -1;
    }

    bzero(&pctx->serv_addr, sizeof(struct sockaddr_in));

    pctx->serv_addr.sin_family = AF_INET;
    pctx->serv_addr.sin_port = htons(port);

    ret = inet_pton(AF_INET, ip, &pctx->serv_addr.sin_addr);
    if (ret <= 0) {
        printf("inet_pton error, ip: %s\r\n", ip);
        return -1;
    }

    ret = set_nonblocking(pctx->sockfd);
    if (ret == -1) {
        printf("failed to set %d as nonblocking\r\n", pctx->sockfd);
        return -1;
    }

    return pctx->sockfd;
}


void close_sockets(socket_ctx *pctx, int cnt)
{
    int i;

    if (pctx == NULL) {
        return;
    }

    for (i = 0; i < cnt; i++) {
        if (pctx[i].sockfd > 0) {
            close(pctx[i].sockfd);
            pctx[i].sockfd = -1;
        }
    }
}


int proc_pending_error(socket_ctx *ctx)
{
    int       ret;
    int       err;
    socklen_t len;

    if (ctx == NULL) {
        return 0;
    }

    err = 0;
    len = sizeof(int);

    ret = getsockopt(ctx->sockfd, SOL_SOCKET, SO_ERROR, (void *)&err, &len);
    if (ret == -1) {
        err = errno;
    }

    if (err) {
        printf("failed to connect at index: %d\r\n", ctx->index);

        close(ctx->sockfd);
        ctx->sockfd = -1;

        return -1;
    }

    return 0;
}


void build_http_request(char *ip, u16_t port, char *url, char *sql, char *req_buf, int len)
{
    char req_line[ITEM_MAX_LINE];
    char req_host[ITEM_MAX_LINE];
    char req_cont_type[ITEM_MAX_LINE];
    char req_cont_len[ITEM_MAX_LINE];
    const char* req_auth = "Authorization: Basic cm9vdDp0YW9zZGF0YQ==\r\n";

    if (ip == NULL || port == 0 ||
        url == NULL || url[0] == '\0' ||
        sql == NULL || sql[0] == '\0' ||
        req_buf == NULL || len <= 0)
    {
        return;
    }

    snprintf(req_line, ITEM_MAX_LINE, "POST %s HTTP/1.1\r\n", url);
    snprintf(req_host, ITEM_MAX_LINE, "HOST: %s:%d\r\n", ip, port);
    snprintf(req_cont_type, ITEM_MAX_LINE, "%s\r\n", "Content-Type: text/plain");
    snprintf(req_cont_len, ITEM_MAX_LINE, "Content-Length: %ld\r\n\r\n", strlen(sql));

    snprintf(req_buf, len, "%s%s%s%s%s%s", req_line, req_host, req_auth, req_cont_type, req_cont_len, sql);
}


int add_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &evs_op);
}


int mod_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &evs_op);
}


int del_event(int epfd, int sockfd)
{
    struct epoll_event evs_op;

    evs_op.events = 0;
    evs_op.data.ptr = NULL;

    return epoll_ctl(epfd, EPOLL_CTL_DEL, sockfd, &evs_op);
}


int main()
{
    int                i;
    int                ret, n, nsent, nrecv;
    int                epfd;
    u32_t              events;
    char              *str;
    socket_ctx        *pctx, ctx[REQ_CLI_COUNT];
    char              *ip = "127.0.0.1";
    char              *url = "/rest/sql";
    u16_t              port = 6041;
    struct epoll_event evs[REQ_CLI_COUNT];
    char               sql[REQ_MAX_LINE];
    char               send_buf[REQ_CLI_COUNT][REQ_MAX_LINE + 5 * ITEM_MAX_LINE];
    char               recv_buf[REQ_CLI_COUNT][RECV_MAX_LINE];
    int                count;

    signal(SIGPIPE, SIG_IGN);

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ctx[i].sockfd = -1;
        ctx[i].index = i;
        ctx[i].state = uninited;
        ctx[i].nsent = 0;
        ctx[i].nrecv = 0;
        ctx[i].error = false;
        ctx[i].success = false;

        memset(sql, 0, REQ_MAX_LINE);
        memset(send_buf[i], 0, REQ_MAX_LINE + 5 * ITEM_MAX_LINE);
        memset(recv_buf[i], 0, RECV_MAX_LINE);

        snprintf(sql, REQ_MAX_LINE, "create database if not exists db%d precision 'us'", i);
        build_http_request(ip, port, url, sql, send_buf[i], REQ_MAX_LINE + 5 * ITEM_MAX_LINE);

        ctx[i].nlen = strlen(send_buf[i]);
    }

    epfd = epoll_create(REQ_CLI_COUNT);
    if (epfd <= 0) {
        printf("failed to create epoll\r\n");
        goto failed;
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = create_socket(ip, port, &ctx[i]);
        if (ret == -1) {
            printf("failed to create socket at %d\r\n", i);
            goto failed;
        }
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        events = EPOLLET | EPOLLIN | EPOLLOUT;
        ret = add_event(epfd, ctx[i].sockfd, events, (void *) &ctx[i]);
        if (ret == -1) {
            printf("failed to add sockfd at %d to epoll\r\n", i);
            goto failed;
        }
    }

    count = 0;

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = connect(ctx[i].sockfd, (struct sockaddr *) &ctx[i].serv_addr, sizeof(ctx[i].serv_addr));
        if (ret == -1) {
            if (errno != EINPROGRESS) {
                printf("connect error, index: %d\r\n", ctx[i].index);
                (void) del_event(epfd, ctx[i].sockfd);
                close(ctx[i].sockfd);
                ctx[i].sockfd = -1;
            } else {
                ctx[i].state = connecting;
                count++;
            }

            continue;
        }

        ctx[i].state = connected;
        count++;
    }

    printf("clients: %d\r\n", count);

    while (count > 0) {
        n = epoll_wait(epfd, evs, REQ_CLI_COUNT, 0);
        if (n == -1) {
            if (errno != EINTR) {
                printf("epoll_wait error, reason: %s\r\n", strerror(errno));
                break;
            }
        } else {
            for (i = 0; i < n; i++) {
                if (evs[i].events & EPOLLERR) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("event error, index: %d\r\n", pctx->index);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                } else if (evs[i].events & EPOLLIN) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        nrecv = recv(pctx->sockfd, recv_buf[pctx->index] + pctx->nrecv, RECV_MAX_LINE, 0);
                        if (nrecv == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to recv, index: %d, reason: %s\r\n", pctx->index, strerror(errno));
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        } else if (nrecv == 0) {
                            printf("peer closed connection, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;
                            break;
                        }

                        pctx->nrecv += nrecv;
                        if (pctx->nrecv > 12) {
                            if (pctx->error == false && pctx->success == false) {
                                str = recv_buf[pctx->index] + 9;
                                if (str[0] != '2' || str[1] != '0' || str[2] != '0') {
                                    printf("response error, index: %d, recv: %s\r\n", pctx->index, recv_buf[pctx->index]);
                                    pctx->error = true;
                                } else {
                                    printf("response ok, index: %d\r\n", pctx->index);
                                    pctx->success = true;
                                }
                            }
                        }
                    }
                } else if (evs[i].events & EPOLLOUT) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        nsent = send(pctx->sockfd, send_buf[pctx->index] + pctx->nsent, pctx->nlen - pctx->nsent, 0);
                        if (nsent == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to send, index: %d\r\n", pctx->index);
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        }

                        if (nsent == (int) (pctx->nlen - pctx->nsent)) {
                            printf("request done, request: %s, index: %d\r\n", send_buf[pctx->index], pctx->index);

                            pctx->state = datasent;

                            events = EPOLLET | EPOLLIN;
                            (void) mod_event(epfd, pctx->sockfd, events, (void *)pctx);

                            break;
                        } else {
                            pctx->nsent += nsent;
                        }
                    }
                } else {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("unknown event(%u), index: %d\r\n", evs[i].events, pctx->index);
                    (void) del_event(epfd, pctx->sockfd);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                }
            }
        }
    }

failed:

    if (epfd > 0) {
        close(epfd);
    }

    close_sockets(ctx, REQ_CLI_COUNT);

    return 0;
}

@ -0,0 +1,433 @@
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <errno.h>
#include <signal.h>


#define RECV_MAX_LINE 2048
#define ITEM_MAX_LINE 128
#define REQ_MAX_LINE  2048
#define REQ_CLI_COUNT 100


typedef enum
{
    uninited,
    connecting,
    connected,
    datasent
} conn_stat;


typedef enum
{
    false,
    true
} bool;


typedef unsigned short u16_t;
typedef unsigned int   u32_t;


typedef struct
{
    int                sockfd;
    int                index;
    conn_stat          state;
    size_t             nsent;
    size_t             nrecv;
    size_t             nlen;
    bool               error;
    bool               success;
    struct sockaddr_in serv_addr;
} socket_ctx;


int set_nonblocking(int sockfd)
{
    int ret;

    ret = fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    if (ret == -1) {
        printf("failed to fcntl for %d\r\n", sockfd);
        return ret;
    }

    return ret;
}


int create_socket(const char *ip, const u16_t port, socket_ctx *pctx)
{
    int ret;

    if (ip == NULL || port == 0 || pctx == NULL) {
        printf("invalid parameter\r\n");
        return -1;
    }

    pctx->sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (pctx->sockfd == -1) {
        printf("failed to create socket\r\n");
        return -1;
    }

    bzero(&pctx->serv_addr, sizeof(struct sockaddr_in));

    pctx->serv_addr.sin_family = AF_INET;
    pctx->serv_addr.sin_port = htons(port);

    ret = inet_pton(AF_INET, ip, &pctx->serv_addr.sin_addr);
    if (ret <= 0) {
        printf("inet_pton error, ip: %s\r\n", ip);
        return -1;
    }

    ret = set_nonblocking(pctx->sockfd);
    if (ret == -1) {
        printf("failed to set %d as nonblocking\r\n", pctx->sockfd);
        return -1;
    }

    return pctx->sockfd;
}


void close_sockets(socket_ctx *pctx, int cnt)
{
    int i;

    if (pctx == NULL) {
        return;
    }

    for (i = 0; i < cnt; i++) {
        if (pctx[i].sockfd > 0) {
            close(pctx[i].sockfd);
            pctx[i].sockfd = -1;
        }
    }
}


int proc_pending_error(socket_ctx *ctx)
{
    int       ret;
    int       err;
    socklen_t len;

    if (ctx == NULL) {
        return 0;
    }

    err = 0;
    len = sizeof(int);

    ret = getsockopt(ctx->sockfd, SOL_SOCKET, SO_ERROR, (void *)&err, &len);
    if (ret == -1) {
        err = errno;
    }

    if (err) {
        printf("failed to connect at index: %d\r\n", ctx->index);

        close(ctx->sockfd);
        ctx->sockfd = -1;

        return -1;
    }

    return 0;
}


void build_http_request(char *ip, u16_t port, char *url, char *sql, char *req_buf, int len)
{
    char req_line[ITEM_MAX_LINE];
    char req_host[ITEM_MAX_LINE];
    char req_cont_type[ITEM_MAX_LINE];
    char req_cont_len[ITEM_MAX_LINE];
    const char* req_auth = "Authorization: Basic cm9vdDp0YW9zZGF0YQ==\r\n";

    if (ip == NULL || port == 0 ||
        url == NULL || url[0] == '\0' ||
        sql == NULL || sql[0] == '\0' ||
        req_buf == NULL || len <= 0)
    {
        return;
    }

    snprintf(req_line, ITEM_MAX_LINE, "POST %s HTTP/1.1\r\n", url);
    snprintf(req_host, ITEM_MAX_LINE, "HOST: %s:%d\r\n", ip, port);
    snprintf(req_cont_type, ITEM_MAX_LINE, "%s\r\n", "Content-Type: text/plain");
    snprintf(req_cont_len, ITEM_MAX_LINE, "Content-Length: %ld\r\n\r\n", strlen(sql));

    snprintf(req_buf, len, "%s%s%s%s%s%s", req_line, req_host, req_auth, req_cont_type, req_cont_len, sql);
}


int add_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &evs_op);
}


int mod_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &evs_op);
}


int del_event(int epfd, int sockfd)
{
    struct epoll_event evs_op;

    evs_op.events = 0;
    evs_op.data.ptr = NULL;

    return epoll_ctl(epfd, EPOLL_CTL_DEL, sockfd, &evs_op);
}


int main()
{
    int                i;
    int                ret, n, nsent, nrecv;
    int                epfd;
    u32_t              events;
    char              *str;
    socket_ctx        *pctx, ctx[REQ_CLI_COUNT];
    char              *ip = "127.0.0.1";
    char              *url_prefix = "/rest/sql";
    char               url[ITEM_MAX_LINE];
    u16_t              port = 6041;
    struct epoll_event evs[REQ_CLI_COUNT];
    char               sql[REQ_MAX_LINE];
    char               send_buf[REQ_CLI_COUNT][REQ_MAX_LINE + 5 * ITEM_MAX_LINE];
    char               recv_buf[REQ_CLI_COUNT][RECV_MAX_LINE];
    int                count;

    signal(SIGPIPE, SIG_IGN);

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ctx[i].sockfd = -1;
        ctx[i].index = i;
        ctx[i].state = uninited;
        ctx[i].nsent = 0;
        ctx[i].nrecv = 0;
        ctx[i].error = false;
        ctx[i].success = false;

        memset(url, 0, ITEM_MAX_LINE);
        memset(sql, 0, REQ_MAX_LINE);
        memset(send_buf[i], 0, REQ_MAX_LINE + 5 * ITEM_MAX_LINE);
        memset(recv_buf[i], 0, RECV_MAX_LINE);

        snprintf(url, ITEM_MAX_LINE, "%s/db%d", url_prefix, i);
        snprintf(sql, REQ_MAX_LINE, "create table if not exists tb%d (ts timestamp, index int, val binary(40))", i);

        build_http_request(ip, port, url, sql, send_buf[i], REQ_MAX_LINE + 5 * ITEM_MAX_LINE);

        ctx[i].nlen = strlen(send_buf[i]);
    }

    epfd = epoll_create(REQ_CLI_COUNT);
    if (epfd <= 0) {
        printf("failed to create epoll\r\n");
        goto failed;
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = create_socket(ip, port, &ctx[i]);
        if (ret == -1) {
            printf("failed to create socket, index: %d\r\n", i);
            goto failed;
        }
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        events = EPOLLET | EPOLLIN | EPOLLOUT;
        ret = add_event(epfd, ctx[i].sockfd, events, (void *) &ctx[i]);
        if (ret == -1) {
            printf("failed to add sockfd to epoll, index: %d\r\n", i);
            goto failed;
        }
    }

    count = 0;

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = connect(ctx[i].sockfd, (struct sockaddr *) &ctx[i].serv_addr, sizeof(ctx[i].serv_addr));
        if (ret == -1) {
            if (errno != EINPROGRESS) {
                printf("connect error, index: %d\r\n", ctx[i].index);
                (void) del_event(epfd, ctx[i].sockfd);
                close(ctx[i].sockfd);
                ctx[i].sockfd = -1;
            } else {
                ctx[i].state = connecting;
                count++;
            }

            continue;
|
||||
}
|
||||
|
||||
ctx[i].state = connected;
|
||||
count++;
|
||||
}
|
||||
|
||||
printf("clients: %d\r\n", count);
|
||||
|
||||
while (count > 0) {
|
||||
n = epoll_wait(epfd, evs, REQ_CLI_COUNT, 0);
|
||||
if (n == -1) {
|
||||
if (errno != EINTR) {
|
||||
printf("epoll_wait error, reason: %s\r\n", strerror(errno));
|
||||
break;
|
||||
}
|
||||
} else {
|
||||
for (i = 0; i < n; i++) {
|
||||
if (evs[i].events & EPOLLERR) {
|
||||
pctx = (socket_ctx *) evs[i].data.ptr;
|
||||
printf("event error, index: %d\r\n", pctx->index);
|
||||
close(pctx->sockfd);
|
||||
pctx->sockfd = -1;
|
||||
count--;
|
||||
} else if (evs[i].events & EPOLLIN) {
|
||||
pctx = (socket_ctx *) evs[i].data.ptr;
|
||||
if (pctx->state == connecting) {
|
||||
ret = proc_pending_error(pctx);
|
||||
if (ret == 0) {
|
||||
printf("client connected, index: %d\r\n", pctx->index);
|
||||
pctx->state = connected;
|
||||
} else {
|
||||
printf("client connect failed, index: %d\r\n", pctx->index);
|
||||
(void) del_event(epfd, pctx->sockfd);
|
||||
close(pctx->sockfd);
|
||||
pctx->sockfd = -1;
|
||||
count--;
|
||||
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
for ( ;; ) {
|
||||
nrecv = recv(pctx->sockfd, recv_buf[pctx->index] + pctx->nrecv, RECV_MAX_LINE, 0);
|
||||
if (nrecv == -1) {
|
||||
if (errno != EAGAIN && errno != EINTR) {
|
||||
printf("failed to recv, index: %d, reason: %s\r\n", pctx->index, strerror(errno));
|
||||
(void) del_event(epfd, pctx->sockfd);
|
||||
close(pctx->sockfd);
|
||||
pctx->sockfd = -1;
|
||||
count--;
|
||||
}
|
||||
|
||||
break;
|
||||
} else if (nrecv == 0) {
|
||||
printf("peer closed connection, index: %d\r\n", pctx->index);
|
||||
(void) del_event(epfd, pctx->sockfd);
|
||||
close(pctx->sockfd);
|
||||
pctx->sockfd = -1;
|
||||
count--;
|
||||
break;
|
||||
}
|
||||
|
||||
pctx->nrecv += nrecv;
|
||||
if (pctx->nrecv > 12) {
|
||||
if (pctx->error == false && pctx->success == false) {
|
||||
str = recv_buf[pctx->index] + 9;
|
||||
if (str[0] != '2' || str[1] != '0' || str[2] != '0') {
|
||||
printf("response error, index: %d, recv: %s\r\n", pctx->index, recv_buf[pctx->index]);
|
||||
pctx->error = true;
|
||||
} else {
|
||||
printf("response ok, index: %d\r\n", pctx->index);
|
||||
pctx->success = true;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
} else if (evs[i].events & EPOLLOUT) {
|
||||
pctx = (socket_ctx *) evs[i].data.ptr;
|
||||
if (pctx->state == connecting) {
|
||||
ret = proc_pending_error(pctx);
|
||||
if (ret == 0) {
|
||||
printf("client connected, index: %d\r\n", pctx->index);
|
||||
pctx->state = connected;
|
||||
} else {
|
||||
printf("client connect failed, index: %d\r\n", pctx->index);
|
||||
(void) del_event(epfd, pctx->sockfd);
|
||||
close(pctx->sockfd);
|
||||
pctx->sockfd = -1;
|
||||
count--;
|
||||
|
||||
continue;
|
||||
}
|
||||
}
|
||||
|
||||
for ( ;; ) {
|
||||
nsent = send(pctx->sockfd, send_buf[pctx->index] + pctx->nsent, pctx->nlen - pctx->nsent, 0);
|
||||
if (nsent == -1) {
|
||||
if (errno != EAGAIN && errno != EINTR) {
|
||||
printf("failed to send, index: %d\r\n", pctx->index);
|
||||
(void) del_event(epfd, pctx->sockfd);
|
||||
close(pctx->sockfd);
|
||||
pctx->sockfd = -1;
|
||||
count--;
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
|
||||
if (nsent == (int) (pctx->nlen - pctx->nsent)) {
|
||||
printf("request done, request: %s, index: %d\r\n", send_buf[pctx->index], pctx->index);
|
||||
|
||||
pctx->state = datasent;
|
||||
|
||||
events = EPOLLET | EPOLLIN;
|
||||
(void) mod_event(epfd, pctx->sockfd, events, (void *)pctx);
|
||||
|
||||
break;
|
||||
} else {
|
||||
pctx->nsent += nsent;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
pctx = (socket_ctx *) evs[i].data.ptr;
|
||||
printf("unknown event(%u), index: %d\r\n", evs[i].events, pctx->index);
|
||||
(void) del_event(epfd, pctx->sockfd);
|
||||
close(pctx->sockfd);
|
||||
pctx->sockfd = -1;
|
||||
count--;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
failed:
|
||||
|
||||
if (epfd > 0) {
|
||||
close(epfd);
|
||||
}
|
||||
|
||||
close_sockets(ctx, REQ_CLI_COUNT);
|
||||
|
||||
return 0;
|
||||
}
|
|
@@ -0,0 +1,433 @@
/* Stress test: REQ_CLI_COUNT concurrent nonblocking clients, each POSTing a
 * "drop database" statement to the REST endpoint (/rest/sql) via epoll. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <errno.h>
#include <signal.h>


#define RECV_MAX_LINE 2048
#define ITEM_MAX_LINE 128
#define REQ_MAX_LINE  2048
#define REQ_CLI_COUNT 100


typedef enum
{
    uninited,
    connecting,
    connected,
    datasent
} conn_stat;


typedef enum
{
    false,
    true
} bool;


typedef unsigned short u16_t;
typedef unsigned int   u32_t;


typedef struct
{
    int sockfd;
    int index;
    conn_stat state;
    size_t nsent;
    size_t nrecv;
    size_t nlen;
    bool error;
    bool success;
    struct sockaddr_in serv_addr;
} socket_ctx;


int set_nonblocking(int sockfd)
{
    int ret;

    ret = fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    if (ret == -1) {
        printf("failed to fcntl for %d\r\n", sockfd);
        return ret;
    }

    return ret;
}


int create_socket(const char *ip, const u16_t port, socket_ctx *pctx)
{
    int ret;

    if (ip == NULL || port == 0 || pctx == NULL) {
        printf("invalid parameter\r\n");
        return -1;
    }

    pctx->sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (pctx->sockfd == -1) {
        printf("failed to create socket\r\n");
        return -1;
    }

    bzero(&pctx->serv_addr, sizeof(struct sockaddr_in));

    pctx->serv_addr.sin_family = AF_INET;
    pctx->serv_addr.sin_port = htons(port);

    ret = inet_pton(AF_INET, ip, &pctx->serv_addr.sin_addr);
    if (ret <= 0) {
        printf("inet_pton error, ip: %s\r\n", ip);
        return -1;
    }

    ret = set_nonblocking(pctx->sockfd);
    if (ret == -1) {
        printf("failed to set %d as nonblocking\r\n", pctx->sockfd);
        return -1;
    }

    return pctx->sockfd;
}


void close_sockets(socket_ctx *pctx, int cnt)
{
    int i;

    if (pctx == NULL) {
        return;
    }

    for (i = 0; i < cnt; i++) {
        if (pctx[i].sockfd > 0) {
            close(pctx[i].sockfd);
            pctx[i].sockfd = -1;
        }
    }
}


int proc_pending_error(socket_ctx *ctx)
{
    int ret;
    int err;
    socklen_t len;

    if (ctx == NULL) {
        return 0;
    }

    err = 0;
    len = sizeof(int);

    ret = getsockopt(ctx->sockfd, SOL_SOCKET, SO_ERROR, (void *)&err, &len);
    if (ret == -1) {
        err = errno;
    }

    if (err) {
        printf("failed to connect at index: %d\r\n", ctx->index);

        close(ctx->sockfd);
        ctx->sockfd = -1;

        return -1;
    }

    return 0;
}


void build_http_request(char *ip, u16_t port, char *url, char *sql, char *req_buf, int len)
{
    char req_line[ITEM_MAX_LINE];
    char req_host[ITEM_MAX_LINE];
    char req_cont_type[ITEM_MAX_LINE];
    char req_cont_len[ITEM_MAX_LINE];
    const char* req_auth = "Authorization: Basic cm9vdDp0YW9zZGF0YQ==\r\n";

    if (ip == NULL || port == 0 ||
        url == NULL || url[0] == '\0' ||
        sql == NULL || sql[0] == '\0' ||
        req_buf == NULL || len <= 0)
    {
        return;
    }

    snprintf(req_line, ITEM_MAX_LINE, "POST %s HTTP/1.1\r\n", url);
    snprintf(req_host, ITEM_MAX_LINE, "HOST: %s:%d\r\n", ip, port);
    snprintf(req_cont_type, ITEM_MAX_LINE, "%s\r\n", "Content-Type: text/plain");
    snprintf(req_cont_len, ITEM_MAX_LINE, "Content-Length: %zu\r\n\r\n", strlen(sql));

    snprintf(req_buf, len, "%s%s%s%s%s%s", req_line, req_host, req_auth, req_cont_type, req_cont_len, sql);
}


int add_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &evs_op);
}


int mod_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &evs_op);
}


int del_event(int epfd, int sockfd)
{
    struct epoll_event evs_op;

    evs_op.events = 0;
    evs_op.data.ptr = NULL;

    return epoll_ctl(epfd, EPOLL_CTL_DEL, sockfd, &evs_op);
}


int main()
{
    int i;
    int ret, n, nsent, nrecv;
    int epfd;
    u32_t events;
    char *str;
    socket_ctx *pctx, ctx[REQ_CLI_COUNT];
    char *ip = "127.0.0.1";
    char *url_prefix = "/rest/sql";
    char url[ITEM_MAX_LINE];
    u16_t port = 6041;
    struct epoll_event evs[REQ_CLI_COUNT];
    char sql[REQ_MAX_LINE];
    char send_buf[REQ_CLI_COUNT][REQ_MAX_LINE + 5 * ITEM_MAX_LINE];
    char recv_buf[REQ_CLI_COUNT][RECV_MAX_LINE];
    int count;

    signal(SIGPIPE, SIG_IGN);

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ctx[i].sockfd = -1;
        ctx[i].index = i;
        ctx[i].state = uninited;
        ctx[i].nsent = 0;
        ctx[i].nrecv = 0;
        ctx[i].error = false;
        ctx[i].success = false;

        memset(url, 0, ITEM_MAX_LINE);
        memset(sql, 0, REQ_MAX_LINE);
        memset(send_buf[i], 0, REQ_MAX_LINE + 5 * ITEM_MAX_LINE);
        memset(recv_buf[i], 0, RECV_MAX_LINE);

        snprintf(url, ITEM_MAX_LINE, "%s/db%d", url_prefix, i);
        snprintf(sql, REQ_MAX_LINE, "drop database if exists db%d", i);

        build_http_request(ip, port, url, sql, send_buf[i], REQ_MAX_LINE + 5 * ITEM_MAX_LINE);

        ctx[i].nlen = strlen(send_buf[i]);
    }

    epfd = epoll_create(REQ_CLI_COUNT);
    if (epfd <= 0) {
        printf("failed to create epoll\r\n");
        goto failed;
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = create_socket(ip, port, &ctx[i]);
        if (ret == -1) {
            printf("failed to create socket, index: %d\r\n", i);
            goto failed;
        }
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        events = EPOLLET | EPOLLIN | EPOLLOUT;
        ret = add_event(epfd, ctx[i].sockfd, events, (void *) &ctx[i]);
        if (ret == -1) {
            printf("failed to add sockfd to epoll, index: %d\r\n", i);
            goto failed;
        }
    }

    count = 0;

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = connect(ctx[i].sockfd, (struct sockaddr *) &ctx[i].serv_addr, sizeof(ctx[i].serv_addr));
        if (ret == -1) {
            if (errno != EINPROGRESS) {
                printf("connect error, index: %d\r\n", ctx[i].index);
                (void) del_event(epfd, ctx[i].sockfd);
                close(ctx[i].sockfd);
                ctx[i].sockfd = -1;
            } else {
                ctx[i].state = connecting;
                count++;
            }

            continue;
        }

        ctx[i].state = connected;
        count++;
    }

    printf("clients: %d\r\n", count);

    while (count > 0) {
        n = epoll_wait(epfd, evs, REQ_CLI_COUNT, 0);
        if (n == -1) {
            if (errno != EINTR) {
                printf("epoll_wait error, reason: %s\r\n", strerror(errno));
                break;
            }
        } else {
            for (i = 0; i < n; i++) {
                if (evs[i].events & EPOLLERR) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("event error, index: %d\r\n", pctx->index);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                } else if (evs[i].events & EPOLLIN) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        /* bound the read by the space left in the buffer, not the full size */
                        nrecv = recv(pctx->sockfd, recv_buf[pctx->index] + pctx->nrecv, RECV_MAX_LINE - pctx->nrecv, 0);
                        if (nrecv == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to recv, index: %d, reason: %s\r\n", pctx->index, strerror(errno));
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        } else if (nrecv == 0) {
                            printf("peer closed connection, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;
                            break;
                        }

                        pctx->nrecv += nrecv;
                        if (pctx->nrecv > 12) {
                            if (pctx->error == false && pctx->success == false) {
                                str = recv_buf[pctx->index] + 9;
                                if (str[0] != '2' || str[1] != '0' || str[2] != '0') {
                                    printf("response error, index: %d, recv: %s\r\n", pctx->index, recv_buf[pctx->index]);
                                    pctx->error = true;
                                } else {
                                    printf("response ok, index: %d\r\n", pctx->index);
                                    pctx->success = true;
                                }
                            }
                        }
                    }
                } else if (evs[i].events & EPOLLOUT) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        nsent = send(pctx->sockfd, send_buf[pctx->index] + pctx->nsent, pctx->nlen - pctx->nsent, 0);
                        if (nsent == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to send, index: %d\r\n", pctx->index);
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        }

                        if (nsent == (int) (pctx->nlen - pctx->nsent)) {
                            printf("request done, request: %s, index: %d\r\n", send_buf[pctx->index], pctx->index);

                            pctx->state = datasent;

                            events = EPOLLET | EPOLLIN;
                            (void) mod_event(epfd, pctx->sockfd, events, (void *)pctx);

                            break;
                        } else {
                            pctx->nsent += nsent;
                        }
                    }
                } else {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("unknown event(%u), index: %d\r\n", evs[i].events, pctx->index);
                    (void) del_event(epfd, pctx->sockfd);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                }
            }
        }
    }

failed:

    if (epfd > 0) {
        close(epfd);
    }

    close_sockets(ctx, REQ_CLI_COUNT);

    return 0;
}
@@ -0,0 +1,455 @@
/* Stress test: REQ_CLI_COUNT concurrent nonblocking clients, each POSTing a
 * batched "insert into tb<N> values ..." statement to the REST endpoint. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <inttypes.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/time.h>
#include <errno.h>
#include <signal.h>


#define RECV_MAX_LINE 2048
#define ITEM_MAX_LINE 128
#define REQ_MAX_LINE  4096
#define REQ_CLI_COUNT 100


typedef enum
{
    uninited,
    connecting,
    connected,
    datasent
} conn_stat;


typedef enum
{
    false,
    true
} bool;


typedef struct
{
    int sockfd;
    int index;
    conn_stat state;
    size_t nsent;
    size_t nrecv;
    size_t nlen;
    bool error;
    bool success;
    struct sockaddr_in serv_addr;
} socket_ctx;


int set_nonblocking(int sockfd)
{
    int ret;

    ret = fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    if (ret == -1) {
        printf("failed to fcntl for %d\r\n", sockfd);
        return ret;
    }

    return ret;
}


int create_socket(const char *ip, const uint16_t port, socket_ctx *pctx)
{
    int ret;

    if (ip == NULL || port == 0 || pctx == NULL) {
        printf("invalid parameter\r\n");
        return -1;
    }

    pctx->sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (pctx->sockfd == -1) {
        printf("failed to create socket\r\n");
        return -1;
    }

    bzero(&pctx->serv_addr, sizeof(struct sockaddr_in));

    pctx->serv_addr.sin_family = AF_INET;
    pctx->serv_addr.sin_port = htons(port);

    ret = inet_pton(AF_INET, ip, &pctx->serv_addr.sin_addr);
    if (ret <= 0) {
        printf("inet_pton error, ip: %s\r\n", ip);
        return -1;
    }

    ret = set_nonblocking(pctx->sockfd);
    if (ret == -1) {
        printf("failed to set %d as nonblocking\r\n", pctx->sockfd);
        return -1;
    }

    return pctx->sockfd;
}


void close_sockets(socket_ctx *pctx, int cnt)
{
    int i;

    if (pctx == NULL) {
        return;
    }

    for (i = 0; i < cnt; i++) {
        if (pctx[i].sockfd > 0) {
            close(pctx[i].sockfd);
            pctx[i].sockfd = -1;
        }
    }
}


int proc_pending_error(socket_ctx *ctx)
{
    int ret;
    int err;
    socklen_t len;

    if (ctx == NULL) {
        return 0;
    }

    err = 0;
    len = sizeof(int);

    ret = getsockopt(ctx->sockfd, SOL_SOCKET, SO_ERROR, (void *)&err, &len);
    if (ret == -1) {
        err = errno;
    }

    if (err) {
        printf("failed to connect at index: %d\r\n", ctx->index);

        close(ctx->sockfd);
        ctx->sockfd = -1;

        return -1;
    }

    return 0;
}


void build_http_request(char *ip, uint16_t port, char *url, char *sql, char *req_buf, int len)
{
    char req_line[ITEM_MAX_LINE];
    char req_host[ITEM_MAX_LINE];
    char req_cont_type[ITEM_MAX_LINE];
    char req_cont_len[ITEM_MAX_LINE];
    const char* req_auth = "Authorization: Basic cm9vdDp0YW9zZGF0YQ==\r\n";

    if (ip == NULL || port == 0 ||
        url == NULL || url[0] == '\0' ||
        sql == NULL || sql[0] == '\0' ||
        req_buf == NULL || len <= 0)
    {
        return;
    }

    snprintf(req_line, ITEM_MAX_LINE, "POST %s HTTP/1.1\r\n", url);
    snprintf(req_host, ITEM_MAX_LINE, "HOST: %s:%d\r\n", ip, port);
    snprintf(req_cont_type, ITEM_MAX_LINE, "%s\r\n", "Content-Type: text/plain");
    snprintf(req_cont_len, ITEM_MAX_LINE, "Content-Length: %zu\r\n\r\n", strlen(sql));

    snprintf(req_buf, len, "%s%s%s%s%s%s", req_line, req_host, req_auth, req_cont_type, req_cont_len, sql);
}


int add_event(int epfd, int sockfd, uint32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &evs_op);
}


int mod_event(int epfd, int sockfd, uint32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &evs_op);
}


int del_event(int epfd, int sockfd)
{
    struct epoll_event evs_op;

    evs_op.events = 0;
    evs_op.data.ptr = NULL;

    return epoll_ctl(epfd, EPOLL_CTL_DEL, sockfd, &evs_op);
}


int main()
{
    int i;
    int ret, n, nsent, nrecv, offset;
    int epfd;
    uint32_t events;
    char *str;
    socket_ctx *pctx, ctx[REQ_CLI_COUNT];
    char *ip = "127.0.0.1";
    char *url_prefix = "/rest/sql";
    char url[ITEM_MAX_LINE];
    uint16_t port = 6041;
    struct epoll_event evs[REQ_CLI_COUNT];
    struct timeval now;
    int64_t start_time;
    char sql[REQ_MAX_LINE];
    char send_buf[REQ_CLI_COUNT][REQ_MAX_LINE + 5 * ITEM_MAX_LINE];
    char recv_buf[REQ_CLI_COUNT][RECV_MAX_LINE];
    int count;

    signal(SIGPIPE, SIG_IGN);

    gettimeofday(&now, NULL);
    /* cast before multiplying so the microsecond value cannot overflow a 32-bit time_t */
    start_time = (int64_t) now.tv_sec * 1000000 + now.tv_usec;

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ctx[i].sockfd = -1;
        ctx[i].index = i;
        ctx[i].state = uninited;
        ctx[i].nsent = 0;
        ctx[i].nrecv = 0;
        ctx[i].error = false;
        ctx[i].success = false;

        memset(url, 0, ITEM_MAX_LINE);
        memset(sql, 0, REQ_MAX_LINE);
        memset(send_buf[i], 0, REQ_MAX_LINE + 5 * ITEM_MAX_LINE);
        memset(recv_buf[i], 0, RECV_MAX_LINE);

        snprintf(url, ITEM_MAX_LINE, "%s/db%d", url_prefix, i);

        offset = 0;

        ret = snprintf(sql + offset, REQ_MAX_LINE - offset, "insert into tb%d values ", i);
        if (ret <= 0) {
            printf("failed to snprintf for sql(prefix), index: %d\r\n", i);
            goto failed;
        }

        offset += ret;

        while (offset < REQ_MAX_LINE - 128) {
            ret = snprintf(sql + offset, REQ_MAX_LINE - offset, "(%"PRId64", %d, 'test_string_%d') ", start_time + i, i, i);
            if (ret <= 0) {
                printf("failed to snprintf for sql(values), index: %d\r\n", i);
                goto failed;
            }

            offset += ret;
        }

        build_http_request(ip, port, url, sql, send_buf[i], REQ_MAX_LINE + 5 * ITEM_MAX_LINE);

        ctx[i].nlen = strlen(send_buf[i]);
    }

    epfd = epoll_create(REQ_CLI_COUNT);
    if (epfd <= 0) {
        printf("failed to create epoll\r\n");
        goto failed;
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = create_socket(ip, port, &ctx[i]);
        if (ret == -1) {
            printf("failed to create socket, index: %d\r\n", i);
            goto failed;
        }
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        events = EPOLLET | EPOLLIN | EPOLLOUT;
        ret = add_event(epfd, ctx[i].sockfd, events, (void *) &ctx[i]);
        if (ret == -1) {
            printf("failed to add sockfd to epoll, index: %d\r\n", i);
            goto failed;
        }
    }

    count = 0;

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = connect(ctx[i].sockfd, (struct sockaddr *) &ctx[i].serv_addr, sizeof(ctx[i].serv_addr));
        if (ret == -1) {
            if (errno != EINPROGRESS) {
                printf("connect error, index: %d\r\n", ctx[i].index);
                (void) del_event(epfd, ctx[i].sockfd);
                close(ctx[i].sockfd);
                ctx[i].sockfd = -1;
            } else {
                ctx[i].state = connecting;
                count++;
            }

            continue;
        }

        ctx[i].state = connected;
        count++;
    }

    printf("clients: %d\r\n", count);

    while (count > 0) {
        n = epoll_wait(epfd, evs, REQ_CLI_COUNT, 0);
        if (n == -1) {
            if (errno != EINTR) {
                printf("epoll_wait error, reason: %s\r\n", strerror(errno));
                break;
            }
        } else {
            for (i = 0; i < n; i++) {
                if (evs[i].events & EPOLLERR) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("event error, index: %d\r\n", pctx->index);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                } else if (evs[i].events & EPOLLIN) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        /* bound the read by the space left in the buffer, not the full size */
                        nrecv = recv(pctx->sockfd, recv_buf[pctx->index] + pctx->nrecv, RECV_MAX_LINE - pctx->nrecv, 0);
                        if (nrecv == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to recv, index: %d, reason: %s\r\n", pctx->index, strerror(errno));
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        } else if (nrecv == 0) {
                            printf("peer closed connection, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;
                            break;
                        }

                        pctx->nrecv += nrecv;
                        if (pctx->nrecv > 12) {
                            if (pctx->error == false && pctx->success == false) {
                                str = recv_buf[pctx->index] + 9;
                                if (str[0] != '2' || str[1] != '0' || str[2] != '0') {
                                    printf("response error, index: %d, recv: %s\r\n", pctx->index, recv_buf[pctx->index]);
                                    pctx->error = true;
                                } else {
                                    printf("response ok, index: %d\r\n", pctx->index);
                                    pctx->success = true;
                                }
                            }
                        }
                    }
                } else if (evs[i].events & EPOLLOUT) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        nsent = send(pctx->sockfd, send_buf[pctx->index] + pctx->nsent, pctx->nlen - pctx->nsent, 0);
                        if (nsent == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to send, index: %d\r\n", pctx->index);
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        }

                        if (nsent == (int) (pctx->nlen - pctx->nsent)) {
                            printf("request done, request: %s, index: %d\r\n", send_buf[pctx->index], pctx->index);

                            pctx->state = datasent;

                            events = EPOLLET | EPOLLIN;
                            (void) mod_event(epfd, pctx->sockfd, events, (void *)pctx);

                            break;
                        } else {
                            pctx->nsent += nsent;
                        }
                    }
                } else {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("unknown event(%u), index: %d\r\n", evs[i].events, pctx->index);
                    (void) del_event(epfd, pctx->sockfd);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                }
            }
        }
    }

failed:

    if (epfd > 0) {
        close(epfd);
    }

    close_sockets(ctx, REQ_CLI_COUNT);

    return 0;
}
@@ -0,0 +1,432 @@
/* Concurrent nonblocking HTTP clients for the REST endpoint (/rest/sql). */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <inttypes.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/time.h>
#include <errno.h>
#include <signal.h>


#define RECV_MAX_LINE 2048
#define ITEM_MAX_LINE 128
#define REQ_MAX_LINE  4096
#define REQ_CLI_COUNT 100


typedef enum
{
    uninited,
    connecting,
    connected,
    datasent
} conn_stat;


typedef enum
{
    false,
    true
} bool;


typedef struct
{
    int sockfd;
    int index;
    conn_stat state;
    size_t nsent;
    size_t nrecv;
    size_t nlen;
    bool error;
    bool success;
    struct sockaddr_in serv_addr;
} socket_ctx;


int set_nonblocking(int sockfd)
{
    int ret;

    ret = fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    if (ret == -1) {
        printf("failed to fcntl for %d\r\n", sockfd);
        return ret;
    }

    return ret;
}


int create_socket(const char *ip, const uint16_t port, socket_ctx *pctx)
{
    int ret;

    if (ip == NULL || port == 0 || pctx == NULL) {
        printf("invalid parameter\r\n");
        return -1;
    }

    pctx->sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (pctx->sockfd == -1) {
        printf("failed to create socket\r\n");
        return -1;
    }

    bzero(&pctx->serv_addr, sizeof(struct sockaddr_in));

    pctx->serv_addr.sin_family = AF_INET;
    pctx->serv_addr.sin_port = htons(port);

    ret = inet_pton(AF_INET, ip, &pctx->serv_addr.sin_addr);
    if (ret <= 0) {
        printf("inet_pton error, ip: %s\r\n", ip);
        return -1;
    }

    ret = set_nonblocking(pctx->sockfd);
    if (ret == -1) {
        printf("failed to set %d as nonblocking\r\n", pctx->sockfd);
        return -1;
    }

    return pctx->sockfd;
}


void close_sockets(socket_ctx *pctx, int cnt)
{
    int i;

    if (pctx == NULL) {
        return;
    }

    for (i = 0; i < cnt; i++) {
        if (pctx[i].sockfd > 0) {
            close(pctx[i].sockfd);
            pctx[i].sockfd = -1;
        }
    }
}


int proc_pending_error(socket_ctx *ctx)
{
    int ret;
    int err;
    socklen_t len;

    if (ctx == NULL) {
        return 0;
    }

    err = 0;
    len = sizeof(int);

    ret = getsockopt(ctx->sockfd, SOL_SOCKET, SO_ERROR, (void *)&err, &len);
    if (ret == -1) {
        err = errno;
    }

    if (err) {
        printf("failed to connect at index: %d\r\n", ctx->index);

        close(ctx->sockfd);
        ctx->sockfd = -1;

        return -1;
    }

    return 0;
}


void build_http_request(char *ip, uint16_t port, char *url, char *sql, char *req_buf, int len)
{
    char req_line[ITEM_MAX_LINE];
    char req_host[ITEM_MAX_LINE];
    char req_cont_type[ITEM_MAX_LINE];
    char req_cont_len[ITEM_MAX_LINE];
    const char* req_auth = "Authorization: Basic cm9vdDp0YW9zZGF0YQ==\r\n";

    if (ip == NULL || port == 0 ||
        url == NULL || url[0] == '\0' ||
        sql == NULL || sql[0] == '\0' ||
        req_buf == NULL || len <= 0)
    {
        return;
    }

    snprintf(req_line, ITEM_MAX_LINE, "POST %s HTTP/1.1\r\n", url);
    snprintf(req_host, ITEM_MAX_LINE, "HOST: %s:%d\r\n", ip, port);
    snprintf(req_cont_type, ITEM_MAX_LINE, "%s\r\n", "Content-Type: text/plain");
    snprintf(req_cont_len, ITEM_MAX_LINE, "Content-Length: %zu\r\n\r\n", strlen(sql));

    snprintf(req_buf, len, "%s%s%s%s%s%s", req_line, req_host, req_auth, req_cont_type, req_cont_len, sql);
|
||||
}


int add_event(int epfd, int sockfd, uint32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &evs_op);
}


int mod_event(int epfd, int sockfd, uint32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &evs_op);
}


int del_event(int epfd, int sockfd)
{
    struct epoll_event evs_op;

    evs_op.events = 0;
    evs_op.data.ptr = NULL;

    return epoll_ctl(epfd, EPOLL_CTL_DEL, sockfd, &evs_op);
}


int main()
{
    int i;
    int ret, n, nsent, nrecv;
    int epfd;
    uint32_t events;
    char *str;
    socket_ctx *pctx, ctx[REQ_CLI_COUNT];
    char *ip = "127.0.0.1";
    char *url_prefix = "/rest/sql";
    char url[ITEM_MAX_LINE];
    uint16_t port = 6041;
    struct epoll_event evs[REQ_CLI_COUNT];
    char sql[REQ_MAX_LINE];
    char send_buf[REQ_CLI_COUNT][REQ_MAX_LINE + 5 * ITEM_MAX_LINE];
    char recv_buf[REQ_CLI_COUNT][RECV_MAX_LINE];
    int count;

    signal(SIGPIPE, SIG_IGN);

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ctx[i].sockfd = -1;
        ctx[i].index = i;
        ctx[i].state = uninited;
        ctx[i].nsent = 0;
        ctx[i].nrecv = 0;
        ctx[i].error = false;
        ctx[i].success = false;

        memset(url, 0, ITEM_MAX_LINE);
        memset(sql, 0, REQ_MAX_LINE);
        memset(send_buf[i], 0, REQ_MAX_LINE + 5 * ITEM_MAX_LINE);
        memset(recv_buf[i], 0, RECV_MAX_LINE);

        snprintf(url, ITEM_MAX_LINE, "%s/db%d", url_prefix, i);

        snprintf(sql, REQ_MAX_LINE, "select count(*) from tb%d", i);

        build_http_request(ip, port, url, sql, send_buf[i], REQ_MAX_LINE + 5 * ITEM_MAX_LINE);

        ctx[i].nlen = strlen(send_buf[i]);
    }

    epfd = epoll_create(REQ_CLI_COUNT);
    if (epfd <= 0) {
        printf("failed to create epoll\r\n");
        goto failed;
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = create_socket(ip, port, &ctx[i]);
        if (ret == -1) {
            printf("failed to create socket, index: %d\r\n", i);
            goto failed;
        }
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        events = EPOLLET | EPOLLIN | EPOLLOUT;
        ret = add_event(epfd, ctx[i].sockfd, events, (void *) &ctx[i]);
        if (ret == -1) {
            printf("failed to add sockfd to epoll, index: %d\r\n", i);
            goto failed;
        }
    }

    count = 0;

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = connect(ctx[i].sockfd, (struct sockaddr *) &ctx[i].serv_addr, sizeof(ctx[i].serv_addr));
        if (ret == -1) {
            if (errno != EINPROGRESS) {
                printf("connect error, index: %d\r\n", ctx[i].index);
                (void) del_event(epfd, ctx[i].sockfd);
                close(ctx[i].sockfd);
                ctx[i].sockfd = -1;
            } else {
                ctx[i].state = connecting;
                count++;
            }

            continue;
        }

        ctx[i].state = connected;
        count++;
    }

    printf("clients: %d\r\n", count);

    while (count > 0) {
        n = epoll_wait(epfd, evs, REQ_CLI_COUNT, 2);
        if (n == -1) {
            if (errno != EINTR) {
                printf("epoll_wait error, reason: %s\r\n", strerror(errno));
                break;
            }
        } else {
            for (i = 0; i < n; i++) {
                if (evs[i].events & EPOLLERR) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("event error, index: %d\r\n", pctx->index);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                } else if (evs[i].events & EPOLLIN) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        /* leave room for what was already received so the buffer cannot overflow */
                        nrecv = recv(pctx->sockfd, recv_buf[pctx->index] + pctx->nrecv, RECV_MAX_LINE - pctx->nrecv, 0);
                        if (nrecv == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to recv, index: %d, reason: %s\r\n", pctx->index, strerror(errno));
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        } else if (nrecv == 0) {
                            printf("peer closed connection, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;
                            break;
                        }

                        pctx->nrecv += nrecv;
                        if (pctx->nrecv > 12) {
                            if (pctx->error == false && pctx->success == false) {
                                str = recv_buf[pctx->index] + 9;
                                if (str[0] != '2' || str[1] != '0' || str[2] != '0') {
                                    printf("response error, index: %d, recv: %s\r\n", pctx->index, recv_buf[pctx->index]);
                                    pctx->error = true;
                                } else {
                                    printf("response ok, index: %d\r\n", pctx->index);
                                    pctx->success = true;
                                }
                            }
                        }
                    }
                } else if (evs[i].events & EPOLLOUT) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        nsent = send(pctx->sockfd, send_buf[pctx->index] + pctx->nsent, pctx->nlen - pctx->nsent, 0);
                        if (nsent == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to send, index: %d\r\n", pctx->index);
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        }

                        if (nsent == (int) (pctx->nlen - pctx->nsent)) {
                            printf("request done, request: %s, index: %d\r\n", send_buf[pctx->index], pctx->index);

                            pctx->state = datasent;

                            events = EPOLLET | EPOLLIN;
                            (void) mod_event(epfd, pctx->sockfd, events, (void *)pctx);

                            break;
                        } else {
                            pctx->nsent += nsent;
                        }
                    }
                } else {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("unknown event(%u), index: %d\r\n", evs[i].events, pctx->index);
                    (void) del_event(epfd, pctx->sockfd);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                }
            }
        }
    }

failed:

    if (epfd > 0) {
        close(epfd);
    }

    close_sockets(ctx, REQ_CLI_COUNT);

    return 0;
}

@@ -0,0 +1,430 @@
#include <stdio.h>
#include <string.h>
#include <strings.h>   /* bzero */
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <fcntl.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <errno.h>
#include <signal.h>


#define RECV_MAX_LINE 2048
#define ITEM_MAX_LINE 128
#define REQ_MAX_LINE  2048
#define REQ_CLI_COUNT 100


typedef enum
{
    uninited,
    connecting,
    connected,
    datasent
} conn_stat;


typedef enum
{
    false,
    true
} bool;


typedef unsigned short u16_t;
typedef unsigned int   u32_t;


typedef struct
{
    int                sockfd;
    int                index;
    conn_stat          state;
    size_t             nsent;
    size_t             nrecv;
    size_t             nlen;
    bool               error;
    bool               success;
    struct sockaddr_in serv_addr;
} socket_ctx;


int set_nonblocking(int sockfd)
{
    int ret;

    ret = fcntl(sockfd, F_SETFL, fcntl(sockfd, F_GETFL) | O_NONBLOCK);
    if (ret == -1) {
        printf("failed to fcntl for %d\r\n", sockfd);
        return ret;
    }

    return ret;
}


int create_socket(const char *ip, const u16_t port, socket_ctx *pctx)
{
    int ret;

    if (ip == NULL || port == 0 || pctx == NULL) {
        printf("invalid parameter\r\n");
        return -1;
    }

    pctx->sockfd = socket(AF_INET, SOCK_STREAM, 0);
    if (pctx->sockfd == -1) {
        printf("failed to create socket\r\n");
        return -1;
    }

    bzero(&pctx->serv_addr, sizeof(struct sockaddr_in));

    pctx->serv_addr.sin_family = AF_INET;
    pctx->serv_addr.sin_port = htons(port);

    ret = inet_pton(AF_INET, ip, &pctx->serv_addr.sin_addr);
    if (ret <= 0) {
        printf("inet_pton error, ip: %s\r\n", ip);
        return -1;
    }

    ret = set_nonblocking(pctx->sockfd);
    if (ret == -1) {
        printf("failed to set %d as nonblocking\r\n", pctx->sockfd);
        return -1;
    }

    return pctx->sockfd;
}


void close_sockets(socket_ctx *pctx, int cnt)
{
    int i;

    if (pctx == NULL) {
        return;
    }

    for (i = 0; i < cnt; i++) {
        if (pctx[i].sockfd > 0) {
            close(pctx[i].sockfd);
            pctx[i].sockfd = -1;
        }
    }
}


int proc_pending_error(socket_ctx *ctx)
{
    int       ret;
    int       err;
    socklen_t len;

    if (ctx == NULL) {
        return 0;
    }

    err = 0;
    len = sizeof(int);

    ret = getsockopt(ctx->sockfd, SOL_SOCKET, SO_ERROR, (void *)&err, &len);
    if (ret == -1) {
        err = errno;
    }

    if (err) {
        printf("failed to connect at index: %d\r\n", ctx->index);

        close(ctx->sockfd);
        ctx->sockfd = -1;

        return -1;
    }

    return 0;
}


void build_http_request(char *ip, u16_t port, char *url, char *sql, char *req_buf, int len)
{
    char req_line[ITEM_MAX_LINE];
    char req_host[ITEM_MAX_LINE];
    char req_cont_type[ITEM_MAX_LINE];
    char req_cont_len[ITEM_MAX_LINE];
    const char* req_auth = "Authorization: Basic cm9vdDp0YW9zZGF0YQ==\r\n";

    if (ip == NULL || port == 0 ||
        url == NULL || url[0] == '\0' ||
        sql == NULL || sql[0] == '\0' ||
        req_buf == NULL || len <= 0)
    {
        return;
    }

    snprintf(req_line, ITEM_MAX_LINE, "POST %s HTTP/1.1\r\n", url);
    snprintf(req_host, ITEM_MAX_LINE, "Host: %s:%u\r\n", ip, port);
    snprintf(req_cont_type, ITEM_MAX_LINE, "%s\r\n", "Content-Type: text/plain");
    snprintf(req_cont_len, ITEM_MAX_LINE, "Content-Length: %zu\r\n\r\n", strlen(sql));

    snprintf(req_buf, len, "%s%s%s%s%s%s", req_line, req_host, req_auth, req_cont_type, req_cont_len, sql);
}


int add_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_ADD, sockfd, &evs_op);
}


int mod_event(int epfd, int sockfd, u32_t events, void *data)
{
    struct epoll_event evs_op;

    evs_op.data.ptr = data;
    evs_op.events = events;

    return epoll_ctl(epfd, EPOLL_CTL_MOD, sockfd, &evs_op);
}


int del_event(int epfd, int sockfd)
{
    struct epoll_event evs_op;

    evs_op.events = 0;
    evs_op.data.ptr = NULL;

    return epoll_ctl(epfd, EPOLL_CTL_DEL, sockfd, &evs_op);
}


int main()
{
    int i;
    int ret, n, nsent, nrecv;
    int epfd;
    u32_t events;
    char *str;
    socket_ctx *pctx, ctx[REQ_CLI_COUNT];
    char *ip = "127.0.0.1";
    char *url = "/rest/sql";
    u16_t port = 6041;
    struct epoll_event evs[REQ_CLI_COUNT];
    char sql[REQ_MAX_LINE];
    char send_buf[REQ_CLI_COUNT][REQ_MAX_LINE + 5 * ITEM_MAX_LINE];
    char recv_buf[REQ_CLI_COUNT][RECV_MAX_LINE];
    int count;

    signal(SIGPIPE, SIG_IGN);

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ctx[i].sockfd = -1;
        ctx[i].index = i;
        ctx[i].state = uninited;
        ctx[i].nsent = 0;
        ctx[i].nrecv = 0;
        ctx[i].error = false;
        ctx[i].success = false;

        memset(sql, 0, REQ_MAX_LINE);
        memset(send_buf[i], 0, REQ_MAX_LINE + 5 * ITEM_MAX_LINE);
        memset(recv_buf[i], 0, RECV_MAX_LINE);

        snprintf(sql, REQ_MAX_LINE, "use db%d", i);

        build_http_request(ip, port, url, sql, send_buf[i], REQ_MAX_LINE + 5 * ITEM_MAX_LINE);

        ctx[i].nlen = strlen(send_buf[i]);
    }

    epfd = epoll_create(REQ_CLI_COUNT);
    if (epfd <= 0) {
        printf("failed to create epoll\r\n");
        goto failed;
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = create_socket(ip, port, &ctx[i]);
        if (ret == -1) {
            printf("failed to create socket, index: %d\r\n", i);
            goto failed;
        }
    }

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        events = EPOLLET | EPOLLIN | EPOLLOUT;
        ret = add_event(epfd, ctx[i].sockfd, events, (void *) &ctx[i]);
        if (ret == -1) {
            printf("failed to add sockfd to epoll, index: %d\r\n", i);
            goto failed;
        }
    }

    count = 0;

    for (i = 0; i < REQ_CLI_COUNT; i++) {
        ret = connect(ctx[i].sockfd, (struct sockaddr *) &ctx[i].serv_addr, sizeof(ctx[i].serv_addr));
        if (ret == -1) {
            if (errno != EINPROGRESS) {
                printf("connect error, index: %d\r\n", ctx[i].index);
                (void) del_event(epfd, ctx[i].sockfd);
                close(ctx[i].sockfd);
                ctx[i].sockfd = -1;
            } else {
                ctx[i].state = connecting;
                count++;
            }

            continue;
        }

        ctx[i].state = connected;
        count++;
    }

    printf("clients: %d\r\n", count);

    while (count > 0) {
        n = epoll_wait(epfd, evs, REQ_CLI_COUNT, 2);
        if (n == -1) {
            if (errno != EINTR) {
                printf("epoll_wait error, reason: %s\r\n", strerror(errno));
                break;
            }
        } else {
            for (i = 0; i < n; i++) {
                if (evs[i].events & EPOLLERR) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("event error, index: %d\r\n", pctx->index);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                } else if (evs[i].events & EPOLLIN) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        /* leave room for what was already received so the buffer cannot overflow */
                        nrecv = recv(pctx->sockfd, recv_buf[pctx->index] + pctx->nrecv, RECV_MAX_LINE - pctx->nrecv, 0);
                        if (nrecv == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to recv, index: %d, reason: %s\r\n", pctx->index, strerror(errno));
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        } else if (nrecv == 0) {
                            printf("peer closed connection, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;
                            break;
                        }

                        pctx->nrecv += nrecv;
                        if (pctx->nrecv > 12) {
                            if (pctx->error == false && pctx->success == false) {
                                str = recv_buf[pctx->index] + 9;
                                if (str[0] != '2' || str[1] != '0' || str[2] != '0') {
                                    printf("response error, index: %d, recv: %s\r\n", pctx->index, recv_buf[pctx->index]);
                                    pctx->error = true;
                                } else {
                                    printf("response ok, index: %d\r\n", pctx->index);
                                    pctx->success = true;
                                }
                            }
                        }
                    }
                } else if (evs[i].events & EPOLLOUT) {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    if (pctx->state == connecting) {
                        ret = proc_pending_error(pctx);
                        if (ret == 0) {
                            printf("client connected, index: %d\r\n", pctx->index);
                            pctx->state = connected;
                        } else {
                            printf("client connect failed, index: %d\r\n", pctx->index);
                            (void) del_event(epfd, pctx->sockfd);
                            close(pctx->sockfd);
                            pctx->sockfd = -1;
                            count--;

                            continue;
                        }
                    }

                    for ( ;; ) {
                        nsent = send(pctx->sockfd, send_buf[pctx->index] + pctx->nsent, pctx->nlen - pctx->nsent, 0);
                        if (nsent == -1) {
                            if (errno != EAGAIN && errno != EINTR) {
                                printf("failed to send, index: %d\r\n", pctx->index);
                                (void) del_event(epfd, pctx->sockfd);
                                close(pctx->sockfd);
                                pctx->sockfd = -1;
                                count--;
                            }

                            break;
                        }

                        if (nsent == (int) (pctx->nlen - pctx->nsent)) {
                            printf("request done, request: %s, index: %d\r\n", send_buf[pctx->index], pctx->index);

                            pctx->state = datasent;

                            events = EPOLLET | EPOLLIN;
                            (void) mod_event(epfd, pctx->sockfd, events, (void *)pctx);

                            break;
                        } else {
                            pctx->nsent += nsent;
                        }
                    }
                } else {
                    pctx = (socket_ctx *) evs[i].data.ptr;
                    printf("unknown event(%u), index: %d\r\n", evs[i].events, pctx->index);
                    (void) del_event(epfd, pctx->sockfd);
                    close(pctx->sockfd);
                    pctx->sockfd = -1;
                    count--;
                }
            }
        }
    }

failed:

    if (epfd > 0) {
        close(epfd);
    }

    close_sockets(ctx, REQ_CLI_COUNT);

    return 0;
}

@@ -102,6 +102,20 @@ class TDTestCase:
                print("check2: i=%d colIdx=%d" % (i, colIdx))
                tdSql.checkData(0, i, self.rowNum * (colIdx - i + 3))

    def alter_table_255_times(self):  # add case for TD-6207
        for i in range(255):
            tdLog.info("alter table st add column cb%d int" % i)
            tdSql.execute("alter table st add column cb%d int" % i)
        tdSql.execute("insert into t0 (ts,c1) values(now,1)")
        tdSql.execute("reset query cache")
        tdSql.query("select * from st")
        tdSql.execute("create table mt(ts timestamp, i int)")
        tdSql.execute("insert into mt values(now,11)")
        tdSql.query("select * from mt")
        tdDnodes.stop(1)
        tdDnodes.start(1)
        tdSql.query("describe db.st")

    def run(self):
        # Setup params
        db = "db"

@@ -131,12 +145,14 @@ class TDTestCase:
            tdSql.checkData(0, i, self.rowNum * (size - i))


        tdSql.execute("create table st(ts timestamp, c1 int) tags(t1 float)")
        tdSql.execute("create table t0 using st tags(null)")
        tdSql.execute("create table st(ts timestamp, c1 int) tags(t1 float,t2 int,t3 double)")
        tdSql.execute("create table t0 using st tags(null,1,2.3)")
        tdSql.execute("alter table t0 set tag t1=2.1")

        tdSql.query("show tables")
        tdSql.checkRows(2)
        self.alter_table_255_times()


    def stop(self):
        tdSql.close()
@@ -175,12 +175,62 @@ class ConcurrentInquiry:
    def con_group(self,tlist,col_list,tag_list):
        rand_tag = random.randint(0,5)
        rand_col = random.randint(0,1)
        return 'group by '+','.join(random.sample(col_list,rand_col) + random.sample(tag_list,rand_tag))

        if len(tag_list):
            return 'group by '+','.join(random.sample(col_list,rand_col) + random.sample(tag_list,rand_tag))
        else:
            return 'group by '+','.join(random.sample(col_list,rand_col))

    def con_order(self,tlist,col_list,tag_list):
        return 'order by '+random.choice(tlist)

    def gen_query_sql(self): # generate a query statement
    def gen_subquery_sql(self):
        subsql ,col_num = self.gen_query_sql(1)
        if col_num == 0:
            return 0
        col_list=[]
        tag_list=[]
        for i in range(col_num):
            col_list.append("taosd%d"%i)

        tlist=col_list+['abc'] # add the nonexistent field 'abc' to see whether it triggers a new bug
        con_rand=random.randint(0,len(condition_list))
        func_rand=random.randint(0,len(func_list))
        col_rand=random.randint(0,len(col_list))
        t_rand=random.randint(0,len(tlist))
        sql='select ' #select
        random.shuffle(col_list)
        random.shuffle(func_list)
        sel_col_list=[]
        col_rand=random.randint(0,len(col_list))
        loop = 0
        for i,j in zip(col_list[0:col_rand],func_list): # pick a function for each queried column
            alias = ' as '+ 'sub%d ' % loop
            loop += 1
            pick_func = ''
            if j == 'leastsquares':
                pick_func=j+'('+i+',1,1)'
            elif j == 'top' or j == 'bottom' or j == 'percentile' or j == 'apercentile':
                pick_func=j+'('+i+',1)'
            else:
                pick_func=j+'('+i+')'
            if bool(random.getrandbits(1)) :
                pick_func+=alias
            sel_col_list.append(pick_func)
        if col_rand == 0:
            sql = sql + '*'
        else:
            sql=sql+','.join(sel_col_list) #select col & func
        sql = sql + ' from ('+ subsql +') '
        con_func=[self.con_where,self.con_interval,self.con_limit,self.con_group,self.con_order,self.con_fill]
        sel_con=random.sample(con_func,random.randint(0,len(con_func)))
        sel_con_list=[]
        for i in sel_con:
            sel_con_list.append(i(tlist,col_list,tag_list)) # fetch the matching condition function
        sql+=' '.join(sel_con_list) # condition
        #print(sql)
        return sql

    def gen_query_sql(self,subquery=0): # generate a query statement
        tbi=random.randint(0,len(self.subtb_list)+len(self.stb_list)) # randomly decide which table to query
        tbname=''
        col_list=[]

@@ -218,10 +268,10 @@ class ConcurrentInquiry:
                pick_func=j+'('+i+',1)'
            else:
                pick_func=j+'('+i+')'
            if bool(random.getrandbits(1)):
            if bool(random.getrandbits(1)) | subquery :
                pick_func+=alias
            sel_col_list.append(pick_func)
        if col_rand == 0:
        if col_rand == 0 & subquery :
            sql = sql + '*'
        else:
            sql=sql+','.join(sel_col_list) #select col & func

@@ -238,7 +288,7 @@ class ConcurrentInquiry:
            sel_con_list.append(i(tlist,col_list,tag_list)) # fetch the matching condition function
        sql+=' '.join(sel_con_list) # condition
        #print(sql)
        return sql
        return (sql,loop)

    def gen_query_join(self): # generate a join query statement
        tbname = []

@@ -429,9 +479,12 @@ class ConcurrentInquiry:

        try:
            if self.random_pick():
                sql=self.gen_query_sql()
                if self.random_pick():
                    sql,temp=self.gen_query_sql()
                else:
                    sql = self.gen_subquery_sql()
            else:
                sql=self.gen_query_join()
                sql = self.gen_query_join()
            print("sql is ",sql)
            fo.write(sql+'\n')
            start = time.time()

@@ -496,9 +549,12 @@ class ConcurrentInquiry:
        while loop:
            try:
                if self.random_pick():
                    sql=self.gen_query_sql()
                    if self.random_pick():
                        sql,temp=self.gen_query_sql()
                    else:
                        sql = self.gen_subquery_sql()
                else:
                    sql=self.gen_query_join()
                    sql = self.gen_query_join()
                print("sql is ",sql)
                fo.write(sql+'\n')
                start = time.time()
@@ -104,6 +104,21 @@ class TDTestCase:
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 1)
        tdSql.checkData(1, 1, 2)

        tdSql.query("select ts,bottom(col1, 2),ts from test1")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
        tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")

        tdSql.query("select ts,bottom(col1, 2),ts from test group by tbname")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.001")
        tdSql.checkData(1, 3, "2018-09-17 09:00:00.001")

        #TD-2457 bottom + interval + order by
        tdSql.error('select top(col2,1) from test interval(1y) order by col2;')
@@ -54,6 +54,29 @@ class TDTestCase:
        tdSql.query("select derivative(col, 10s, 0) from stb group by tbname")
        tdSql.checkRows(10)

        tdSql.query("select ts,derivative(col, 10s, 1),ts from stb group by tbname")
        tdSql.checkRows(4)
        tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
        tdSql.checkData(3, 0, "2018-09-17 09:01:20.000")
        tdSql.checkData(3, 1, "2018-09-17 09:01:20.000")
        tdSql.checkData(3, 3, "2018-09-17 09:01:20.000")

        tdSql.query("select ts,derivative(col, 10s, 1),ts from tb1")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:10.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:10.000")
        tdSql.checkData(1, 0, "2018-09-17 09:00:20.009")
        tdSql.checkData(1, 1, "2018-09-17 09:00:20.009")
        tdSql.checkData(1, 3, "2018-09-17 09:00:20.009")

        tdSql.query("select ts from(select ts,derivative(col, 10s, 0) from stb group by tbname)")
        tdSql.checkData(0, 0, "2018-09-17 09:00:10.000")

        tdSql.error("select derivative(col, 10s, 0) from tb1 group by tbname")

        tdSql.query("select derivative(col, 10s, 1) from tb1")
@@ -95,6 +95,24 @@ class TDTestCase:
        tdSql.error("select diff(col14) from test")

        tdSql.query("select ts,diff(col1),ts from test1")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")

        tdSql.query("select ts,diff(col1),ts from test group by tbname")
        tdSql.checkRows(10)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.000")
        tdSql.checkData(0, 3, "2018-09-17 09:00:00.000")
        tdSql.checkData(9, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 1, "2018-09-17 09:00:00.009")
        tdSql.checkData(9, 3, "2018-09-17 09:00:00.009")

        tdSql.query("select diff(col1) from test1")
        tdSql.checkRows(10)
@@ -117,7 +117,22 @@ class TDTestCase:
        tdSql.checkRows(2)
        tdSql.checkData(0, 1, 8.1)
        tdSql.checkData(1, 1, 9.1)

        tdSql.query("select ts,top(col1, 2),ts from test1")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")

        tdSql.query("select ts,top(col1, 2),ts from test group by tbname")
        tdSql.checkRows(2)
        tdSql.checkData(0, 0, "2018-09-17 09:00:00.008")
        tdSql.checkData(0, 1, "2018-09-17 09:00:00.008")
        tdSql.checkData(1, 0, "2018-09-17 09:00:00.009")
        tdSql.checkData(1, 3, "2018-09-17 09:00:00.009")

        #TD-2563 top + super_table + interval
        tdSql.execute("create table meters(ts timestamp, c int) tags (d int)")
        tdSql.execute("create table t1 using meters tags (1)")
@ -28,7 +28,7 @@ class insertFromCSVPerformace:
|
|||
self.tbName = tbName
|
||||
self.branchName = branchName
|
||||
self.type = buildType
|
||||
self.ts = 1500074556514
|
||||
self.ts = 1500000000000
|
||||
self.host = "127.0.0.1"
|
||||
self.user = "root"
|
||||
self.password = "taosdata"
|
||||
|
@@ -46,13 +46,20 @@ class insertFromCSVPerformace:
                                 config = self.config)
 
     def writeCSV(self):
-        with open('test3.csv','w', encoding='utf-8', newline='') as csvFile:
+        tsset = set()
+        rows = 0
+        with open('test4.csv','w', encoding='utf-8', newline='') as csvFile:
             writer = csv.writer(csvFile, dialect='excel')
-            for i in range(1000000):
-                newTimestamp = self.ts + random.randint(10000000, 10000000000) + random.randint(1000, 10000000) + random.randint(1, 1000)
-                d = datetime.datetime.fromtimestamp(newTimestamp / 1000)
-                dt = str(d.strftime("%Y-%m-%d %H:%M:%S.%f"))
-                writer.writerow(["'%s'" % dt, random.randint(1, 100), random.uniform(1, 100), random.randint(1, 100), random.randint(1, 100)])
+            while True:
+                newTimestamp = self.ts + random.randint(1, 10) * 10000000000 + random.randint(1, 10) * 1000000000 + random.randint(1, 10) * 100000000 + random.randint(1, 10) * 10000000 + random.randint(1, 10) * 1000000 + random.randint(1, 10) * 100000 + random.randint(1, 10) * 10000 + random.randint(1, 10) * 1000 + random.randint(1, 10) * 100 + random.randint(1, 10) * 10 + random.randint(1, 10)
+                if newTimestamp not in tsset:
+                    tsset.add(newTimestamp)
+                    d = datetime.datetime.fromtimestamp(newTimestamp / 1000)
+                    dt = str(d.strftime("%Y-%m-%d %H:%M:%S.%f"))
+                    writer.writerow(["'%s'" % dt, random.randint(1, 100), random.uniform(1, 100), random.randint(1, 100), random.randint(1, 100)])
+                    rows += 1
+                    if rows == 2000000:
+                        break
 
     def removCSVHeader(self):
         data = pd.read_csv("ordered.csv")
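The rewritten `writeCSV` above deduplicates the randomly generated timestamps through a set before writing each row, so the resulting CSV never contains a duplicate primary key. A minimal standalone sketch of that idea (function name and row count here are illustrative, not part of the test):

```python
import random

def gen_unique_timestamps(base_ts, count):
    """Draw random millisecond offsets, keeping only first-seen values."""
    seen = set()
    while len(seen) < count:
        # mirrors the per-digit randint(1, 10) * 10**k construction above
        ts = base_ts + sum(random.randint(1, 10) * 10 ** p for p in range(11))
        seen.add(ts)  # a duplicate draw is silently absorbed by the set
    return seen

stamps = gen_unique_timestamps(1500000000000, 100)
print(len(stamps))  # → 100: the set guarantees uniqueness
```

The while-loop terminates with probability 1 because each iteration either grows the set or retries, just as the committed version counts `rows` only on first insertion.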
@@ -71,7 +78,9 @@ class insertFromCSVPerformace:
        cursor.execute("create table if not exists t1(ts timestamp, c1 int, c2 float, c3 int, c4 int)")
        startTime = time.time()
        cursor.execute("insert into t1 file 'outoforder.csv'")
        totalTime += time.time() - startTime
        totalTime += time.time() - startTime
        time.sleep(1)

        out_of_order_time = (float) (totalTime / 10)
        print("Out of Order - Insert time: %f" % out_of_order_time)
@@ -81,7 +90,8 @@ class insertFromCSVPerformace:
        cursor.execute("create table if not exists t2(ts timestamp, c1 int, c2 float, c3 int, c4 int)")
        startTime = time.time()
        cursor.execute("insert into t2 file 'ordered.csv'")
        totalTime += time.time() - startTime
        totalTime += time.time() - startTime
        time.sleep(1)

        in_order_time = (float) (totalTime / 10)
        print("In order - Insert time: %f" % in_order_time)
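Both timing hunks above follow the same pattern: accumulate wall-clock time across ten runs, then divide by 10 to report a mean. A minimal sketch of that averaging pattern (using `perf_counter` instead of `time.time` as a suggestion, since it is monotonic and higher-resolution):

```python
import time

def average_runtime(fn, runs=10):
    """Run fn `runs` times and return the mean wall-clock duration in seconds."""
    total = 0.0
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        total += time.perf_counter() - start  # accumulate, like totalTime above
    return total / runs

avg = average_runtime(lambda: sum(range(10000)), runs=5)
print("mean seconds: %f" % avg)
```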
@@ -0,0 +1,97 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice

class TwoClients:
    def initConnection(self):
        self.host = "chr03"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/etc/taos/"
        self.port = 6030
        self.rowNum = 10
        self.ts = 1537146000000

    def run(self):

        # new taos client
        conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config)
        print(conn1)
        cur1 = conn1.cursor()
        tdSql.init(cur1, True)

        tdSql.execute("drop database if exists db3")

        # insert data with taosc
        for i in range(10):
            os.system("taosdemo -f manualTest/TD-5114/insertDataDb3Replica2.json -y ")
        # check that the data is correct
        tdSql.execute("show databases")
        tdSql.execute("use db3")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 20000)
        tdSql.query("select count (*) from stb0")
        tdSql.checkData(0, 0, 4000000)

        # insert data with the Python connector; uncomment the block below to use this case.

        # for x in range(10):
        #     dataType = ["tinyint", "smallint", "int", "bigint", "float", "double", "bool", " binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
        #     tdSql.execute("drop database if exists db3")
        #     tdSql.execute("create database db3 keep 3650 replica 2 ")
        #     tdSql.execute("use db3")
        #     tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
        #         col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(3000), tag1 int)''')
        #     rowNum2 = 988
        #     for i in range(rowNum2):
        #         tdSql.execute("alter table test add column col%d %s ;" % (i + 14, choice(dataType)))
        #     rowNum3 = 988
        #     for i in range(rowNum3):
        #         tdSql.execute("alter table test drop column col%d ;" % (i + 14))
        #     self.rowNum = 50
        #     self.rowNum2 = 2000
        #     self.ts = 1537146000000
        #     for j in range(self.rowNum2):
        #         tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j, j))
        #         for i in range(self.rowNum):
        #             tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
        #                           % (j, self.ts + i * 1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        #     # check that the data is correct
        #     tdSql.execute("show databases")
        #     tdSql.execute("use db3")
        #     tdSql.query("select count (tbname) from test")
        #     tdSql.checkData(0, 0, 200)
        #     tdSql.query("select count (*) from test")
        #     tdSql.checkData(0, 0, 200000)

        # delete useless files
        testcaseFilename = os.path.split(__file__)[-1]
        os.system("rm -rf ./insert_res.txt")
        os.system("rm -rf manualTest/TD-5114/%s.sql" % testcaseFilename)

clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()
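The `checkData` expectations above come from the shape of the taosdemo run: the number of child tables times the rows inserted per child table. A tiny helper for deriving such expectations from a config dict (the values below are illustrative, chosen to reproduce the 4,000,000 figure, not read from the committed JSON):

```python
def expected_rows(config):
    """Total rows one super table should hold after a single taosdemo insert run."""
    st = config["databases"][0]["super_tables"][0]
    return st["childtable_count"] * st["insert_rows"]

# hypothetical config fragment mirroring the taosdemo key names
cfg = {"databases": [{"super_tables": [{"childtable_count": 20000, "insert_rows": 200}]}]}
print(expected_rows(cfg))  # → 4000000
```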
@@ -0,0 +1,61 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 0,
  "num_of_records_per_req": 3000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db3",
      "drop": "yes",
      "replica": 2,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 20000,
      "childtable_prefix": "stb0_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 1000,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 2000,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
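Configs like the one above are easy to break when hand-edited between test runs; a small sketch that loads a taosdemo-style JSON string and checks a few fields the tests depend on (the key names mirror the file above, but the choice of which keys to validate is my own):

```python
import json

REQUIRED_DB_KEYS = {"name", "drop", "replica"}

def check_insert_config(text):
    """Parse a taosdemo insert config and assert a few structural invariants."""
    cfg = json.loads(text)
    assert cfg["filetype"] == "insert", "not an insert config"
    for db in cfg["databases"]:
        missing = REQUIRED_DB_KEYS - db["dbinfo"].keys()
        assert not missing, "dbinfo missing keys: %s" % missing
    return cfg

sample = '{"filetype": "insert", "databases": [{"dbinfo": {"name": "db3", "drop": "yes", "replica": 2}}]}'
cfg = check_insert_config(sample)
print(cfg["databases"][0]["dbinfo"]["replica"])  # → 2
```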
@@ -0,0 +1,275 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

from sys import version
from fabric import Connection
import random
import time
import datetime
import logging
import subprocess
import os
import sys

class Node:
    def __init__(self, index, username, hostIP, password, version):
        self.index = index
        self.username = username
        self.hostIP = hostIP
        # self.hostName = hostName
        # self.homeDir = homeDir
        self.version = version
        self.verName = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % self.version
        self.installPath = "TDengine-enterprise-server-%s" % self.version
        # self.corePath = '/coredump'
        self.conn = Connection("{}@{}".format(username, hostIP), connect_kwargs={"password": "{}".format(password)})

    def buildTaosd(self):
        try:
            print(self.conn)
            # self.conn.run('echo "1234" > /home/chr/installtest/test.log')
            self.conn.run("cd /home/chr/installtest/ && tar -xvf %s " % self.verName)
            self.conn.run("cd /home/chr/installtest/%s && ./install.sh " % self.installPath)
        except Exception as e:
            print("Build Taosd error for node %d " % self.index)
            logging.exception(e)

    def rebuildTaosd(self):
        try:
            print(self.conn)
            # self.conn.run('echo "1234" > /home/chr/installtest/test.log')
            self.conn.run("cd /home/chr/installtest/%s && ./install.sh " % self.installPath)
        except Exception as e:
            print("Build Taosd error for node %d " % self.index)
            logging.exception(e)

    def startTaosd(self):
        try:
            self.conn.run("sudo systemctl start taosd")
        except Exception as e:
            print("Start Taosd error for node %d " % self.index)
            logging.exception(e)

    def restartTarbi(self):
        try:
            self.conn.run("sudo systemctl restart tarbitratord ")
        except Exception as e:
            print("Restart tarbitratord error for node %d " % self.index)
            logging.exception(e)

    def clearData(self):
        timeNow = datetime.datetime.now()
        # timeYes = datetime.datetime.now() + datetime.timedelta(days=-1)
        timStr = timeNow.strftime('%Y%m%d%H%M%S')
        try:
            # self.conn.run("mv /var/lib/taos/ /var/lib/taos%s " % timStr)
            self.conn.run("rm -rf /home/chr/data/taos*")
        except Exception as e:
            print("rm -rf /var/lib/taos error %d " % self.index)
            logging.exception(e)

    def stopTaosd(self):
        try:
            self.conn.run("sudo systemctl stop taosd")
        except Exception as e:
            print("Stop Taosd error for node %d " % self.index)
            logging.exception(e)

    def restartTaosd(self):
        try:
            self.conn.run("sudo systemctl restart taosd")
        except Exception as e:
            print("Restart Taosd error for node %d " % self.index)
            logging.exception(e)

class oneNode:

    def FirestStartNode(self, id, username, IP, passwd, version):
        # get the install package
        verName = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % version
        # installPath = "TDengine-enterprise-server-%s" % self.version
        node131 = Node(131, 'ubuntu', '192.168.1.131', 'tbase125!', '2.0.20.0')
        node131.conn.run('sshpass -p tbase125! scp /nas/TDengine/v%s/enterprise/%s root@192.168.1.%d:/home/chr/installtest/' % (version, verName, id))
        node131.conn.close()
        # install TDengine on 192.168.1.103/104/141
        try:
            node = Node(id, username, IP, passwd, version)
            node.conn.run('echo "start taosd"')
            node.buildTaosd()
            # clear the data path, if data needs to be cleared
            node.clearData()
            node.startTaosd()
            if id == 103:
                node.restartTarbi()
            print("start taosd ver:%s node:%d successfully " % (version, id))
            node.conn.close()

            # query_pid = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
            # assert query_pid == 1 , "node %d: start taosd failed " % id
        except Exception as e:
            print("Start Taosd error for node %d " % id)
            logging.exception(e)

    def startNode(self, id, username, IP, passwd, version):
        # start TDengine
        try:
            node = Node(id, username, IP, passwd, version)
            node.conn.run('echo "restart taosd"')
            # clear the data path, if data needs to be cleared
            node.clearData()
            node.restartTaosd()
            time.sleep(5)
            if id == 103:
                node.restartTarbi()
            print("start taosd ver:%s node:%d successfully " % (version, id))
            node.conn.close()

            # query_pid = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
            # assert query_pid == 1 , "node %d: start taosd failed " % id
        except Exception as e:
            print("Start Taosd error for node %d " % id)
            logging.exception(e)

    def firstUpgradeNode(self, id, username, IP, passwd, version):
        # get the install package
        verName = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % version
        # installPath = "TDengine-enterprise-server-%s" % self.version
        node131 = Node(131, 'ubuntu', '192.168.1.131', 'tbase125!', '2.0.20.0')
        node131.conn.run('echo "upgrade cluster"')
        node131.conn.run('sshpass -p tbase125! scp /nas/TDengine/v%s/enterprise/%s root@192.168.1.%d:/home/chr/installtest/' % (version, verName, id))
        node131.conn.close()
        # upgrade TDengine on 192.168.1.103/104/141
        try:
            node = Node(id, username, IP, passwd, version)
            node.conn.run('echo "start taosd"')
            node.conn.run('echo "1234" > /home/chr/test.log')
            node.buildTaosd()
            time.sleep(5)
            node.startTaosd()
            if id == 103:
                node.restartTarbi()
            print("start taosd ver:%s node:%d successfully " % (version, id))
            node.conn.close()

            # query_pid = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
            # assert query_pid == 1 , "node %d: start taosd failed " % id
        except Exception as e:
            print("Upgrade Taosd error for node %d " % id)
            logging.exception(e)

    def upgradeNode(self, id, username, IP, passwd, version):

        # roll the cluster back on 192.168.1.103/104/141
        try:
            node = Node(id, username, IP, passwd, version)
            node.conn.run('echo "rollback taos"')
            node.rebuildTaosd()
            time.sleep(5)
            node.startTaosd()
            if id == 103:
                node.restartTarbi()
            print("start taosd ver:%s node:%d successfully " % (version, id))
            node.conn.close()
        except Exception as e:
            print("Upgrade Taosd error for node %d " % id)
            logging.exception(e)


# How to use: cd TDinternal/community/test/pytest && python3 manualTest/rollingUpgrade.py
# While data is being inserted, we can start "python3 manualTest/rollingUpagrade.py".
# Example: oneNode().FirestStartNode(103, 'root', '192.168.1.103', 'tbase125!', '2.0.20.0')


# node103=oneNode().FirestStartNode(103,'root','192.168.1.103','tbase125!','2.0.20.0')
# node104=oneNode().FirestStartNode(104,'root','192.168.1.104','tbase125!','2.0.20.0')
# node141=oneNode().FirestStartNode(141,'root','192.168.1.141','tbase125!','2.0.20.0')

# node103=oneNode().startNode(103,'root','192.168.1.103','tbase125!','2.0.20.0')
# time.sleep(30)
# node141=oneNode().startNode(141,'root','192.168.1.141','tbase125!','2.0.20.0')
# time.sleep(30)
# node104=oneNode().startNode(104,'root','192.168.1.104','tbase125!','2.0.20.0')
# time.sleep(30)

# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.5')
# time.sleep(30)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.5')
# time.sleep(30)
# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.5')
# time.sleep(30)

# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.10')
# time.sleep(30)
# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.10')
# time.sleep(30)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.10')
# time.sleep(30)

# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.12')
# time.sleep(30)
# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.12')
# time.sleep(30)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.12')
# time.sleep(30)


# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.0')
# time.sleep(120)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.0')
# time.sleep(180)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.0')
# time.sleep(240)

# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.5')
# time.sleep(120)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.5')
# time.sleep(120)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.5')
# time.sleep(180)

# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.10')
# time.sleep(120)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.10')
# time.sleep(120)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.10')
# time.sleep(180)

# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.12')
# time.sleep(180)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.12')
# time.sleep(180)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.12')


# node141=oneNode().firstUpgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.9')
# time.sleep(5)
# node103=oneNode().firstUpgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.9')
# time.sleep(5)
# node104=oneNode().firstUpgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.9')
# time.sleep(30)

# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.10')
# time.sleep(12)
# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.10')
# time.sleep(12)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.10')
# time.sleep(180)

# node103=oneNode().upgradeNode(103,'root','192.168.1.103','tbase125!','2.0.20.12')
# time.sleep(120)
# node141=oneNode().upgradeNode(141,'root','192.168.1.141','tbase125!','2.0.20.12')
# time.sleep(120)
# node104=oneNode().upgradeNode(104,'root','192.168.1.104','tbase125!','2.0.20.12')
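The `Node` class above derives both the tarball name and the install directory from the version string, and every upgrade path depends on that naming scheme staying consistent. It can be sketched without any remote connection (pure string formatting, matching the patterns in the script):

```python
def package_names(version):
    """Return (tarball, install_dir) for an enterprise server release."""
    ver_name = "TDengine-enterprise-server-%s-Linux-x64.tar.gz" % version
    install_path = "TDengine-enterprise-server-%s" % version
    return ver_name, install_path

tarball, path = package_names("2.0.20.0")
print(tarball)  # → TDengine-enterprise-server-2.0.20.0-Linux-x64.tar.gz
print(path)     # → TDengine-enterprise-server-2.0.20.0
```

Keeping the scheme in one helper (rather than re-deriving `verName` inside both `FirestStartNode` and `firstUpgradeNode`, as the script does) would be one way to avoid the two copies drifting apart.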
@@ -0,0 +1,83 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import taos
from util.log import *
from util.cases import *
from util.sql import *


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor())

        self.ts = 1537146000000

    def run(self):
        tdSql.prepare()

        print("======= Verify filter for bool, nchar and binary type =========")
        tdLog.debug(
            "create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")
        tdSql.execute(
            "create table st(ts timestamp, tbcol1 bool, tbcol2 binary(10), tbcol3 nchar(20), tbcol4 tinyint, tbcol5 smallint, tbcol6 int, tbcol7 bigint, tbcol8 float, tbcol9 double) tags(tagcol1 bool, tagcol2 binary(10), tagcol3 nchar(10))")

        tdSql.execute("create table st1 using st tags(true, 'table1', '水表')")
        for i in range(1, 6):
            tdSql.execute(
                "insert into st1 values(%d, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d, %f, %f)" %
                (self.ts + i, i % 2, i, i,
                 i, i, i, i, 1.0, 1.0))

        # ========= Data type keywords cannot be used in filters =========
        # timestamp
        tdSql.error("select * from st where timestamp = 1629417600")

        # bool
        tdSql.error("select * from st where bool = false")

        # binary
        tdSql.error("select * from st where binary = 'taosdata'")

        # nchar
        tdSql.error("select * from st where nchar = '涛思数据'")

        # tinyint
        tdSql.error("select * from st where tinyint = 127")

        # smallint
        tdSql.error("select * from st where smallint = 32767")

        # int
        tdSql.error("select * from st where INTEGER = 2147483647")
        tdSql.error("select * from st where int = 2147483647")

        # bigint
        tdSql.error("select * from st where bigint = 2147483647")

        # float
        tdSql.error("select * from st where float = 3.4E38")

        # double
        tdSql.error("select * from st where double = 1.7E308")

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
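Every `tdSql.error` call above probes the same rule: a bare data-type keyword cannot stand in for a column name in a filter. A minimal client-side guard for the same idea (the keyword set below is only the subset exercised by this test, not TDengine's full reserved-word list):

```python
# only the type keywords this test exercises -- an illustrative subset
TYPE_KEYWORDS = {
    "timestamp", "bool", "binary", "nchar", "tinyint",
    "smallint", "int", "integer", "bigint", "float", "double",
}

def is_type_keyword(identifier):
    """True if the identifier collides with a SQL data-type keyword."""
    return identifier.lower() in TYPE_KEYWORDS

print(is_type_keyword("INTEGER"))  # → True, matches case-insensitively
print(is_type_keyword("tbcol6"))   # → False, a real column name passes
```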
@@ -29,7 +29,6 @@ class TDTestCase:
        self.tables = 10
        self.rowsPerTable = 100


    def run(self):
        # tdSql.execute("drop database db ")
        tdSql.prepare()
@@ -0,0 +1,87 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 10,
  "num_of_records_per_req": 1000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db1",
      "drop": "yes",
      "replica": 1,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 1000,
      "childtable_prefix": "stb00_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 1000,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 1000,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    },
    {
      "name": "stb1",
      "child_table_exists": "no",
      "childtable_count": 10000,
      "childtable_prefix": "stb01_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 1000,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 200,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
@@ -0,0 +1,87 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 10,
  "num_of_records_per_req": 1000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db1",
      "drop": "yes",
      "replica": 2,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 1000,
      "childtable_prefix": "stb00_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 100,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 100,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    },
    {
      "name": "stb1",
      "child_table_exists": "no",
      "childtable_count": 1000,
      "childtable_prefix": "stb01_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 10,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 200,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
@@ -0,0 +1,86 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 0,
  "num_of_records_per_req": 3000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db2",
      "drop": "yes",
      "replica": 1,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 200000,
      "childtable_prefix": "stb0_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 1000,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 0,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    },
    {
      "name": "stb1",
      "child_table_exists": "no",
      "childtable_count": 2,
      "childtable_prefix": "stb1_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 1000,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 5,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
@@ -0,0 +1,86 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 0,
  "num_of_records_per_req": 3000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db2",
      "drop": "no",
      "replica": 1,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 1,
      "childtable_prefix": "stb0_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 100,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 0,
      "childtable_limit": -1,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    },
    {
      "name": "stb1",
      "child_table_exists": "yes",
      "childtable_count": 1,
      "childtable_prefix": "stb01_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 10,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 10,
      "childtable_limit": -1,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-11-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
@@ -0,0 +1,86 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 0,
  "num_of_records_per_req": 3000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db2",
      "drop": "no",
      "replica": 2,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 1,
      "childtable_prefix": "stb0_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 100,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 0,
      "childtable_limit": -1,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    },
    {
      "name": "stb1",
      "child_table_exists": "yes",
      "childtable_count": 1,
      "childtable_prefix": "stb01_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 10,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 10,
      "childtable_limit": -1,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-11-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
@@ -0,0 +1,86 @@
{
  "filetype": "insert",
  "cfgdir": "/etc/taos",
  "host": "127.0.0.1",
  "port": 6030,
  "user": "root",
  "password": "taosdata",
  "thread_count": 4,
  "thread_count_create_tbl": 4,
  "result_file": "./insert_res.txt",
  "confirm_parameter_prompt": "no",
  "insert_interval": 0,
  "interlace_rows": 0,
  "num_of_records_per_req": 3000,
  "max_sql_len": 1024000,
  "databases": [{
    "dbinfo": {
      "name": "db2",
      "drop": "yes",
      "replica": 2,
      "days": 10,
      "cache": 50,
      "blocks": 8,
      "precision": "ms",
      "keep": 365,
      "minRows": 100,
      "maxRows": 4096,
      "comp": 2,
      "walLevel": 1,
      "cachelast": 0,
      "quorum": 1,
      "fsync": 3000,
      "update": 0
    },
    "super_tables": [{
      "name": "stb0",
      "child_table_exists": "no",
      "childtable_count": 2000,
      "childtable_prefix": "stb0_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 100,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 2000,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    },
    {
      "name": "stb1",
      "child_table_exists": "no",
      "childtable_count": 2,
      "childtable_prefix": "stb1_",
      "auto_create_table": "no",
      "batch_create_tbl_num": 10,
      "data_source": "rand",
      "insert_mode": "taosc",
      "insert_rows": 5,
      "childtable_limit": 0,
      "childtable_offset": 0,
      "interlace_rows": 0,
      "insert_interval": 0,
      "max_sql_len": 1024000,
      "disorder_ratio": 0,
      "disorder_range": 1000,
      "timestamp_step": 1,
      "start_timestamp": "2020-10-01 00:00:00.000",
      "sample_format": "csv",
      "sample_file": "./sample.csv",
      "tags_file": "",
      "columns": [{"type": "INT"}, {"type": "DOUBLE", "count":1}, {"type": "BINARY", "len": 16, "count":1}, {"type": "BINARY", "len": 32, "count":1}],
      "tags": [{"type": "TINYINT", "count":2}, {"type": "BINARY", "len": 16, "count":1}]
    }]
  }]
}
@@ -0,0 +1,161 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import sys
import os
import taos
from util.log import *
from util.cases import *
from util.sql import *
from util.dnodes import *
import subprocess
from random import choice


class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug("start to execute %s" % __file__)
        tdSql.init(conn.cursor(), logSql)

    def getBuildPath(self):
        global selfPath
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        # set path parameters
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)

        binPath = buildPath + "/build/bin/"
        testPath = selfPath + "/../../../"
        walFilePath = testPath + "/sim/dnode1/data/mnode_bak/wal/"

        # create new dbs and insert data
        tdSql.execute("drop database if exists db2")
        os.system("%staosdemo -f tsdb/insertDataDb1.json -y " % binPath)
        tdSql.execute("drop database if exists db1")
        os.system("%staosdemo -f tsdb/insertDataDb2.json -y " % binPath)
        tdSql.execute("drop table if exists db2.stb0")
        os.system("%staosdemo -f tsdb/insertDataDb2Newstab.json -y " % binPath)

        tdSql.execute("use db2")
        tdSql.execute("drop table if exists stb1_0")
        tdSql.execute("drop table if exists stb1_1")
        tdSql.execute("insert into stb0_0 values(1614218412000,8637,78.861045,'R','bf3')(1614218422000,8637,98.861045,'R','bf3')")
        tdSql.execute("alter table db2.stb0 add column c4 int")
        tdSql.execute("alter table db2.stb0 drop column c2")
        tdSql.execute("alter table db2.stb0 add tag t3 int;")
        tdSql.execute("alter table db2.stb0 drop tag t1")
        tdSql.execute("create table if not exists stb2_0 (ts timestamp, c0 int, c1 float) ")
        tdSql.execute("insert into stb2_0 values(1614218412000,8637,78.861045)")
        tdSql.execute("alter table stb2_0 add column c2 binary(4)")
        tdSql.execute("alter table stb2_0 drop column c1")
        tdSql.execute("insert into stb2_0 values(1614218422000,8638,'R')")

        # create db utest and modify its super table schema
        dataType = ["tinyint", "smallint", "int", "bigint", "float", "double", "bool", "binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]

        tdSql.execute("drop database if exists utest")
        tdSql.execute("create database utest keep 3650")
        tdSql.execute("use utest")
        tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(200), tag1 int)''')

        # rowNum1 = 13
        # for i in range(rowNum1):
        #     columnName = "col" + str(i + 1)
        #     tdSql.execute("alter table test drop column %s ;" % columnName)

        rowNum2 = 988
        for i in range(rowNum2):
            tdSql.execute("alter table test add column col%d %s ;" % (i + 14, choice(dataType)))

        rowNum3 = 988
        for i in range(rowNum3):
            tdSql.execute("alter table test drop column col%d ;" % (i + 14))

        self.rowNum = 1
        self.rowNum2 = 100
        self.rowNum3 = 20
        self.ts = 1537146000000

        for j in range(self.rowNum2):
            tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j, j))
            for i in range(self.rowNum):
                tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                              % (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))

        for j in range(self.rowNum2):
            tdSql.execute("drop table if exists test%d" % (j + 1))

        # stop taosd and restart taosd
        tdDnodes.stop(1)
        sleep(10)
        tdDnodes.start(1)
        sleep(5)
        tdSql.execute("reset query cache")
        query_pid2 = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
        print(query_pid2)

        # verify that the data is correct
        tdSql.execute("use db2")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count (tbname) from stb1")
        tdSql.checkRows(0)
        tdSql.query("select count(*) from stb0_0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select count(*) from stb2_0")
        tdSql.checkData(0, 0, 2)

        tdSql.execute("use utest")
        tdSql.query("select count (tbname) from test")
        tdSql.checkData(0, 0, 1)

        # delete useless files
        testcaseFilename = os.path.split(__file__)[-1]
        os.system("rm -rf ./insert_res.txt")
        os.system("rm -rf tsdb/%s.sql" % testcaseFilename)

    def stop(self):
        tdSql.close()
        tdLog.success("%s successfully executed" % __file__)


tdCases.addWindows(__file__, TDTestCase())
tdCases.addLinux(__file__, TDTestCase())
@@ -0,0 +1,160 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice


class TwoClients:
    def initConnection(self):
        self.host = "chenhaoran02"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/etc/taos/"
        self.port = 6030
        self.rowNum = 10
        self.ts = 1537146000000

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"
        walFilePath = "/var/lib/taos/mnode_bak/wal/"

        # new taos client
        conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config)
        print(conn1)
        cur1 = conn1.cursor()
        tdSql.init(cur1, True)

        # create a new db, new super tables and child tables, and insert data
        tdSql.execute("drop database if exists db2")
        os.system("%staosdemo -f tsdb/insertDataDb1.json -y " % binPath)
        tdSql.execute("drop database if exists db1")
        os.system("%staosdemo -f tsdb/insertDataDb2.json -y " % binPath)
        tdSql.execute("drop table if exists db2.stb0")
        os.system("%staosdemo -f tsdb/insertDataDb2Newstab.json -y " % binPath)

        # create general tables and modify them
        tdSql.execute("use db2")
        tdSql.execute("drop table if exists stb1_0")
        tdSql.execute("drop table if exists stb1_1")
        tdSql.execute("insert into stb0_0 values(1614218412000,8637,78.861045,'R','bf3')(1614218422000,8637,98.861045,'R','bf3')")
        tdSql.execute("alter table db2.stb0 add column c4 int")
        tdSql.execute("alter table db2.stb0 drop column c2")
        tdSql.execute("alter table db2.stb0 add tag t3 int")
        tdSql.execute("alter table db2.stb0 drop tag t1")
        tdSql.execute("create table if not exists stb2_0 (ts timestamp, c0 int, c1 float) ")
        tdSql.execute("insert into stb2_0 values(1614218412000,8637,78.861045)")
        tdSql.execute("alter table stb2_0 add column c2 binary(4)")
        tdSql.execute("alter table stb2_0 drop column c1")
        tdSql.execute("insert into stb2_0 values(1614218422000,8638,'R')")

        # create db utest and modify its super table
        dataType = ["tinyint", "smallint", "int", "bigint", "float", "double", "bool", "binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
        tdSql.execute("drop database if exists utest")
        tdSql.execute("create database utest keep 3650")
        tdSql.execute("use utest")
        tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(200), tag1 int)''')
        rowNum2 = 988
        for i in range(rowNum2):
            tdSql.execute("alter table test add column col%d %s ;" % (i + 14, choice(dataType)))
        rowNum3 = 988
        for i in range(rowNum3):
            tdSql.execute("alter table test drop column col%d ;" % (i + 14))

        self.rowNum = 1
        self.rowNum2 = 100
        self.ts = 1537146000000
        for j in range(self.rowNum2):
            tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j, j))
            for i in range(self.rowNum):
                tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                              % (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
        # delete child tables
        for j in range(self.rowNum2):
            tdSql.execute("drop table if exists test%d" % (j + 1))

        # restart taosd
        os.system("ps -ef |grep taosd |grep -v 'grep' |awk '{print $2}'|xargs kill -2")
        sleep(20)
        os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
        sleep(4)
        tdSql.execute("reset query cache")
        query_pid2 = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
        print(query_pid2)

        # new taos connection to the server
        conn2 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config)
        print(conn2)
        cur2 = conn2.cursor()
        tdSql.init(cur2, True)

        # check that the data is correct
        tdSql.query("show databases")
        tdSql.execute("use db2")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count (tbname) from stb1")
        tdSql.checkRows(0)
        tdSql.query("select count(*) from stb0_0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select count(*) from stb2_0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select * from stb2_0")
        tdSql.checkData(1, 2, 'R')
        tdSql.execute("use utest")
        tdSql.query("select count (tbname) from test")
        tdSql.checkData(0, 0, 1)

        # delete useless files
        testcaseFilename = os.path.split(__file__)[-1]
        os.system("rm -rf ./insert_res.txt")
        os.system("rm -rf tsdb/%s.sql" % testcaseFilename)


clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()
@@ -0,0 +1,170 @@
###################################################################
# Copyright (c) 2016 by TAOS Technologies, Inc.
# All rights reserved.
#
# This file is proprietary and confidential to TAOS Technologies.
# No part of this file may be reproduced, stored, transmitted,
# disclosed or used in any form or by any means other than as
# expressly provided by the written permission from Jianhui Tao
#
###################################################################

# -*- coding: utf-8 -*-

import os
import sys
sys.path.insert(0, os.getcwd())
from util.log import *
from util.sql import *
from util.dnodes import *
import taos
import threading
import subprocess
from random import choice


class TwoClients:
    def initConnection(self):
        self.host = "chenhaoran02"
        self.user = "root"
        self.password = "taosdata"
        self.config = "/etc/taos/"
        self.port = 6030
        self.rowNum = 10
        self.ts = 1537146000000

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        buildPath = ""
        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def run(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        binPath = buildPath + "/build/bin/"
        walFilePath = "/var/lib/taos/mnode_bak/wal/"

        # new taos client
        conn1 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config)
        print(conn1)
        cur1 = conn1.cursor()
        tdSql.init(cur1, True)

        # create a new db, new super tables and child tables, and insert data
        tdSql.execute("drop database if exists db2")
        os.system("%staosdemo -f tsdb/insertDataDb1Replica2.json -y " % binPath)
        tdSql.execute("drop database if exists db1")
        os.system("%staosdemo -f tsdb/insertDataDb2Replica2.json -y " % binPath)
        tdSql.execute("drop table if exists db2.stb0")
        os.system("%staosdemo -f tsdb/insertDataDb2NewstabReplica2.json -y " % binPath)

        # create general tables and modify them
        tdSql.execute("use db2")
        tdSql.execute("drop table if exists stb1_0")
        tdSql.execute("drop table if exists stb1_1")
        tdSql.execute("insert into stb0_0 values(1614218412000,8637,78.861045,'R','bf3')(1614218422000,8637,98.861045,'R','bf3')")
        tdSql.execute("alter table db2.stb0 add column c4 int")
        tdSql.execute("alter table db2.stb0 drop column c2")
        tdSql.execute("alter table db2.stb0 add tag t3 int")
        tdSql.execute("alter table db2.stb0 drop tag t1")
        tdSql.execute("create table if not exists stb2_0 (ts timestamp, c0 int, c1 float) ")
        tdSql.execute("insert into stb2_0 values(1614218412000,8637,78.861045)")
        tdSql.execute("alter table stb2_0 add column c2 binary(4)")
        tdSql.execute("alter table stb2_0 drop column c1")
        tdSql.execute("insert into stb2_0 values(1614218422000,8638,'R')")

        # create db utest with replica 2 and modify its super table
        dataType = ["tinyint", "smallint", "int", "bigint", "float", "double", "bool", "binary(20)", "nchar(20)", "tinyint unsigned", "smallint unsigned", "int unsigned", "bigint unsigned"]
        tdSql.execute("drop database if exists utest")
        tdSql.execute("create database utest keep 3650 replica 2 ")
        tdSql.execute("use utest")
        tdSql.execute('''create table test(ts timestamp, col0 tinyint, col1 tinyint, col2 smallint, col3 int, col4 bigint, col5 float, col6 double,
            col7 bool, col8 binary(20), col9 nchar(20), col10 tinyint unsigned, col11 smallint unsigned, col12 int unsigned, col13 bigint unsigned) tags(loc nchar(200), tag1 int)''')
        rowNum2 = 988
        for i in range(rowNum2):
            tdSql.execute("alter table test add column col%d %s ;" % (i + 14, choice(dataType)))
        rowNum3 = 988
        for i in range(rowNum3):
            tdSql.execute("alter table test drop column col%d ;" % (i + 14))
        self.rowNum = 1
        self.rowNum2 = 100
        self.ts = 1537146000000
        for j in range(self.rowNum2):
            tdSql.execute("create table test%d using test tags('beijing%d', 10)" % (j, j))
            for i in range(self.rowNum):
                tdSql.execute("insert into test%d values(%d, %d, %d, %d, %d, %d, %f, %f, %d, 'taosdata%d', '涛思数据%d', %d, %d, %d, %d)"
                              % (j, self.ts + i*1000, i + 1, i + 1, i + 1, i + 1, i + 1, i + 0.1, i + 0.1, i % 2, i + 1, i + 1, i + 1, i + 1, i + 1, i + 1))
        # delete child tables
        for j in range(self.rowNum2):
            tdSql.execute("drop table if exists test%d" % (j + 1))

        # drop a dnode and restart taosd
        sleep(3)
        tdSql.execute(" drop dnode 'chenhaoran02:6030'; ")
        sleep(20)
        os.system("rm -rf /var/lib/taos/*")
        print("clear dnode chenhaoran02's data files")
        os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
        print("start taosd")
        sleep(10)
        tdSql.execute("reset query cache ;")
        tdSql.execute("create dnode chenhaoran02 ;")

        # os.system("ps -ef |grep taosd |grep -v 'grep' |awk '{print $2}'|xargs kill -2")
        # sleep(20)
        # os.system("nohup /usr/bin/taosd > /dev/null 2>&1 &")
        # sleep(4)
        # tdSql.execute("reset query cache")
        # query_pid2 = int(subprocess.getstatusoutput('ps aux|grep taosd |grep -v "grep"|awk \'{print $2}\'')[1])
        # print(query_pid2)

        # new taos connection to the server
        conn2 = taos.connect(host=self.host, user=self.user, password=self.password, config=self.config)
        print(conn2)
        cur2 = conn2.cursor()
        tdSql.init(cur2, True)

        # check that the data is correct
        tdSql.query("show databases")
        tdSql.execute("use db2")
        tdSql.query("select count (tbname) from stb0")
        tdSql.checkData(0, 0, 1)
        tdSql.query("select count (tbname) from stb1")
        tdSql.checkRows(0)
        tdSql.query("select count(*) from stb0_0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select count(*) from stb0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select count(*) from stb2_0")
        tdSql.checkData(0, 0, 2)
        tdSql.query("select * from stb2_0")
        tdSql.checkData(1, 2, 'R')

        tdSql.execute("use utest")
        tdSql.query("select count (tbname) from test")
        tdSql.checkData(0, 0, 1)

        # delete useless files
        testcaseFilename = os.path.split(__file__)[-1]
        os.system("rm -rf ./insert_res.txt")
        os.system("rm -rf tsdb/%s.sql" % testcaseFilename)


clients = TwoClients()
clients.initConnection()
# clients.getBuildPath()
clients.run()
@@ -20,9 +20,9 @@ from util.log import *


class TDSimClient:
    def __init__(self):
    def __init__(self, path):
        self.testCluster = False

        self.path = path
        self.cfgDict = {
            "numOfLogLines": "100000000",
            "numOfThreadsPerCore": "2.0",
@@ -41,10 +41,7 @@ class TDSimClient:
            "jnidebugFlag": "135",
            "qdebugFlag": "135",
            "telemetryReporting": "0",
        }
    def init(self, path):
        self.__init__()
        self.path = path
        }

    def getLogDir(self):
        self.logDir = "%s/sim/psim/log" % (self.path)
@@ -480,8 +477,7 @@ class TDDnodes:
        for i in range(len(self.dnodes)):
            self.dnodes[i].init(self.path)

        self.sim = TDSimClient()
        self.sim.init(self.path)
        self.sim = TDSimClient(self.path)

    def setTestCluster(self, value):
        self.testCluster = value
@@ -0,0 +1,128 @@
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <pthread.h>

#define MAXLINE 1024

typedef struct {
  pthread_t pid;
  int threadId;
  int rows;
  int tables;
} ThreadObj;

void post(char *ip, int port, char *page, char *msg) {
  int sockfd, n;
  char recvline[MAXLINE];
  struct sockaddr_in servaddr;
  char content[4096];
  char content_page[50];
  sprintf(content_page, "POST /%s HTTP/1.1\r\n", page);
  char content_host[50];
  sprintf(content_host, "HOST: %s:%d\r\n", ip, port);
  char content_type[] = "Content-Type: text/plain\r\n";
  char Auth[] = "Authorization: Basic cm9vdDp0YW9zZGF0YQ==\r\n";
  char content_len[50];
  sprintf(content_len, "Content-Length: %ld\r\n\r\n", strlen(msg));
  sprintf(content, "%s%s%s%s%s%s", content_page, content_host, content_type, Auth, content_len, msg);
  if ((sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
    printf("socket error\n");
  }
  bzero(&servaddr, sizeof(servaddr));
  servaddr.sin_family = AF_INET;
  servaddr.sin_port = htons(port);
  if (inet_pton(AF_INET, ip, &servaddr.sin_addr) <= 0) {
    printf("inet_pton error\n");
  }
  if (connect(sockfd, (struct sockaddr *)&servaddr, sizeof(servaddr)) < 0) {
    printf("connect error\n");
  }
  write(sockfd, content, strlen(content));
  printf("%s\n", content);
  while ((n = read(sockfd, recvline, MAXLINE)) > 0) {
    recvline[n] = 0;
    if (fputs(recvline, stdout) == EOF) {
      printf("fputs error\n");
    }
  }
  if (n < 0) {
    printf("read error\n");
  }
  close(sockfd);
}

void singleThread() {
  char ip[] = "127.0.0.1";
  int port = 6041;
  char page[] = "rest/sql";
  char page1[] = "rest/sql/db1";
  char page2[] = "rest/sql/db2";
  char nonexit[] = "rest/sql/xxdb";

  post(ip, port, page, "drop database if exists db1");
  post(ip, port, page, "create database if not exists db1");
  post(ip, port, page, "drop database if exists db2");
  post(ip, port, page, "create database if not exists db2");
  post(ip, port, page1, "create table t11 (ts timestamp, c1 int)");
  post(ip, port, page2, "create table t21 (ts timestamp, c1 int)");
  post(ip, port, page1, "insert into t11 values (now, 1)");
  post(ip, port, page2, "insert into t21 values (now, 2)");
  post(ip, port, nonexit, "create database if not exists db3");
}

void execute(void *params) {
  char ip[] = "127.0.0.1";
  int port = 6041;
  char page[] = "rest/sql";
  char *unique = calloc(1, 1024);
  char *sql = calloc(1, 1024);
  ThreadObj *pThread = (ThreadObj *)params;
  printf("Thread %d started\n", pThread->threadId);
  sprintf(unique, "rest/sql/db%d", pThread->threadId);
  sprintf(sql, "drop database if exists db%d", pThread->threadId);
  post(ip, port, page, sql);
  sprintf(sql, "create database if not exists db%d", pThread->threadId);
  post(ip, port, page, sql);
  for (int i = 0; i < pThread->tables; i++) {
    sprintf(sql, "create table t%d (ts timestamp, c1 int)", i);
    post(ip, port, unique, sql);
  }
  for (int i = 0; i < pThread->rows; i++) {
    sprintf(sql, "insert into t%d values (now + %ds, %d)", pThread->threadId, i, pThread->threadId);
    post(ip, port, unique, sql);
  }
  free(unique);
  free(sql);
  return;
}

void multiThread() {
  int numOfThreads = 100;
  int numOfTables = 100;
  int numOfRows = 1;
  ThreadObj *threads = calloc((size_t)numOfThreads, sizeof(ThreadObj));
  for (int i = 0; i < numOfThreads; i++) {
    ThreadObj *pthread = threads + i;
    pthread_attr_t thattr;
    pthread->threadId = i + 1;
    pthread->rows = numOfRows;
    pthread->tables = numOfTables;
    pthread_attr_init(&thattr);
    pthread_attr_setdetachstate(&thattr, PTHREAD_CREATE_JOINABLE);
    pthread_create(&pthread->pid, &thattr, (void *(*)(void *))execute, pthread);
  }
  for (int i = 0; i < numOfThreads; i++) {
    pthread_join(threads[i].pid, NULL);
  }
  free(threads);
}

int main() {
  singleThread();
  multiThread();
  exit(0);
}
@@ -0,0 +1,2 @@
all:
	gcc -g httpTest.c -o httpTest -lpthread