Merge branch 'develop' into test/testcase

commit e5a2a446b5
@@ -27,7 +27,6 @@ tests/hdfs/
 nmake/
 sln/
 hdfs/
-c/
 taoshebei/
 taosdalipu/
 Target/
@@ -227,6 +227,8 @@ pipeline {
 ./test-all.sh p4
 cd ${WKC}/tests
 ./test-all.sh full jdbc
+cd ${WKC}/tests
+./test-all.sh full unit
 date'''
 }
 }
@@ -32,7 +32,7 @@ ELSEIF (TD_WINDOWS)
 #INSTALL(TARGETS taos RUNTIME DESTINATION driver)
 #INSTALL(TARGETS shell RUNTIME DESTINATION .)
 IF (TD_MVN_INSTALLED)
-INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.20-dist.jar DESTINATION connector/jdbc)
+INSTALL(FILES ${LIBRARY_OUTPUT_PATH}/taos-jdbcdriver-2.0.21-dist.jar DESTINATION connector/jdbc)
 ENDIF ()
 ELSEIF (TD_DARWIN)
 SET(TD_MAKE_INSTALL_SH "${TD_COMMUNITY_DIR}/packaging/tools/make_install.sh")
@@ -31,6 +31,20 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
 * [创建超级表](/model#create-stable):为同一类型的数据采集点创建一个超级表
 * [创建表](/model#create-table):使用超级表做模板,为每一个具体的数据采集点单独建表
 
+## [TAOS SQL](/taos-sql)
+
+* [支持的数据类型](/taos-sql#data-type):支持时间戳、整型、浮点型、布尔型、字符型等多种数据类型
+* [数据库管理](/taos-sql#management):添加、删除、查看数据库
+* [表管理](/taos-sql#table):添加、删除、查看、修改表
+* [超级表管理](/taos-sql#super-table):添加、删除、查看、修改超级表
+* [标签管理](/taos-sql#tags):增加、删除、修改标签
+* [数据写入](/taos-sql#insert):支持单表单条、多条、多表多条写入,支持历史数据写入
+* [数据查询](/taos-sql#select):支持时间段、值过滤、排序、查询结果手动分页等
+* [SQL函数](/taos-sql#functions):支持各种聚合函数、选择函数、计算函数,如avg, min, diff等
+* [时间维度聚合](/taos-sql#aggregation):将表中数据按照时间段进行切割后聚合,降维处理
+* [边界限制](/taos-sql#limitation):库、表、SQL等边界限制条件
+* [错误码](/taos-sql/error-code):TDengine 2.0 错误码以及对应的十进制码
+
 ## [高效写入数据](/insert)
 
 * [SQL写入](/insert#sql):使用SQL insert命令向一张或多张表写入单条或多条记录
@@ -94,20 +108,6 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
 * [文件目录结构](/administrator#directories):TDengine数据文件、配置文件等所在目录
 * [参数限制与保留关键字](/administrator#keywords):TDengine的参数限制与保留关键字列表
 
-## [TAOS SQL](/taos-sql)
-
-* [支持的数据类型](/taos-sql#data-type):支持时间戳、整型、浮点型、布尔型、字符型等多种数据类型
-* [数据库管理](/taos-sql#management):添加、删除、查看数据库
-* [表管理](/taos-sql#table):添加、删除、查看、修改表
-* [超级表管理](/taos-sql#super-table):添加、删除、查看、修改超级表
-* [标签管理](/taos-sql#tags):增加、删除、修改标签
-* [数据写入](/taos-sql#insert):支持单表单条、多条、多表多条写入,支持历史数据写入
-* [数据查询](/taos-sql#select):支持时间段、值过滤、排序、查询结果手动分页等
-* [SQL函数](/taos-sql#functions):支持各种聚合函数、选择函数、计算函数,如avg, min, diff等
-* [时间维度聚合](/taos-sql#aggregation):将表中数据按照时间段进行切割后聚合,降维处理
-* [边界限制](/taos-sql#limitation):库、表、SQL等边界限制条件
-* [错误码](/taos-sql/error-code):TDengine 2.0 错误码以及对应的十进制码
-
 ## TDengine的技术设计
 
 * [系统模块](/architecture/taosd):taosd的功能和模块划分
@@ -120,6 +120,7 @@ TDengine是一个高效的存储、查询、分析时序大数据的平台,专
 * [TDengine性能对比测试工具](https://www.taosdata.com/blog/2020/01/18/1166.html)
 * [IDEA数据库管理工具可视化使用TDengine](https://www.taosdata.com/blog/2020/08/27/1767.html)
 * [基于eletron开发的跨平台TDengine图形化管理工具](https://github.com/skye0207/TDengineGUI)
+* [DataX,支持TDengine的离线数据采集/同步工具](https://github.com/alibaba/DataX)
 
 ## TDengine与其他数据库的对比测试
 
@@ -178,11 +178,11 @@ TDengine 分布式架构的逻辑结构图如下:
 
 **FQDN配置**:一个数据节点有一个或多个FQDN,可以在系统配置文件taos.cfg通过参数“fqdn"进行指定,如果没有指定,系统将自动获取计算机的hostname作为其FQDN。如果节点没有配置FQDN,可以直接将该节点的配置参数fqdn设置为它的IP地址。但不建议使用IP,因为IP地址可变,一旦变化,将让集群无法正常工作。一个数据节点的EP(End Point)由FQDN + Port组成。采用FQDN,需要保证DNS服务正常工作,或者在节点以及应用所在的节点配置好hosts文件。
 
-**端口配置:**一个数据节点对外的端口由TDengine的系统配置参数serverPort决定,对集群内部通讯的端口是serverPort+5。集群内数据节点之间的数据复制操作还占有一个TCP端口,是serverPort+10. 为支持多线程高效的处理UDP数据,每个对内和对外的UDP连接,都需要占用5个连续的端口。因此一个数据节点总的端口范围为serverPort到serverPort + 10,总共11个TCP/UDP端口。使用时,需要确保防火墙将这些端口打开。每个数据节点可以配置不同的serverPort。
+**端口配置:**一个数据节点对外的端口由TDengine的系统配置参数serverPort决定,对集群内部通讯的端口是serverPort+5。集群内数据节点之间的数据复制操作还占有一个TCP端口,是serverPort+10. 为支持多线程高效的处理UDP数据,每个对内和对外的UDP连接,都需要占用5个连续的端口。因此一个数据节点总的端口范围为serverPort到serverPort + 10,总共11个TCP/UDP端口。(另外还可能有 RESTful、Arbitrator 所使用的端口,那样的话就一共是 13 个。)使用时,需要确保防火墙将这些端口打开,以备使用。每个数据节点可以配置不同的serverPort。
 
 **集群对外连接:** TDengine集群可以容纳单个、多个甚至几千个数据节点。应用只需要向集群中任何一个数据节点发起连接即可,连接需要提供的网络参数是一数据节点的End Point(FQDN加配置的端口号)。通过命令行CLI启动应用taos时,可以通过选项-h来指定数据节点的FQDN, -P来指定其配置的端口号,如果端口不配置,将采用TDengine的系统配置参数serverPort。
 
-**集群内部通讯**: 各个数据节点之间通过TCP/UDP进行连接。一个数据节点启动时,将获取mnode所在的dnode的EP信息,然后与系统中的mnode建立起连接,交换信息。获取mnode的EP信息有三步,1:检查mnodeEpList文件是否存在,如果不存在或不能正常打开获得mnode EP信息,进入第二步;2:检查系统配置文件taos.cfg, 获取节点配置参数firstEp, secondEp,(这两个参数指定的节点可以是不带mnode的普通节点,这样的话,节点被连接时会尝试重定向到mnode节点)如果不存在或者taos.cfg里没有这两个配置参数,或无效,进入第三步;3:将自己的EP设为mnode EP, 并独立运行起来。获取mnode EP列表后,数据节点发起连接,如果连接成功,则成功加入进工作的集群,如果不成功,则尝试mnode EP列表中的下一个。如果都尝试了,但连接都仍然失败,则休眠几秒后,再进行尝试。
+**集群内部通讯**: 各个数据节点之间通过TCP/UDP进行连接。一个数据节点启动时,将获取mnode所在的dnode的EP信息,然后与系统中的mnode建立起连接,交换信息。获取mnode的EP信息有三步,1:检查mnodeEpSet文件是否存在,如果不存在或不能正常打开获得mnode EP信息,进入第二步;2:检查系统配置文件taos.cfg, 获取节点配置参数firstEp, secondEp,(这两个参数指定的节点可以是不带mnode的普通节点,这样的话,节点被连接时会尝试重定向到mnode节点)如果不存在或者taos.cfg里没有这两个配置参数,或无效,进入第三步;3:将自己的EP设为mnode EP, 并独立运行起来。获取mnode EP列表后,数据节点发起连接,如果连接成功,则成功加入进工作的集群,如果不成功,则尝试mnode EP列表中的下一个。如果都尝试了,但连接都仍然失败,则休眠几秒后,再进行尝试。
 
 **MNODE的选择:** TDengine逻辑上有管理节点,但没有单独的执行代码,服务器侧只有一套执行代码taosd。那么哪个数据节点会是管理节点呢?这是系统自动决定的,无需任何人工干预。原则如下:一个数据节点启动时,会检查自己的End Point, 并与获取的mnode EP List进行比对,如果在其中,该数据节点认为自己应该启动mnode模块,成为mnode。如果自己的EP不在mnode EP List里,则不启动mnode模块。在系统的运行过程中,由于负载均衡、宕机等原因,mnode有可能迁移至新的dnode,但一切都是透明的,无需人工干预,配置参数的修改,是mnode自己根据资源做出的决定。
 
@@ -209,7 +209,7 @@ C/C++的API类似于MySQL的C API。应用程序使用时,需要包含TDengine
 
 - `TAOS_RES* taos_query(TAOS *taos, const char *sql)`
 
-该API用来执行SQL语句,可以是DQL、DML或DDL语句。 其中的`taos`参数是通过`taos_connect`获得的指针。返回值 NULL 表示失败。
+该API用来执行SQL语句,可以是DQL、DML或DDL语句。 其中的`taos`参数是通过`taos_connect`获得的指针。不能通过返回值是否是 NULL 来判断执行结果是否失败,而是需要用`taos_errno`函数解析结果集中的错误代码来进行判断。
 
 - `int taos_result_precision(TAOS_RES *res)`
 
@@ -591,7 +591,8 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql
 ```json
 {
 "status": "succ",
-"head": ["Time Stamp","current", …],
+"head": ["ts","current", …],
+"column_meta": [["ts",9,8],["current",6,4], …],
 "data": [
 ["2018-10-03 14:38:05.000", 10.3, …],
 ["2018-10-03 14:38:15.000", 12.6, …]
@@ -602,10 +603,23 @@ curl -u username:password -d '<SQL>' <ip>:<PORT>/rest/sql
 
 说明:
 
-- status: 告知操作结果是成功还是失败
-- head: 表的定义,如果不返回结果集,仅有一列“affected_rows”
-- data: 具体返回的数据,一排一排的呈现,如果不返回结果集,仅[[affected_rows]]
-- rows: 表明总共多少行数据
+- status: 告知操作结果是成功还是失败。
+- head: 表的定义,如果不返回结果集,则仅有一列“affected_rows”。(从 2.0.17 版本开始,建议不要依赖 head 返回值来判断数据列类型,而推荐使用 column_meta。在未来版本中,有可能会从返回值中去掉 head 这一项。)
+- column_meta: 从 2.0.17 版本开始,返回值中增加这一项来说明 data 里每一列的数据类型。具体每个列会用三个值来说明,分别为:列名、列类型、类型长度。例如`["current",6,4]`表示列名为“current”;列类型为 6,也即 float 类型;类型长度为 4,也即对应 4 个字节表示的 float。如果列类型为 binary 或 nchar,则类型长度表示该列最多可以保存的内容长度,而不是本次返回值中的具体数据长度。当列类型是 nchar 的时候,其类型长度表示可以保存的 unicode 字符数量,而不是 bytes。
+- data: 具体返回的数据,一行一行的呈现,如果不返回结果集,那么就仅有[[affected_rows]]。data 中每一行的数据列顺序,与 column_meta 中描述数据列的顺序完全一致。
+- rows: 表明总共多少行数据。
 
+column_meta 中的列类型说明:
+* 1:BOOL
+* 2:TINYINT
+* 3:SMALLINT
+* 4:INT
+* 5:BIGINT
+* 6:FLOAT
+* 7:DOUBLE
+* 8:BINARY
+* 9:TIMESTAMP
+* 10:NCHAR
+
 ### 自定义授权码
 
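The column_meta entries documented above are straightforward to consume from client code. Below is a minimal Java sketch using the fastjson classes that the JDBC changes later in this commit already rely on; the JSON literal is copied from the /rest/sql example, and the class name is illustrative only.

```java
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;

public class ColumnMetaDemo {
    public static void main(String[] args) {
        // Response shape copied from the /rest/sql example above (available from 2.0.17 on).
        String body = "{\"status\":\"succ\"," +
                "\"head\":[\"ts\",\"current\"]," +
                "\"column_meta\":[[\"ts\",9,8],[\"current\",6,4]]," +
                "\"data\":[[\"2018-10-03 14:38:05.000\",10.3]]," +
                "\"rows\":1}";
        JSONObject resultJson = JSON.parseObject(body);
        JSONArray columnMeta = resultJson.getJSONArray("column_meta");
        for (int i = 0; i < columnMeta.size(); i++) {
            JSONArray col = columnMeta.getJSONArray(i);
            // Each entry is [name, type code, length]; per the list above, 9 = TIMESTAMP, 6 = FLOAT.
            System.out.println(col.getString(0) + " type=" + col.getInteger(1) + " len=" + col.getInteger(2));
        }
    }
}
```

This is the same parsing pattern that the RestfulResultSet changes in this commit use to build JDBC column metadata.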
@@ -651,7 +665,8 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
 ```json
 {
 "status": "succ",
-"head": ["Time Stamp","current","voltage","phase"],
+"head": ["ts","current","voltage","phase"],
+"column_meta": [["ts",9,8],["current",6,4],["voltage",4,4],["phase",6,4]],
 "data": [
 ["2018-10-03 14:38:05.000",10.3,219,0.31],
 ["2018-10-03 14:38:15.000",12.6,218,0.33]
@@ -671,8 +686,9 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'create database demo' 19
 {
 "status": "succ",
 "head": ["affected_rows"],
+"column_meta": [["affected_rows",4,4]],
 "data": [[1]],
-"rows": 1,
+"rows": 1
 }
 ```
 
@@ -691,7 +707,8 @@ curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' -d 'select * from demo.d1001
 ```json
 {
 "status": "succ",
-"head": ["column1","column2","column3"],
+"head": ["ts","current","voltage","phase"],
+"column_meta": [["ts",9,8],["current",6,4],["voltage",4,4],["phase",6,4]],
 "data": [
 [1538548685000,10.3,219,0.31],
 [1538548695000,12.6,218,0.33]
@@ -712,7 +729,8 @@ HTTP请求URL采用`sqlutc`时,返回结果集的时间戳将采用UTC时间
 ```json
 {
 "status": "succ",
-"head": ["column1","column2","column3"],
+"head": ["ts","current","voltage","phase"],
+"column_meta": [["ts",9,8],["current",6,4],["voltage",4,4],["phase",6,4]],
 "data": [
 ["2018-10-03T14:38:05.000+0800",10.3,219,0.31],
 ["2018-10-03T14:38:15.000+0800",12.6,218,0.33]
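The same REST endpoints can be driven without curl. The following is a hedged Java sketch that posts a query to /rest/sqlutc with HTTP Basic auth, mirroring the curl examples above; the host, the port 6041 and the demo.d1001 table are assumptions taken from the surrounding documentation and may need adjusting for a real deployment.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Scanner;

public class RestSqlUtcDemo {
    public static void main(String[] args) throws Exception {
        // Host and port are assumptions; adjust to wherever the RESTful service is listening.
        URL url = new URL("http://localhost:6041/rest/sqlutc");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Same credentials as the curl examples above (root:taosdata).
        String token = Base64.getEncoder().encodeToString("root:taosdata".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + token);
        try (OutputStream out = conn.getOutputStream()) {
            out.write("select * from demo.d1001".getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner sc = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            // Because /rest/sqlutc was used, timestamps in "data" come back as ISO-8601 with an offset.
            while (sc.hasNextLine()) System.out.println(sc.nextLine());
        }
    }
}
```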
@@ -155,11 +155,3 @@ TDengine客户端暂不支持如下函数:
 - dbExistsTable(conn, "test"):是否存在表test
 - dbListTables(conn):显示连接中的所有表
 
-
-## <a class="anchor" id="datax"></a>DataX
-
-[DataX](https://github.com/alibaba/DataX) 是阿里巴巴集团开源的一款通用离线数据采集/同步工具,能够简单、高效地接入 TDengine 进行数据写入和读取。
-
-* 数据读取集成的方法请参见 [TSDBReader 插件文档](https://github.com/alibaba/DataX/blob/master/tsdbreader/doc/tsdbreader.md)
-* 数据写入集成的方法请参见 [TSDBWriter 插件文档](https://github.com/alibaba/DataX/blob/master/tsdbwriter/doc/tsdbhttpwriter.md)
-
@@ -13,7 +13,7 @@ TDengine的集群管理极其简单,除添加和删除节点需要人工干预
 **第零步**:规划集群所有物理节点的FQDN,将规划好的FQDN分别添加到每个物理节点的/etc/hostname;修改每个物理节点的/etc/hosts,将所有集群物理节点的IP与FQDN的对应添加好。【如部署了DNS,请联系网络管理员在DNS上做好相关配置】
 
 **第一步**:如果搭建集群的物理节点中,存有之前的测试数据、装过1.X的版本,或者装过其他版本的TDengine,请先将其删除,并清空所有数据,具体步骤请参考博客[《TDengine多种安装包的安装和卸载》](https://www.taosdata.com/blog/2019/08/09/566.html )
-**注意1:**因为FQDN的信息会写进文件,如果之前没有配置或者更改FQDN,且启动了TDengine。请一定在确保数据无用或者备份的前提下,清理一下之前的数据(rm -rf /var/lib/taos/);
+**注意1:**因为FQDN的信息会写进文件,如果之前没有配置或者更改FQDN,且启动了TDengine。请一定在确保数据无用或者备份的前提下,清理一下之前的数据(`rm -rf /var/lib/taos/*`);
 **注意2:**客户端也需要配置,确保它可以正确解析每个节点的FQDN配置,不管是通过DNS服务,还是 Host 文件。
 
 **第二步**:建议关闭所有物理节点的防火墙,至少保证端口:6030 - 6042的TCP和UDP端口都是开放的。**强烈建议**先关闭防火墙,集群搭建完毕之后,再来配置端口;
@@ -627,19 +627,21 @@ Query OK, 1 row(s) in set (0.001091s)
 
 ### 支持的条件过滤操作
 
 | Operation | Note | Applicable Data Types |
-| --------- | ----------------------------- | ------------------------------------- |
+| ----------- | ----------------------------- | ------------------------------------- |
 | > | larger than | **`timestamp`** and all numeric types |
 | < | smaller than | **`timestamp`** and all numeric types |
 | >= | larger than or equal to | **`timestamp`** and all numeric types |
 | <= | smaller than or equal to | **`timestamp`** and all numeric types |
 | = | equal to | all types |
 | <> | not equal to | all types |
+| between and | within a certain range | **`timestamp`** and all numeric types |
 | % | match with any char sequences | **`binary`** **`nchar`** |
 | _ | match with a single char | **`binary`** **`nchar`** |
 
 1. 同时进行多个字段的范围过滤,需要使用关键词 AND 来连接不同的查询条件,暂不支持 OR 连接的不同列之间的查询过滤条件。
-2. 针对单一字段的过滤,如果是时间过滤条件,则一条语句中只支持设定一个;但针对其他的(普通)列或标签列,则可以使用``` OR``` 关键字进行组合条件的查询过滤。例如:((value > 20 and value < 30) OR (value < 12)) 。
+2. 针对单一字段的过滤,如果是时间过滤条件,则一条语句中只支持设定一个;但针对其他的(普通)列或标签列,则可以使用 `OR` 关键字进行组合条件的查询过滤。例如:((value > 20 AND value < 30) OR (value < 12)) 。
+3. 从 2.0.17 版本开始,条件过滤开始支持 BETWEEN AND 语法,例如 `WHERE col2 BETWEEN 1.5 AND 3.25` 表示查询条件为“1.5 ≤ col2 ≤ 3.25”。
 
 ### SQL 示例
 
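Since BETWEEN AND is now accepted in WHERE clauses (item 3 above), it can also be issued through the JDBC driver that this commit bumps to 2.0.21. A hedged sketch follows, assuming the demo database and the d1001 table used by the REST examples elsewhere in this commit; host and credentials are illustrative.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class BetweenFilterDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("com.taosdata.jdbc.TSDBDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:TAOS://localhost:6030/demo", "root", "taosdata");
             Statement stmt = conn.createStatement();
             // BETWEEN AND filtering is available from 2.0.17 on, per the note above.
             ResultSet rs = stmt.executeQuery("SELECT ts, current FROM d1001 WHERE current BETWEEN 10.0 AND 13.0")) {
            while (rs.next()) {
                System.out.println(rs.getTimestamp("ts") + " -> " + rs.getFloat("current"));
            }
        }
    }
}
```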
@@ -13,9 +13,8 @@ WORKDIR /root/${dirName}/
 RUN /bin/bash install.sh -e no
 
 ENV LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/lib"
-ENV LANG=en_US.UTF-8
-ENV LANGUAGE=en_US:en
-ENV LC_ALL=en_US.UTF-8
+ENV LANG=C.UTF-8
+ENV LC_ALL=C.UTF-8
 EXPOSE 6030 6031 6032 6033 6034 6035 6036 6037 6038 6039 6040 6041 6042
 CMD ["taosd"]
 VOLUME [ "/var/lib/taos", "/var/log/taos","/etc/taos/" ]
@@ -270,7 +270,7 @@ void tscPrintSelectClause(SSqlObj* pSql, int32_t subClauseIndex);
 bool hasMoreVnodesToTry(SSqlObj *pSql);
 bool hasMoreClauseToTry(SSqlObj* pSql);
 
-void tscFreeQueryInfo(SSqlCmd* pCmd);
+void tscFreeQueryInfo(SSqlCmd* pCmd, bool removeMeta);
 
 void tscTryQueryNextVnode(SSqlObj *pSql, __async_cb_func_t fp);
 void tscAsyncQuerySingleRowForNextVnode(void *param, TAOS_RES *tres, int numOfRows);
@@ -442,6 +442,8 @@ void tscCloseTscObj(void *pObj);
 TAOS *taos_connect_a(char *ip, char *user, char *pass, char *db, uint16_t port, void (*fp)(void *, TAOS_RES *, int),
 void *param, TAOS **taos);
 TAOS_RES* taos_query_h(TAOS* taos, const char *sqlstr, int64_t* res);
+TAOS_RES * taos_query_ra(TAOS *taos, const char *sqlstr, __async_cb_func_t fp, void *param);
+
 void waitForQueryRsp(void *param, TAOS_RES *tres, int code);
 
 void doAsyncQuery(STscObj *pObj, SSqlObj *pSql, __async_cb_func_t fp, void *param, const char *sqlstr, size_t sqlLen);
@@ -74,12 +74,16 @@ void doAsyncQuery(STscObj* pObj, SSqlObj* pSql, __async_cb_func_t fp, void* para
 
 // TODO return the correct error code to client in tscQueueAsyncError
 void taos_query_a(TAOS *taos, const char *sqlstr, __async_cb_func_t fp, void *param) {
+taos_query_ra(taos, sqlstr, fp, param);
+}
+
+TAOS_RES * taos_query_ra(TAOS *taos, const char *sqlstr, __async_cb_func_t fp, void *param) {
 STscObj *pObj = (STscObj *)taos;
 if (pObj == NULL || pObj->signature != pObj) {
 tscError("bug!!! pObj:%p", pObj);
 terrno = TSDB_CODE_TSC_DISCONNECTED;
 tscQueueAsyncError(fp, param, TSDB_CODE_TSC_DISCONNECTED);
-return;
+return NULL;
 }
 
 int32_t sqlLen = (int32_t)strlen(sqlstr);
@@ -87,7 +91,7 @@ void taos_query_a(TAOS *taos, const char *sqlstr, __async_cb_func_t fp, void *pa
 tscError("sql string exceeds max length:%d", tsMaxSQLStringLen);
 terrno = TSDB_CODE_TSC_EXCEED_SQL_LIMIT;
 tscQueueAsyncError(fp, param, terrno);
-return;
+return NULL;
 }
 
 nPrintTsc("%s", sqlstr);
@@ -96,12 +100,15 @@ void taos_query_a(TAOS *taos, const char *sqlstr, __async_cb_func_t fp, void *pa
 if (pSql == NULL) {
 tscError("failed to malloc sqlObj");
 tscQueueAsyncError(fp, param, TSDB_CODE_TSC_OUT_OF_MEMORY);
-return;
+return NULL;
 }
 
 doAsyncQuery(pObj, pSql, fp, param, sqlstr, sqlLen);
 
+return pSql;
 }
 
 
 static void tscAsyncFetchRowsProxy(void *param, TAOS_RES *tres, int numOfRows) {
 if (tres == NULL) {
 return;
@@ -2802,7 +2802,7 @@ static void multiVnodeInsertFinalize(void* param, TAOS_RES* tres, int numOfRows)
 numOfFailed += 1;
 
 // clean up tableMeta in cache
-tscFreeQueryInfo(&pSql->cmd);
+tscFreeQueryInfo(&pSql->cmd, false);
 SQueryInfo* pQueryInfo = tscGetQueryInfoDetailSafely(&pSql->cmd, 0);
 STableMetaInfo* pMasterTableMetaInfo = tscGetTableMetaInfoFromCmd(&pParentObj->cmd, pSql->cmd.clauseIndex, 0);
 tscAddTableMetaInfo(pQueryInfo, &pMasterTableMetaInfo->name, NULL, NULL, NULL, NULL);
@@ -30,7 +30,7 @@
 #include "ttokendef.h"
 
 static void freeQueryInfoImpl(SQueryInfo* pQueryInfo);
-static void clearAllTableMetaInfo(SQueryInfo* pQueryInfo);
+static void clearAllTableMetaInfo(SQueryInfo* pQueryInfo, bool removeMeta);
 
 static void tscStrToLower(char *str, int32_t n) {
 if (str == NULL || n <= 0) { return;}
@@ -367,7 +367,7 @@ static void tscDestroyResPointerInfo(SSqlRes* pRes) {
 pRes->data = NULL; // pRes->data points to the buffer of pRsp, no need to free
 }
 
-void tscFreeQueryInfo(SSqlCmd* pCmd) {
+void tscFreeQueryInfo(SSqlCmd* pCmd, bool removeMeta) {
 if (pCmd == NULL || pCmd->numOfClause == 0) {
 return;
 }
@@ -376,7 +376,7 @@ void tscFreeQueryInfo(SSqlCmd* pCmd) {
 SQueryInfo* pQueryInfo = tscGetQueryInfoDetail(pCmd, i);
 
 freeQueryInfoImpl(pQueryInfo);
-clearAllTableMetaInfo(pQueryInfo);
+clearAllTableMetaInfo(pQueryInfo, removeMeta);
 tfree(pQueryInfo);
 }
 
@@ -404,7 +404,7 @@ void tscResetSqlCmd(SSqlCmd* pCmd, bool removeMeta) {
 
 pCmd->pTableBlockHashList = tscDestroyBlockHashTable(pCmd->pTableBlockHashList, removeMeta);
 pCmd->pDataBlocks = tscDestroyBlockArrayList(pCmd->pDataBlocks);
-tscFreeQueryInfo(pCmd);
+tscFreeQueryInfo(pCmd, removeMeta);
 }
 
 void tscFreeSqlResult(SSqlObj* pSql) {
@@ -1847,10 +1847,17 @@ SArray* tscVgroupTableInfoDup(SArray* pVgroupTables) {
 return pa;
 }
 
-void clearAllTableMetaInfo(SQueryInfo* pQueryInfo) {
+void clearAllTableMetaInfo(SQueryInfo* pQueryInfo, bool removeMeta) {
 for(int32_t i = 0; i < pQueryInfo->numOfTables; ++i) {
 STableMetaInfo* pTableMetaInfo = tscGetMetaInfo(pQueryInfo, i);
 
+if (removeMeta) {
+char name[TSDB_TABLE_FNAME_LEN] = {0};
+tNameExtractFullName(&pTableMetaInfo->name, name);
+
+taosHashRemove(tscTableMetaInfo, name, strnlen(name, TSDB_TABLE_FNAME_LEN));
+}
+
 tscFreeVgroupTableInfo(pTableMetaInfo->pVgroupTables);
 tscClearTableMetaInfo(pTableMetaInfo);
 free(pTableMetaInfo);
@@ -2714,7 +2721,11 @@ STableMeta* createSuperTableMeta(STableMetaMsg* pChild) {
 uint32_t tscGetTableMetaSize(STableMeta* pTableMeta) {
 assert(pTableMeta != NULL);
 
-int32_t totalCols = pTableMeta->tableInfo.numOfColumns + pTableMeta->tableInfo.numOfTags;
+int32_t totalCols = 0;
+if (pTableMeta->tableInfo.numOfColumns >= 0 && pTableMeta->tableInfo.numOfTags >= 0) {
+totalCols = pTableMeta->tableInfo.numOfColumns + pTableMeta->tableInfo.numOfTags;
+}
+
 return sizeof(STableMeta) + totalCols * sizeof(SSchema);
 }
 
@@ -430,10 +430,10 @@ static void doInitGlobalConfig(void) {
 // port
 cfg.option = "serverPort";
 cfg.ptr = &tsServerPort;
-cfg.valType = TAOS_CFG_VTYPE_INT16;
+cfg.valType = TAOS_CFG_VTYPE_UINT16;
 cfg.cfgType = TSDB_CFG_CTYPE_B_CONFIG | TSDB_CFG_CTYPE_B_SHOW | TSDB_CFG_CTYPE_B_CLIENT;
 cfg.minValue = 1;
-cfg.maxValue = 65535;
+cfg.maxValue = 65056;
 cfg.ptrLength = 0;
 cfg.unitType = TAOS_CFG_UTYPE_NONE;
 taosInitConfigOption(cfg);
@@ -8,7 +8,7 @@ IF (TD_MVN_INSTALLED)
 ADD_CUSTOM_COMMAND(OUTPUT ${JDBC_CMD_NAME}
 POST_BUILD
 COMMAND mvn -Dmaven.test.skip=true install -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
-COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.20-dist.jar ${LIBRARY_OUTPUT_PATH}
+COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/target/taos-jdbcdriver-2.0.21-dist.jar ${LIBRARY_OUTPUT_PATH}
 COMMAND mvn -Dmaven.test.skip=true clean -f ${CMAKE_CURRENT_SOURCE_DIR}/pom.xml
 COMMENT "build jdbc driver")
 ADD_CUSTOM_TARGET(${JDBC_TARGET_NAME} ALL WORKING_DIRECTORY ${EXECUTABLE_OUTPUT_PATH} DEPENDS ${JDBC_CMD_NAME})
@@ -5,7 +5,7 @@
 
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>2.0.20</version>
+<version>2.0.21</version>
 <packaging>jar</packaging>
 
 <name>JDBCDriver</name>
@@ -3,7 +3,7 @@
 <modelVersion>4.0.0</modelVersion>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>2.0.20</version>
+<version>2.0.21</version>
 <packaging>jar</packaging>
 <name>JDBCDriver</name>
 <url>https://github.com/taosdata/TDengine/tree/master/src/connector/jdbc</url>
@@ -308,7 +308,7 @@ public class DatabaseMetaDataResultSet implements ResultSet {
 return colMetaData.getColIndex() + 1;
 }
 }
-throw new SQLException(TSDBConstants.INVALID_VARIABLES);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_VARIABLE);
 }
 
 @Override
@@ -14,16 +14,13 @@
 *****************************************************************************/
 package com.taosdata.jdbc;
 
+import java.sql.SQLException;
+import java.sql.Types;
 import java.util.HashMap;
 import java.util.Map;
 
 public abstract class TSDBConstants {
 
-public static final String STATEMENT_CLOSED = "statement is closed";
-public static final String UNSUPPORTED_METHOD_EXCEPTION_MSG = "this operation is NOT supported currently!";
-public static final String INVALID_VARIABLES = "invalid variables";
-public static final String RESULT_SET_IS_CLOSED = "resultSet is closed";
-
 public static final String DEFAULT_PORT = "6200";
 public static Map<Integer, String> DATATYPE_MAP = null;
 
@@ -77,8 +74,65 @@ public abstract class TSDBConstants {
 return WrapErrMsg("unkown error!");
 }
 
+public static int taosType2JdbcType(int taosType) throws SQLException {
+switch (taosType) {
+case TSDBConstants.TSDB_DATA_TYPE_NULL:
+return Types.NULL;
+case TSDBConstants.TSDB_DATA_TYPE_BOOL:
+return Types.BOOLEAN;
+case TSDBConstants.TSDB_DATA_TYPE_TINYINT:
+return Types.TINYINT;
+case TSDBConstants.TSDB_DATA_TYPE_SMALLINT:
+return Types.SMALLINT;
+case TSDBConstants.TSDB_DATA_TYPE_INT:
+return Types.INTEGER;
+case TSDBConstants.TSDB_DATA_TYPE_BIGINT:
+return Types.BIGINT;
+case TSDBConstants.TSDB_DATA_TYPE_FLOAT:
+return Types.FLOAT;
+case TSDBConstants.TSDB_DATA_TYPE_DOUBLE:
+return Types.DOUBLE;
+case TSDBConstants.TSDB_DATA_TYPE_BINARY:
+return Types.BINARY;
+case TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP:
+return Types.TIMESTAMP;
+case TSDBConstants.TSDB_DATA_TYPE_NCHAR:
+return Types.NCHAR;
+}
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE);
+}
+
+public static int jdbcType2TaosType(int jdbcType) throws SQLException {
+switch (jdbcType){
+case Types.NULL:
+return TSDBConstants.TSDB_DATA_TYPE_NULL;
+case Types.BOOLEAN:
+return TSDBConstants.TSDB_DATA_TYPE_BOOL;
+case Types.TINYINT:
+return TSDBConstants.TSDB_DATA_TYPE_TINYINT;
+case Types.SMALLINT:
+return TSDBConstants.TSDB_DATA_TYPE_SMALLINT;
+case Types.INTEGER:
+return TSDBConstants.TSDB_DATA_TYPE_INT;
+case Types.BIGINT:
+return TSDBConstants.TSDB_DATA_TYPE_BIGINT;
+case Types.FLOAT:
+return TSDBConstants.TSDB_DATA_TYPE_FLOAT;
+case Types.DOUBLE:
+return TSDBConstants.TSDB_DATA_TYPE_DOUBLE;
+case Types.BINARY:
+return TSDBConstants.TSDB_DATA_TYPE_BINARY;
+case Types.TIMESTAMP:
+return TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP;
+case Types.NCHAR:
+return TSDBConstants.TSDB_DATA_TYPE_NCHAR;
+}
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE);
+}
+
 static {
 DATATYPE_MAP = new HashMap<>();
+DATATYPE_MAP.put(0, "NULL");
 DATATYPE_MAP.put(1, "BOOL");
 DATATYPE_MAP.put(2, "TINYINT");
 DATATYPE_MAP.put(3, "SMALLINT");
@@ -90,4 +144,8 @@ public abstract class TSDBConstants {
 DATATYPE_MAP.put(9, "TIMESTAMP");
 DATATYPE_MAP.put(10, "NCHAR");
 }
+
+public static String jdbcType2TaosTypeName(int type) throws SQLException {
+return DATATYPE_MAP.get(jdbcType2TaosType(type));
+}
 }
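The mapping helpers added above give a single place to translate between the numeric TDengine type codes (the same codes that appear in column_meta) and java.sql.Types. A small usage sketch follows; it assumes the TSDB_DATA_TYPE_* constants carry the numeric values listed in DATATYPE_MAP (6 = FLOAT, 10 = NCHAR).

```java
import java.sql.SQLException;
import java.sql.Types;

import com.taosdata.jdbc.TSDBConstants;

public class TypeMappingDemo {
    public static void main(String[] args) throws SQLException {
        // Column type code 6 comes straight from a column_meta entry (6 = FLOAT).
        int jdbcType = TSDBConstants.taosType2JdbcType(6);
        System.out.println(jdbcType == Types.FLOAT);                         // expected: true
        // Round-trip back to the TDengine code and to its display name.
        System.out.println(TSDBConstants.jdbcType2TaosType(Types.NCHAR));    // expected: 10
        System.out.println(TSDBConstants.jdbcType2TaosTypeName(Types.NCHAR)); // expected: "NCHAR"
    }
}
```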
@@ -18,6 +18,7 @@ public class TSDBErrorNumbers {
 public static final int ERROR_INVALID_FOR_EXECUTE = 0x230c; //not a valid sql for execute: (SQL)
 public static final int ERROR_PARAMETER_INDEX_OUT_RANGE = 0x230d; // parameter index out of range
 public static final int ERROR_SQLCLIENT_EXCEPTION_ON_CONNECTION_CLOSED = 0x230e; // connection already closed
+public static final int ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE = 0x230f; //unknown sql type in tdengine
 
 public static final int ERROR_UNKNOWN = 0x2350; //unknown error
 
@@ -49,6 +50,7 @@ public class TSDBErrorNumbers {
 errorNumbers.add(ERROR_INVALID_FOR_EXECUTE);
 errorNumbers.add(ERROR_PARAMETER_INDEX_OUT_RANGE);
 errorNumbers.add(ERROR_SQLCLIENT_EXCEPTION_ON_CONNECTION_CLOSED);
+errorNumbers.add(ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE);
 
 /*****************************************************/
 errorNumbers.add(ERROR_SUBSCRIBE_FAILED);
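The new ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE number can be checked by callers that hit the type-mapping paths added in this commit. A hedged sketch, assuming TSDBError.createSQLException propagates the number through SQLException#getErrorCode() (that factory is referenced but not shown in this diff) and that TSDBErrorNumbers lives in the com.taosdata.jdbc package like the surrounding classes:

```java
import java.sql.SQLException;
import java.sql.Types;

import com.taosdata.jdbc.TSDBConstants;
import com.taosdata.jdbc.TSDBErrorNumbers;

public class ErrorCodeDemo {
    public static void main(String[] args) {
        try {
            // VARCHAR is not one of the JDBC types handled by jdbcType2TaosType, so it is rejected.
            TSDBConstants.jdbcType2TaosType(Types.VARCHAR);
        } catch (SQLException e) {
            // Assumption: the vendor error number is exposed via getErrorCode().
            System.out.println(e.getErrorCode() == TSDBErrorNumbers.ERROR_UNKNOWN_SQL_TYPE_IN_TDENGINE);
        }
    }
}
```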
@@ -20,7 +20,7 @@ import java.sql.Timestamp;
 import java.sql.Types;
 import java.util.List;
 
-public class TSDBResultSetMetaData implements ResultSetMetaData {
+public class TSDBResultSetMetaData extends WrapperImpl implements ResultSetMetaData {
 
 List<ColumnMetaData> colMetaDataList = null;
 
@@ -28,14 +28,6 @@ public class TSDBResultSetMetaData implements ResultSetMetaData {
 this.colMetaDataList = metaDataList;
 }
 
-public <T> T unwrap(Class<T> iface) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
-}
-
-public boolean isWrapperFor(Class<?> iface) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
-}
-
 public int getColumnCount() throws SQLException {
 return colMetaDataList.size();
 }
@@ -94,7 +86,7 @@ public class TSDBResultSetMetaData implements ResultSetMetaData {
 }
 
 public String getSchemaName(int column) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
 }
 
 public int getPrecision(int column) throws SQLException {
@@ -125,18 +117,18 @@ public class TSDBResultSetMetaData implements ResultSetMetaData {
 }
 
 public String getTableName(int column) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
 }
 
 public String getCatalogName(int column) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
 }
 
 public int getColumnType(int column) throws SQLException {
 ColumnMetaData meta = this.colMetaDataList.get(column - 1);
 switch (meta.getColType()) {
 case TSDBConstants.TSDB_DATA_TYPE_BOOL:
-return java.sql.Types.BIT;
+return Types.BOOLEAN;
 case TSDBConstants.TSDB_DATA_TYPE_TINYINT:
 return java.sql.Types.TINYINT;
 case TSDBConstants.TSDB_DATA_TYPE_SMALLINT:
@@ -150,13 +142,13 @@ public class TSDBResultSetMetaData implements ResultSetMetaData {
 case TSDBConstants.TSDB_DATA_TYPE_DOUBLE:
 return java.sql.Types.DOUBLE;
 case TSDBConstants.TSDB_DATA_TYPE_BINARY:
-return java.sql.Types.CHAR;
+return Types.BINARY;
 case TSDBConstants.TSDB_DATA_TYPE_TIMESTAMP:
-return java.sql.Types.BIGINT;
+return java.sql.Types.TIMESTAMP;
 case TSDBConstants.TSDB_DATA_TYPE_NCHAR:
-return java.sql.Types.CHAR;
+return Types.NCHAR;
 }
-throw new SQLException(TSDBConstants.INVALID_VARIABLES);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_INVALID_VARIABLE);
 }
 
 public String getColumnTypeName(int column) throws SQLException {
@@ -173,7 +165,7 @@ public class TSDBResultSetMetaData implements ResultSetMetaData {
 }
 
 public boolean isDefinitelyWritable(int column) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
 }
 
 public String getColumnClassName(int column) throws SQLException {
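With getColumnType corrected (BOOL now reports Types.BOOLEAN, TIMESTAMP reports Types.TIMESTAMP, and so on), standard ResultSetMetaData code sees the expected JDBC types. A hedged example against the native driver; the connection URL, credentials and the d1001 table are assumptions borrowed from the documentation changes in this commit:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class MetaDataDemo {
    public static void main(String[] args) throws Exception {
        Class.forName("com.taosdata.jdbc.TSDBDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:TAOS://localhost:6030/demo", "root", "taosdata");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ts, current, voltage, phase FROM d1001 LIMIT 1")) {
            ResultSetMetaData meta = rs.getMetaData();
            for (int i = 1; i <= meta.getColumnCount(); i++) {
                // After this change a timestamp column reports Types.TIMESTAMP (93) rather than Types.BIGINT.
                System.out.println(meta.getColumnName(i) + " -> " + meta.getColumnType(i));
            }
        }
    }
}
```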
@@ -1153,11 +1153,11 @@ public class TSDBResultSetWrapper implements ResultSet {
 }
 
 public <T> T getObject(int columnIndex, Class<T> type) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
 }
 
 public <T> T getObject(String columnLabel, Class<T> type) throws SQLException {
-throw new SQLException(TSDBConstants.UNSUPPORTED_METHOD_EXCEPTION_MSG);
+throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_UNSUPPORTED_METHOD);
 }
 
 @Override
@@ -14,12 +14,11 @@
 *****************************************************************************/
 package com.taosdata.jdbc;
 
-import javax.management.OperationsException;
 import java.sql.SQLException;
 
 public class TSDBSubscribe {
-private TSDBJNIConnector connecter = null;
-private long id = 0;
+private final TSDBJNIConnector connecter;
+private final long id;
 
 TSDBSubscribe(TSDBJNIConnector connecter, long id) throws SQLException {
 if (null != connecter) {
@@ -18,10 +18,10 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet {
 private final String database;
 private final Statement statement;
 // data
-private ArrayList<ArrayList<Object>> resultSet = new ArrayList<>();
+private ArrayList<ArrayList<Object>> resultSet;
 // meta
-private ArrayList<String> columnNames = new ArrayList<>();
-private ArrayList<Field> columns = new ArrayList<>();
+private ArrayList<String> columnNames;
+private ArrayList<Field> columns;
 private RestfulResultSetMetaData metaData;
 
 /**
@@ -29,11 +29,36 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet {
 *
 * @param resultJson: 包含data信息的结果集,有sql返回的结果集
 ***/
-public RestfulResultSet(String database, Statement statement, JSONObject resultJson) {
+public RestfulResultSet(String database, Statement statement, JSONObject resultJson) throws SQLException {
 this.database = database;
 this.statement = statement;
+// column metadata
+JSONArray columnMeta = resultJson.getJSONArray("column_meta");
+columnNames = new ArrayList<>();
+columns = new ArrayList<>();
+for (int colIndex = 0; colIndex < columnMeta.size(); colIndex++) {
+JSONArray col = columnMeta.getJSONArray(colIndex);
+String col_name = col.getString(0);
+int col_type = TSDBConstants.taosType2JdbcType(col.getInteger(1));
+int col_length = col.getInteger(2);
+columnNames.add(col_name);
+columns.add(new Field(col_name, col_type, col_length, ""));
+}
+this.metaData = new RestfulResultSetMetaData(this.database, columns, this);
+
 // row data
 JSONArray data = resultJson.getJSONArray("data");
+resultSet = new ArrayList<>();
+for (int rowIndex = 0; rowIndex < data.size(); rowIndex++) {
+ArrayList row = new ArrayList();
+JSONArray jsonRow = data.getJSONArray(rowIndex);
+for (int colIndex = 0; colIndex < jsonRow.size(); colIndex++) {
+row.add(parseColumnData(jsonRow, colIndex, columns.get(colIndex).type));
+}
+resultSet.add(row);
+}
+
+/*
 int columnIndex = 0;
 for (; columnIndex < data.size(); columnIndex++) {
 ArrayList oneRow = new ArrayList<>();
@@ -52,50 +77,77 @@ public class RestfulResultSet extends AbstractResultSet implements ResultSet {
 columns.add(new Field(name, "", 0, ""));
 }
 this.metaData = new RestfulResultSetMetaData(this.database, columns, this);
+*/
 }
 
-/**
-* 由多个resultSet的JSON构造结果集
-*
-* @param resultJson: 包含data信息的结果集,有sql返回的结果集
-* @param fieldJson: 包含多个(最多2个)meta信息的结果集,有describe xxx
-**/
-public RestfulResultSet(String database, Statement statement, JSONObject resultJson, List<JSONObject> fieldJson) {
-this(database, statement, resultJson);
-ArrayList<Field> newColumns = new ArrayList<>();
-
-for (Field column : columns) {
-Field field = findField(column.name, fieldJson);
-if (field != null) {
-newColumns.add(field);
-} else {
-newColumns.add(column);
-}
-}
-this.columns = newColumns;
-this.metaData = new RestfulResultSetMetaData(this.database, this.columns, this);
-}
-
-public Field findField(String columnName, List<JSONObject> fieldJsonList) {
-for (JSONObject fieldJSON : fieldJsonList) {
-JSONArray fieldDataJson = fieldJSON.getJSONArray("data");
-for (int i = 0; i < fieldDataJson.size(); i++) {
-JSONArray field = fieldDataJson.getJSONArray(i);
-if (columnName.equalsIgnoreCase(field.getString(0))) {
-return new Field(field.getString(0), field.getString(1), field.getInteger(2), field.getString(3));
-}
-}
-}
-return null;
-}
+private Object parseColumnData(JSONArray row, int colIndex, int sqlType) {
+switch (sqlType) {
+case Types.NULL:
+return null;
+case Types.BOOLEAN:
+return row.getBoolean(colIndex);
+case Types.TINYINT:
+case Types.SMALLINT:
+return row.getShort(colIndex);
+case Types.INTEGER:
+return row.getInteger(colIndex);
+case Types.BIGINT:
+return row.getBigInteger(colIndex);
+case Types.FLOAT:
+return row.getFloat(colIndex);
+case Types.DOUBLE:
+return row.getDouble(colIndex);
+case Types.TIMESTAMP:
+return new Timestamp(row.getDate(colIndex).getTime());
+case Types.BINARY:
+case Types.NCHAR:
+default:
+return row.getString(colIndex);
+}
+}
+
+// /**
+// * 由多个resultSet的JSON构造结果集
+// *
+// * @param resultJson: 包含data信息的结果集,有sql返回的结果集
+// * @param fieldJson: 包含多个(最多2个)meta信息的结果集,有describe xxx
+// **/
+// public RestfulResultSet(String database, Statement statement, JSONObject resultJson, List<JSONObject> fieldJson) throws SQLException {
+// this(database, statement, resultJson);
+// ArrayList<Field> newColumns = new ArrayList<>();
+//
+// for (Field column : columns) {
+// Field field = findField(column.name, fieldJson);
+// if (field != null) {
+// newColumns.add(field);
+// } else {
+// newColumns.add(column);
+// }
+// }
+// this.columns = newColumns;
+// this.metaData = new RestfulResultSetMetaData(this.database, this.columns, this);
+// }
+
+// public Field findField(String columnName, List<JSONObject> fieldJsonList) {
+// for (JSONObject fieldJSON : fieldJsonList) {
+// JSONArray fieldDataJson = fieldJSON.getJSONArray("data");
+// for (int i = 0; i < fieldDataJson.size(); i++) {
+// JSONArray field = fieldDataJson.getJSONArray(i);
+// if (columnName.equalsIgnoreCase(field.getString(0))) {
+// return new Field(field.getString(0), field.getString(1), field.getInteger(2), field.getString(3));
+// }
+// }
+// }
+// return null;
+// }
 
 public class Field {
 String name;
-String type;
+int type;
 int length;
 String note;
 
-public Field(String name, String type, int length, String note) {
+public Field(String name, int type, int length, String note) {
 this.name = name;
 this.type = type;
 this.length = length;
@@ -5,6 +5,7 @@ import com.taosdata.jdbc.TSDBConstants;
 import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.sql.Timestamp;
+import java.sql.Types;
 import java.util.ArrayList;
 
 public class RestfulResultSetMetaData implements ResultSetMetaData {
@@ -53,14 +54,14 @@ public class RestfulResultSetMetaData implements ResultSetMetaData {
 
 @Override
 public boolean isSigned(int column) throws SQLException {
-String type = this.fields.get(column - 1).type.toUpperCase();
+int type = this.fields.get(column - 1).type;
 switch (type) {
-case "TINYINT":
-case "SMALLINT":
-case "INT":
-case "BIGINT":
-case "FLOAT":
-case "DOUBLE":
+case Types.TINYINT:
+case Types.SMALLINT:
+case Types.INTEGER:
+case Types.BIGINT:
+case Types.FLOAT:
+case Types.DOUBLE:
 return true;
 default:
 return false;
@@ -89,14 +90,14 @@ public class RestfulResultSetMetaData implements ResultSetMetaData {
 
 @Override
 public int getPrecision(int column) throws SQLException {
-String type = this.fields.get(column - 1).type.toUpperCase();
+int type = this.fields.get(column - 1).type;
 switch (type) {
-case "FLOAT":
+case Types.FLOAT:
 return 5;
-case "DOUBLE":
+case Types.DOUBLE:
 return 9;
-case "BINARY":
-case "NCHAR":
+case Types.BINARY:
+case Types.NCHAR:
 return this.fields.get(column - 1).length;
 default:
 return 0;
|
||||||
|
|
||||||
@Override
|
@Override
|
||||||
public int getScale(int column) throws SQLException {
|
public int getScale(int column) throws SQLException {
|
||||||
String type = this.fields.get(column - 1).type.toUpperCase();
|
int type = this.fields.get(column - 1).type;
|
||||||
switch (type) {
|
switch (type) {
|
||||||
case "FLOAT":
|
case Types.FLOAT:
|
||||||
return 5;
|
return 5;
|
||||||
case "DOUBLE":
|
case Types.DOUBLE:
|
||||||
return 9;
|
return 9;
|
||||||
default:
|
default:
|
||||||
return 0;
|
return 0;
|
||||||
|
@@ -128,36 +129,13 @@ public class RestfulResultSetMetaData implements ResultSetMetaData {
 
 @Override
 public int getColumnType(int column) throws SQLException {
-String type = this.fields.get(column - 1).type.toUpperCase();
-switch (type) {
-case "BOOL":
-return java.sql.Types.BOOLEAN;
-case "TINYINT":
-return java.sql.Types.TINYINT;
-case "SMALLINT":
-return java.sql.Types.SMALLINT;
-case "INT":
-return java.sql.Types.INTEGER;
-case "BIGINT":
-return java.sql.Types.BIGINT;
-case "FLOAT":
-return java.sql.Types.FLOAT;
-case "DOUBLE":
-return java.sql.Types.DOUBLE;
-case "BINARY":
-return java.sql.Types.BINARY;
-case "TIMESTAMP":
-return java.sql.Types.TIMESTAMP;
-case "NCHAR":
-return java.sql.Types.NCHAR;
-}
-throw new SQLException(TSDBConstants.INVALID_VARIABLES);
+return this.fields.get(column - 1).type;
 }
 
 @Override
 public String getColumnTypeName(int column) throws SQLException {
-String type = fields.get(column - 1).type;
-return type.toUpperCase();
+int type = fields.get(column - 1).type;
+return TSDBConstants.jdbcType2TaosTypeName(type);
 }
 
 @Override
@@ -177,26 +155,26 @@ public class RestfulResultSetMetaData implements ResultSetMetaData {
 
 @Override
 public String getColumnClassName(int column) throws SQLException {
-String type = this.fields.get(column - 1).type;
+int type = this.fields.get(column - 1).type;
 String columnClassName = "";
 switch (type) {
-case "BOOL":
+case Types.BOOLEAN:
 return Boolean.class.getName();
-case "TINYINT":
-case "SMALLINT":
+case Types.TINYINT:
+case Types.SMALLINT:
 return Short.class.getName();
-case "INT":
+case Types.INTEGER:
 return Integer.class.getName();
-case "BIGINT":
+case Types.BIGINT:
 return Long.class.getName();
-case "FLOAT":
+case Types.FLOAT:
 return Float.class.getName();
-case "DOUBLE":
+case Types.DOUBLE:
 return Double.class.getName();
-case "TIMESTAMP":
+case Types.TIMESTAMP:
 return Timestamp.class.getName();
-case "BINARY":
-case "NCHAR":
+case Types.BINARY:
+case Types.NCHAR:
 return String.class.getName();
 }
 return columnClassName;
@@ -151,22 +151,21 @@ public class RestfulStatement extends AbstractStatement {
 throw new SQLException(TSDBConstants.WrapErrMsg("SQL execution error: " + resultJson.getString("desc") + "\n" + "error code: " + resultJson.getString("code")));
 }
 // parse table name from sql
-String[] tableIdentifiers = parseTableIdentifier(sql);
-if (tableIdentifiers != null) {
-List<JSONObject> fieldJsonList = new ArrayList<>();
-for (String tableIdentifier : tableIdentifiers) {
-// field meta
-String fields = HttpClientPoolUtil.execute(url, "DESCRIBE " + tableIdentifier);
-JSONObject fieldJson = JSON.parseObject(fields);
-if (fieldJson.getString("status").equals("error")) {
-throw new SQLException(TSDBConstants.WrapErrMsg("SQL execution error: " + fieldJson.getString("desc") + "\n" + "error code: " + fieldJson.getString("code")));
-}
-fieldJsonList.add(fieldJson);
-}
-this.resultSet = new RestfulResultSet(database, this, resultJson, fieldJsonList);
-} else {
-this.resultSet = new RestfulResultSet(database, this, resultJson);
-}
+// String[] tableIdentifiers = parseTableIdentifier(sql);
+// if (tableIdentifiers != null) {
+// List<JSONObject> fieldJsonList = new ArrayList<>();
+// for (String tableIdentifier : tableIdentifiers) {
+// String fields = HttpClientPoolUtil.execute(url, "DESCRIBE " + tableIdentifier);
+// JSONObject fieldJson = JSON.parseObject(fields);
+// if (fieldJson.getString("status").equals("error")) {
+// throw new SQLException(TSDBConstants.WrapErrMsg("SQL execution error: " + fieldJson.getString("desc") + "\n" + "error code: " + fieldJson.getString("code")));
+// }
+// fieldJsonList.add(fieldJson);
+// }
+// this.resultSet = new RestfulResultSet(database, this, resultJson, fieldJsonList);
+// } else {
+this.resultSet = new RestfulResultSet(database, this, resultJson);
+// }
 this.affectedRows = 0;
 return resultSet;
 }
@ -201,7 +200,7 @@ public class RestfulStatement extends AbstractStatement {
|
||||||
@Override
|
@Override
|
||||||
public ResultSet getResultSet() throws SQLException {
|
public ResultSet getResultSet() throws SQLException {
|
||||||
if (isClosed())
|
if (isClosed())
|
||||||
throw new SQLException(TSDBConstants.STATEMENT_CLOSED);
|
throw TSDBError.createSQLException(TSDBErrorNumbers.ERROR_STATEMENT_CLOSED);
|
||||||
return resultSet;
|
return resultSet;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|
|
@@ -13,7 +13,6 @@ import java.util.HashMap;
 import java.util.Properties;

 import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;

 public class ResultSetTest {
     static Connection connection;
@@ -48,29 +48,28 @@ public class SubscribeTest {
    @Test
    public void subscribe() {
        try {
            String rawSql = "select * from " + dbName + "." + tName + ";";
            System.out.println(rawSql);
-            TSDBSubscribe subscribe = ((TSDBConnection) connection).subscribe(topic, rawSql, false);
-
-            int a = 0;
-            while (true) {
-                TimeUnit.MILLISECONDS.sleep(1000);
-                TSDBResultSet resSet = subscribe.consume();
-                while (resSet.next()) {
-                    for (int i = 1; i <= resSet.getMetaData().getColumnCount(); i++) {
-                        System.out.printf(i + ": " + resSet.getString(i) + "\t");
-                    }
-                    System.out.println("\n======" + a + "==========");
-                }
-                a++;
-                if (a >= 2) {
-                    break;
-                }
-                // resSet.close();
-            }
-
-            subscribe.close(true);
+//            TSDBSubscribe subscribe = ((TSDBConnection) connection).subscribe(topic, rawSql, false);
+//            int a = 0;
+//            while (true) {
+//                TimeUnit.MILLISECONDS.sleep(1000);
+//                TSDBResultSet resSet = subscribe.consume();
+//                while (resSet.next()) {
+//                    for (int i = 1; i <= resSet.getMetaData().getColumnCount(); i++) {
+//                        System.out.printf(i + ": " + resSet.getString(i) + "\t");
+//                    }
+//                    System.out.println("\n======" + a + "==========");
+//                }
+//                a++;
+//                if (a >= 2) {
+//                    break;
+//                }
+//                // resSet.close();
+//            }
+//
+//            subscribe.close(true);
        } catch (Exception e) {
            e.printStackTrace();
        }
@@ -10,7 +10,7 @@ import java.util.Random;
 public class RestfulJDBCTest {

     private static final String host = "127.0.0.1";
     // private static final String host = "master";
     private static Connection connection;
     private Random random = new Random(System.currentTimeMillis());
@@ -12,7 +12,7 @@ import java.sql.*;
 @FixMethodOrder(MethodSorters.NAME_ASCENDING)
 public class SQLTest {
     private static final String host = "127.0.0.1";
     // private static final String host = "master";
     private static Connection connection;

     @Test

@@ -323,6 +323,18 @@ public class SQLTest {
        SQLExecutor.executeQuery(connection, sql);
    }

+    @Test
+    public void testCase052() {
+        String sql = "select server_status()";
+        SQLExecutor.executeQuery(connection, sql);
+    }
+
+    @Test
+    public void testCase053() {
+        String sql = "select avg(cpu_taosd), avg(cpu_system), max(cpu_cores), avg(mem_taosd), avg(mem_system), max(mem_total), avg(disk_used), max(disk_total), avg(band_speed), avg(io_read), avg(io_write), sum(req_http), sum(req_select), sum(req_insert) from log.dn1 where ts> now - 60m and ts<= now interval(1m) fill(value, 0)";
+        SQLExecutor.executeQuery(connection, sql);
+    }
+
    @BeforeClass
    public static void before() throws ClassNotFoundException, SQLException {
        Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
@@ -29,7 +29,7 @@ typedef struct {
 static SCheckItem tsCheckItem[TSDB_CHECK_ITEM_MAX] = {{0}};
 int64_t tsMinFreeMemSizeForStart = 0;

-static int32_t bindTcpPort(int16_t port) {
+static int32_t bindTcpPort(uint16_t port) {
   SOCKET serverSocket;
   struct sockaddr_in server_addr;

@@ -85,9 +85,9 @@ static int32_t bindUdpPort(int16_t port) {

 static int32_t dnodeCheckNetwork() {
   int32_t ret;
-  int16_t startPort = tsServerPort;
+  uint16_t startPort = tsServerPort;

-  for (int16_t port = startPort; port < startPort + 12; port++) {
+  for (uint16_t port = startPort; port < startPort + 12; port++) {
     ret = bindTcpPort(port);
     if (0 != ret) {
       dError("failed to tcp bind port %d, quit", port);
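Why the int16_t to uint16_t switch above matters: TCP/UDP ports go up to 65535, so any port above 32767 overflows a signed 16-bit value and the `startPort + 12` probe loop can misbehave. A standalone C sketch (not the dnode code) that makes the wrap-around visible:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  // 40000 is a legal server port but does not fit in a signed 16-bit int.
  int16_t  bad  = (int16_t)40000;   // implementation-defined; typically wraps to -25536
  uint16_t good = (uint16_t)40000;  // keeps the intended value

  printf("int16_t : %d\n", bad);
  printf("uint16_t: %u\n", good);
  return 0;
}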
@@ -286,7 +286,7 @@ do { \
 #define TSDB_MAX_COMP_LEVEL     2
 #define TSDB_DEFAULT_COMP_LEVEL 2

-#define TSDB_MIN_WAL_LEVEL      1
+#define TSDB_MIN_WAL_LEVEL      0
 #define TSDB_MAX_WAL_LEVEL      2
 #define TSDB_DEFAULT_WAL_LEVEL  1
@@ -8,10 +8,12 @@
     "thread_count": 4,
     "thread_count_create_tbl": 4,
     "result_file": "./insert_res.txt",
     "confirm_parameter_prompt": "no",
+    "insert_interval": 0,
+    "num_of_records_per_req": 100,
     "databases": [{
         "dbinfo": {
-            "name": "dbx",
+            "name": "db",
             "drop": "yes",
             "replica": 1,
             "days": 10,

@@ -30,14 +32,13 @@
         },
         "super_tables": [{
             "name": "stb",
             "child_table_exists":"no",
             "childtable_count": 100,
             "childtable_prefix": "stb_",
             "auto_create_table": "no",
             "data_source": "rand",
             "insert_mode": "taosc",
-            "insert_rate": 0,
-            "insert_rows": 1000,
+            "insert_rows": 100000,
             "multi_thread_write_one_tbl": "no",
             "number_of_tbl_in_one_sql": 0,
             "rows_per_tbl": 100,
(File diff suppressed because it is too large.)
@@ -832,12 +832,13 @@ static int32_t mnodeProcessBatchCreateTableMsg(SMnodeMsg *pMsg) {
     return code;
   } else if (code != TSDB_CODE_MND_ACTION_IN_PROGRESS) {
     ++pMsg->pBatchMasterMsg->received;
+    pMsg->pBatchMasterMsg->code = code;
     mnodeDestroySubMsg(pMsg);
   }

   if (pMsg->pBatchMasterMsg->successed + pMsg->pBatchMasterMsg->received
       >= pMsg->pBatchMasterMsg->expected) {
-    dnodeSendRpcMWriteRsp(pMsg->pBatchMasterMsg, TSDB_CODE_SUCCESS);
+    dnodeSendRpcMWriteRsp(pMsg->pBatchMasterMsg, pMsg->pBatchMasterMsg->code);
   }

   return TSDB_CODE_MND_ACTION_IN_PROGRESS;

@@ -916,11 +917,13 @@ static int32_t mnodeProcessDropTableMsg(SMnodeMsg *pMsg) {
     return TSDB_CODE_MND_DB_IN_DROPPING;
   }

+#if 0
   if (mnodeCheckIsMonitorDB(pMsg->pDb->name, tsMonitorDbName)) {
     mError("msg:%p, app:%p table:%s, failed to drop table, in monitor database", pMsg, pMsg->rpcMsg.ahandle,
            pDrop->name);
     return TSDB_CODE_MND_MONITOR_DB_FORBIDDEN;
   }
+#endif

   if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pDrop->name);
   if (pMsg->pTable == NULL) {

@@ -1906,7 +1909,8 @@ static int32_t mnodeDoCreateChildTableCb(SMnodeMsg *pMsg, int32_t code) {
     sdbDeleteRow(&desc);

     if (pMsg->pBatchMasterMsg) {
-      ++pMsg->pBatchMasterMsg->successed;
+      ++pMsg->pBatchMasterMsg->received;
+      pMsg->pBatchMasterMsg->code = code;
       if (pMsg->pBatchMasterMsg->successed + pMsg->pBatchMasterMsg->received
           >= pMsg->pBatchMasterMsg->expected) {
         dnodeSendRpcMWriteRsp(pMsg->pBatchMasterMsg, code);

@@ -2688,6 +2692,7 @@ static void mnodeProcessCreateChildTableRsp(SRpcMsg *rpcMsg) {

     if (pMsg->pBatchMasterMsg) {
       ++pMsg->pBatchMasterMsg->received;
+      pMsg->pBatchMasterMsg->code = code;
       if (pMsg->pBatchMasterMsg->successed + pMsg->pBatchMasterMsg->received
           >= pMsg->pBatchMasterMsg->expected) {
         dnodeSendRpcMWriteRsp(pMsg->pBatchMasterMsg, code);

@@ -2726,6 +2731,7 @@ static void mnodeProcessCreateChildTableRsp(SRpcMsg *rpcMsg) {

     if (pMsg->pBatchMasterMsg) {
       ++pMsg->pBatchMasterMsg->received;
+      pMsg->pBatchMasterMsg->code = rpcMsg->code;
       if (pMsg->pBatchMasterMsg->successed + pMsg->pBatchMasterMsg->received
           >= pMsg->pBatchMasterMsg->expected) {
         dnodeSendRpcMWriteRsp(pMsg->pBatchMasterMsg, rpcMsg->code);

@@ -3020,10 +3026,12 @@ static int32_t mnodeProcessAlterTableMsg(SMnodeMsg *pMsg) {
     return TSDB_CODE_MND_DB_IN_DROPPING;
   }

+#if 0
   if (mnodeCheckIsMonitorDB(pMsg->pDb->name, tsMonitorDbName)) {
     mError("msg:%p, app:%p table:%s, failed to alter table, its log db", pMsg, pMsg->rpcMsg.ahandle, pAlter->tableFname);
     return TSDB_CODE_MND_MONITOR_DB_FORBIDDEN;
   }
+#endif

   if (pMsg->pTable == NULL) pMsg->pTable = mnodeGetTable(pAlter->tableFname);
   if (pMsg->pTable == NULL) {
@@ -537,6 +537,7 @@ static int32_t mnodeCreateVgroupCb(SMnodeMsg *pMsg, int32_t code) {

   if (pMsg->pBatchMasterMsg) {
     ++pMsg->pBatchMasterMsg->received;
+    pMsg->pBatchMasterMsg->code = pMsg->code;
     if (pMsg->pBatchMasterMsg->successed + pMsg->pBatchMasterMsg->received
         >= pMsg->pBatchMasterMsg->expected) {
       dnodeSendRpcMWriteRsp(pMsg->pBatchMasterMsg, pMsg->code);

@@ -1002,6 +1003,7 @@ static void mnodeProcessCreateVnodeRsp(SRpcMsg *rpcMsg) {

   if (mnodeMsg->pBatchMasterMsg) {
     ++mnodeMsg->pBatchMasterMsg->received;
+    mnodeMsg->pBatchMasterMsg->code = code;
     if (mnodeMsg->pBatchMasterMsg->successed + mnodeMsg->pBatchMasterMsg->received
         >= mnodeMsg->pBatchMasterMsg->expected) {
       dnodeSendRpcMWriteRsp(mnodeMsg->pBatchMasterMsg, code);

@@ -1024,6 +1026,7 @@ static void mnodeProcessCreateVnodeRsp(SRpcMsg *rpcMsg) {

   if (mnodeMsg->pBatchMasterMsg) {
     ++mnodeMsg->pBatchMasterMsg->received;
+    mnodeMsg->pBatchMasterMsg->code = mnodeMsg->code;
     if (mnodeMsg->pBatchMasterMsg->successed + mnodeMsg->pBatchMasterMsg->received
         >= mnodeMsg->pBatchMasterMsg->expected) {
       dnodeSendRpcMWriteRsp(mnodeMsg->pBatchMasterMsg, mnodeMsg->code);
@@ -83,6 +83,20 @@ extern "C" {
   }                                       \
 } while (0)

+#define DEFAULT_DOUBLE_COMP(x, y)               \
+  do {                                          \
+    if (isnan(x) && isnan(y)) { return 0; }     \
+    if (isnan(x)) { return -1; }                \
+    if (isnan(y)) { return 1; }                 \
+    if ((x) == (y)) {                           \
+      return 0;                                 \
+    } else {                                    \
+      return (x) < (y) ? -1 : 1;                \
+    }                                           \
+  } while (0)
+
+#define DEFAULT_FLOAT_COMP(x, y) DEFAULT_DOUBLE_COMP(x, y)
+
 #define ALIGN_NUM(n, align) (((n) + ((align)-1)) & (~((align)-1)))

 // align to 8bytes
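The new DEFAULT_DOUBLE_COMP / DEFAULT_FLOAT_COMP macros give NaN a deterministic order instead of letting it break comparisons (every `<` or `==` involving NaN is false). A small self-contained C function with the same logic, shown only as an illustration of the macro's behavior:

#include <math.h>
#include <stdio.h>

/* NaN sorts before every real number; two NaNs compare equal. */
static int compare_double_nan_aware(double x, double y) {
  if (isnan(x) && isnan(y)) return 0;
  if (isnan(x)) return -1;
  if (isnan(y)) return 1;
  if (x == y) return 0;
  return (x < y) ? -1 : 1;
}

int main(void) {
  printf("%d\n", compare_double_nan_aware(NAN, 1.0));  /* -1 */
  printf("%d\n", compare_double_nan_aware(1.0, NAN));  /*  1 */
  printf("%d\n", compare_double_nan_aware(NAN, NAN));  /*  0 */
  printf("%d\n", compare_double_nan_aware(2.0, 1.0));  /*  1 */
  return 0;
}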
@@ -34,6 +34,8 @@
 #define REST_JSON_DATA_LEN        4
 #define REST_JSON_HEAD            "head"
 #define REST_JSON_HEAD_LEN        4
+#define REST_JSON_HEAD_INFO       "column_meta"
+#define REST_JSON_HEAD_INFO_LEN   11
 #define REST_JSON_ROWS            "rows"
 #define REST_JSON_ROWS_LEN        4
 #define REST_JSON_AFFECT_ROWS     "affected_rows"

@@ -51,4 +53,4 @@ bool restBuildSqlLocalTimeStringJson
 bool restBuildSqlUtcTimeStringJson(HttpContext *pContext, HttpSqlCmd *cmd, TAOS_RES *result, int32_t numOfRows);
 void restStopSqlJson(HttpContext *pContext, HttpSqlCmd *cmd);

 #endif
@@ -59,7 +59,9 @@ void httpDispatchToResultQueue
     pMsg->fp = fp;
     taosWriteQitem(tsHttpQueue, TAOS_QTYPE_RPC, pMsg);
   } else {
-    (*fp)(param, result, code, rows);
+    taos_stop_query(result);
+    taos_free_result(result);
+    //(*fp)(param, result, code, rows);
   }
 }
@@ -75,6 +75,44 @@ void restStartSqlJson(HttpContext *pContext, HttpSqlCmd *cmd, TAOS_RES *result)
   // head array end
   httpJsonToken(jsonBuf, JsonArrEnd);

+  // column_meta begin
+  httpJsonItemToken(jsonBuf);
+  httpJsonPairHead(jsonBuf, REST_JSON_HEAD_INFO, REST_JSON_HEAD_INFO_LEN);
+  // column_meta array begin
+  httpJsonItemToken(jsonBuf);
+  httpJsonToken(jsonBuf, JsonArrStt);
+
+  if (num_fields == 0) {
+    httpJsonItemToken(jsonBuf);
+    httpJsonToken(jsonBuf, JsonArrStt);
+
+    httpJsonItemToken(jsonBuf);
+    httpJsonString(jsonBuf, REST_JSON_AFFECT_ROWS, REST_JSON_AFFECT_ROWS_LEN);
+    httpJsonItemToken(jsonBuf);
+    httpJsonInt(jsonBuf, TSDB_DATA_TYPE_INT);
+    httpJsonItemToken(jsonBuf);
+    httpJsonInt(jsonBuf, 4);
+
+    httpJsonToken(jsonBuf, JsonArrEnd);
+  } else {
+    for (int32_t i = 0; i < num_fields; ++i) {
+      httpJsonItemToken(jsonBuf);
+      httpJsonToken(jsonBuf, JsonArrStt);
+
+      httpJsonItemToken(jsonBuf);
+      httpJsonString(jsonBuf, fields[i].name, (int32_t)strlen(fields[i].name));
+      httpJsonItemToken(jsonBuf);
+      httpJsonInt(jsonBuf, fields[i].type);
+      httpJsonItemToken(jsonBuf);
+      httpJsonInt(jsonBuf, fields[i].bytes);
+
+      httpJsonToken(jsonBuf, JsonArrEnd);
+    }
+  }
+
+  // column_meta array end
+  httpJsonToken(jsonBuf, JsonArrEnd);
+
   // data begin
   httpJsonItemToken(jsonBuf);
   httpJsonPairHead(jsonBuf, REST_JSON_DATA, REST_JSON_DATA_LEN);
@@ -362,20 +362,10 @@ static FORCE_INLINE int32_t columnValueAscendingComparator(char *f1, char *f2, int32_t type, ...)
       return (first < second) ? -1 : 1;
     };
     case TSDB_DATA_TYPE_DOUBLE: {
-      double first = GET_DOUBLE_VAL(f1);
-      double second = GET_DOUBLE_VAL(f2);
-      if (first == second) {
-        return 0;
-      }
-      return (first < second) ? -1 : 1;
+      DEFAULT_DOUBLE_COMP(GET_DOUBLE_VAL(f1), GET_DOUBLE_VAL(f2));
     };
     case TSDB_DATA_TYPE_FLOAT: {
-      float first = GET_FLOAT_VAL(f1);
-      float second = GET_FLOAT_VAL(f2);
-      if (first == second) {
-        return 0;
-      }
-      return (first < second) ? -1 : 1;
+      DEFAULT_FLOAT_COMP(GET_FLOAT_VAL(f1), GET_FLOAT_VAL(f2));
     };
     case TSDB_DATA_TYPE_BIGINT: {
       int64_t first = *(int64_t *)f1;
@@ -58,6 +58,15 @@ SSqlInfo qSQLParse(const char *pStr) {
         sqlInfo.valid = false;
         goto abort_parse;
       }

+      case TK_HEX:
+      case TK_OCT:
+      case TK_BIN: {
+        snprintf(sqlInfo.msg, tListLen(sqlInfo.msg), "unsupported token: \"%s\"", t0.z);
+        sqlInfo.valid = false;
+        goto abort_parse;
+      }
+
       default:
         Parse(pParser, t0.type, t0, &sqlInfo);
         if (sqlInfo.valid == false) {
@@ -48,7 +48,7 @@ tMemBucket *createUnsignedDataBucket(int32_t start, int32_t end, int32_t type) {
     uint64_t k = i;
     int32_t ret = tMemBucketPut(pBucket, &k, 1);
     if (ret != 0) {
-      printf("value out of range:%f", k);
+      printf("value out of range:%" PRId64, k);
     }
   }

@@ -245,7 +245,7 @@ void unsignedDataTest() {
 }  // namespace

 TEST(testCase, percentileTest) {
   //  qsortTest();
   intDataTest();
   bigintDataTest();
   doubleDataTest();
@@ -227,10 +227,10 @@ TEST(testCase, db_table_name) {
   EXPECT_EQ(testValidateName(t60_1), TSDB_CODE_TSC_INVALID_SQL);

   char t61[] = "' ABC '";
-  EXPECT_EQ(testValidateName(t61), TSDB_CODE_SUCCESS);
+  EXPECT_EQ(testValidateName(t61), TSDB_CODE_TSC_INVALID_SQL);

   char t61_1[] = "' ABC '";
-  EXPECT_EQ(testValidateName(t61_1), TSDB_CODE_SUCCESS);
+  EXPECT_EQ(testValidateName(t61_1), TSDB_CODE_TSC_INVALID_SQL);

   char t62[] = " ABC . def ";
   EXPECT_EQ(testValidateName(t62), TSDB_CODE_TSC_INVALID_SQL);

@@ -249,13 +249,13 @@ TEST(testCase, db_table_name) {
   EXPECT_EQ(testValidateName(t65), TSDB_CODE_TSC_INVALID_SQL);

   char t66[] = "' ABC '.' DEF '";
-  EXPECT_EQ(testValidateName(t66), TSDB_CODE_SUCCESS);
+  EXPECT_EQ(testValidateName(t66), TSDB_CODE_TSC_INVALID_SQL);

   char t67[] = "abc . ' DEF '";
   EXPECT_EQ(testValidateName(t67), TSDB_CODE_TSC_INVALID_SQL);

   char t68[] = "' abc '.' DEF '";
-  EXPECT_EQ(testValidateName(t68), TSDB_CODE_SUCCESS);
+  EXPECT_EQ(testValidateName(t68), TSDB_CODE_TSC_INVALID_SQL);

   // do not use key words
   char t69[] = "table.'DEF'";

@@ -265,7 +265,7 @@ TEST(testCase, db_table_name) {
   EXPECT_EQ(testValidateName(t70), TSDB_CODE_TSC_INVALID_SQL);

   char t71[] = "'_abXYZ1234 '.' deFF '";
-  EXPECT_EQ(testValidateName(t71), TSDB_CODE_SUCCESS);
+  EXPECT_EQ(testValidateName(t71), TSDB_CODE_TSC_INVALID_SQL);

   char t72[] = "'_abDEF&^%1234'.' DIef'";
   EXPECT_EQ(testValidateName(t72), TSDB_CODE_TSC_INVALID_SQL);
@@ -1281,7 +1281,7 @@ static void rpcSendReqToServer(SRpcInfo *pRpc, SRpcReqContext *pContext) {
   SRpcConn *pConn = rpcSetupConnToServer(pContext);
   if (pConn == NULL) {
     pContext->code = terrno;
-    taosTmrStart(rpcProcessConnError, 0, pContext, pRpc->tmrCtrl);
+    taosTmrStart(rpcProcessConnError, 1, pContext, pRpc->tmrCtrl);
     return;
   }
@@ -37,6 +37,7 @@ typedef struct {
   TSKEY      keyLast;
   int64_t    numOfRows;
   SSkipList* pData;
+  T_REF_DECLARE()
 } STableData;

 typedef struct {

@@ -76,7 +77,7 @@ typedef struct {

 int   tsdbRefMemTable(STsdbRepo* pRepo, SMemTable* pMemTable);
 int   tsdbUnRefMemTable(STsdbRepo* pRepo, SMemTable* pMemTable);
-int   tsdbTakeMemSnapshot(STsdbRepo* pRepo, SMemTable** pMem, SMemTable** pIMem);
+int   tsdbTakeMemSnapshot(STsdbRepo* pRepo, SMemTable** pMem, SMemTable** pIMem, SArray* pATable);
 void  tsdbUnTakeMemSnapShot(STsdbRepo* pRepo, SMemTable* pMem, SMemTable* pIMem);
 void* tsdbAllocBytes(STsdbRepo* pRepo, int bytes);
 int   tsdbAsyncCommit(STsdbRepo* pRepo);
@@ -597,7 +597,7 @@ int tsdbRestoreInfo(STsdbRepo *pRepo) {
       // Get the data in row
       ASSERT(pTable->lastRow == NULL);
       STSchema *pSchema = tsdbGetTableSchema(pTable);
-      pTable->lastRow = taosTMalloc(schemaTLen(pSchema));
+      pTable->lastRow = taosTMalloc(dataRowMaxBytesFromSchema(pSchema));
       if (pTable->lastRow == NULL) {
         terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
         tsdbDestroyReadH(&readh);
@@ -124,17 +124,66 @@ int tsdbUnRefMemTable(STsdbRepo *pRepo, SMemTable *pMemTable) {
   return 0;
 }

-int tsdbTakeMemSnapshot(STsdbRepo *pRepo, SMemTable **pMem, SMemTable **pIMem) {
+int tsdbTakeMemSnapshot(STsdbRepo *pRepo, SMemTable **pMem, SMemTable **pIMem, SArray *pATable) {
+  SMemTable *tmem;
+
+  // Get snap object
   if (tsdbLockRepo(pRepo) < 0) return -1;

-  *pMem = pRepo->mem;
+  tmem = pRepo->mem;
   *pIMem = pRepo->imem;
-  tsdbRefMemTable(pRepo, *pMem);
+  tsdbRefMemTable(pRepo, tmem);
   tsdbRefMemTable(pRepo, *pIMem);

   if (tsdbUnlockRepo(pRepo) < 0) return -1;

-  if (*pMem != NULL) taosRLockLatch(&((*pMem)->latch));
+  // Copy mem objects and ref needed STableData
+  if (tmem) {
+    taosRLockLatch(&(tmem->latch));
+
+    *pMem = (SMemTable *)calloc(1, sizeof(**pMem));
+    if (*pMem == NULL) {
+      terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
+      taosRUnLockLatch(&(tmem->latch));
+      tsdbUnRefMemTable(pRepo, tmem);
+      tsdbUnRefMemTable(pRepo, *pIMem);
+      *pMem = NULL;
+      *pIMem = NULL;
+      return -1;
+    }
+
+    (*pMem)->tData = (STableData **)calloc(tmem->maxTables, sizeof(STableData *));
+    if ((*pMem)->tData == NULL) {
+      terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
+      taosRUnLockLatch(&(tmem->latch));
+      free(*pMem);
+      tsdbUnRefMemTable(pRepo, tmem);
+      tsdbUnRefMemTable(pRepo, *pIMem);
+      *pMem = NULL;
+      *pIMem = NULL;
+      return -1;
+    }
+
+    (*pMem)->keyFirst = tmem->keyFirst;
+    (*pMem)->keyLast = tmem->keyLast;
+    (*pMem)->numOfRows = tmem->numOfRows;
+    (*pMem)->maxTables = tmem->maxTables;
+
+    for (size_t i = 0; i < taosArrayGetSize(pATable); i++) {
+      STable *    pTable = *(STable **)taosArrayGet(pATable, i);
+      int32_t     tid = TABLE_TID(pTable);
+      STableData *pTableData = (tid < tmem->maxTables) ? tmem->tData[tid] : NULL;
+
+      if ((pTableData == NULL) || (TABLE_UID(pTable) != pTableData->uid)) continue;
+
+      (*pMem)->tData[tid] = tmem->tData[tid];
+      T_REF_INC(tmem->tData[tid]);
+    }
+
+    taosRUnLockLatch(&(tmem->latch));
+  }
+
+  tsdbUnRefMemTable(pRepo, tmem);

   tsdbDebug("vgId:%d take memory snapshot, pMem %p pIMem %p", REPO_ID(pRepo), *pMem, *pIMem);
   return 0;

@@ -144,8 +193,14 @@ void tsdbUnTakeMemSnapShot(STsdbRepo *pRepo, SMemTable *pMem, SMemTable *pIMem)
   tsdbDebug("vgId:%d untake memory snapshot, pMem %p pIMem %p", REPO_ID(pRepo), pMem, pIMem);

   if (pMem != NULL) {
-    taosRUnLockLatch(&(pMem->latch));
-    tsdbUnRefMemTable(pRepo, pMem);
+    for (size_t i = 0; i < pMem->maxTables; i++) {
+      STableData *pTableData = pMem->tData[i];
+      if (pTableData) {
+        tsdbFreeTableData(pTableData);
+      }
+    }
+    free(pMem->tData);
+    free(pMem);
   }

   if (pIMem != NULL) {

@@ -436,7 +491,7 @@ static STableData *tsdbNewTableData(STsdbCfg *pCfg, STable *pTable) {
   STableData *pTableData = (STableData *)calloc(1, sizeof(*pTableData));
   if (pTableData == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
-    goto _err;
+    return NULL;
   }

   pTableData->uid = TABLE_UID(pTable);

@@ -449,20 +504,22 @@ static STableData *tsdbNewTableData(STsdbCfg *pCfg, STable *pTable) {
                                        tkeyComparFn, pCfg->update ? SL_UPDATE_DUP_KEY : SL_DISCARD_DUP_KEY, tsdbGetTsTupleKey);
   if (pTableData->pData == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
-    goto _err;
+    free(pTableData);
+    return NULL;
   }

-  return pTableData;
+  T_REF_INC(pTableData);

-_err:
-  tsdbFreeTableData(pTableData);
-  return NULL;
+  return pTableData;
 }

 static void tsdbFreeTableData(STableData *pTableData) {
   if (pTableData) {
-    tSkipListDestroy(pTableData->pData);
-    free(pTableData);
+    int32_t ref = T_REF_DEC(pTableData);
+    if (ref == 0) {
+      tSkipListDestroy(pTableData->pData);
+      free(pTableData);
+    }
   }
 }
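The snapshot code above moves STableData to reference counting: a query snapshot takes a reference on each table it shares (T_REF_INC) and tsdbFreeTableData only frees the object once the last holder releases it. A minimal, hypothetical sketch of that pattern in plain C11 (not the TDengine T_REF macros):

#include <stdatomic.h>
#include <stdlib.h>

typedef struct {
  atomic_int ref;
  /* table rows / skiplist would live here */
} table_data_t;

static table_data_t *table_data_new(void) {
  table_data_t *p = calloc(1, sizeof(*p));
  if (p) atomic_store(&p->ref, 1);          /* creator holds the first reference */
  return p;
}

static void table_data_ref(table_data_t *p) { atomic_fetch_add(&p->ref, 1); }

static void table_data_unref(table_data_t *p) {
  if (atomic_fetch_sub(&p->ref, 1) == 1) {  /* last holder frees the object */
    free(p);
  }
}

int main(void) {
  table_data_t *td = table_data_new();
  table_data_ref(td);    /* a query snapshot shares the object */
  table_data_unref(td);  /* snapshot released */
  table_data_unref(td);  /* owner released: freed here */
  return 0;
}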
@@ -187,13 +187,15 @@ static SArray* getDefaultLoadColumns(STsdbQueryHandle* pQueryHandle, bool loadTS)
   return pLocalIdList;
 }

-static void tsdbMayTakeMemSnapshot(STsdbQueryHandle* pQueryHandle) {
+static void tsdbMayTakeMemSnapshot(STsdbQueryHandle* pQueryHandle, SArray* psTable) {
   assert(pQueryHandle != NULL && pQueryHandle->pMemRef != NULL);

   SMemRef* pMemRef = pQueryHandle->pMemRef;
   if (pQueryHandle->pMemRef->ref++ == 0) {
-    tsdbTakeMemSnapshot(pQueryHandle->pTsdb, (SMemTable**)&(pMemRef->mem), (SMemTable**)&(pMemRef->imem));
+    tsdbTakeMemSnapshot(pQueryHandle->pTsdb, (SMemTable**)&(pMemRef->mem), (SMemTable**)&(pMemRef->imem), psTable);
   }
+
+  taosArrayDestroy(psTable);
 }

 static void tsdbMayUnTakeMemSnapshot(STsdbQueryHandle* pQueryHandle) {

@@ -242,7 +244,7 @@ int64_t tsdbGetNumOfRowsInMemTable(TsdbQueryHandleT* pHandle) {
   return rows;
 }

-static SArray* createCheckInfoFromTableGroup(STsdbQueryHandle* pQueryHandle, STableGroupInfo* pGroupList, STsdbMeta* pMeta) {
+static SArray* createCheckInfoFromTableGroup(STsdbQueryHandle* pQueryHandle, STableGroupInfo* pGroupList, STsdbMeta* pMeta, SArray** psTable) {
   size_t sizeOfGroup = taosArrayGetSize(pGroupList->pGroupList);
   assert(sizeOfGroup >= 1 && pMeta != NULL);

@@ -252,6 +254,12 @@ static SArray* createCheckInfoFromTableGroup
     return NULL;
   }

+  SArray* pTable = taosArrayInit(4, sizeof(STable*));
+  if (pTable == NULL) {
+    taosArrayDestroy(pTableCheckInfo);
+    return NULL;
+  }
+
   // todo apply the lastkey of table check to avoid to load header file
   for (int32_t i = 0; i < sizeOfGroup; ++i) {
     SArray* group = *(SArray**) taosArrayGet(pGroupList->pGroupList, i);

@@ -284,24 +292,40 @@ static SArray* createCheckInfoFromTableGroup
   }

   taosArraySort(pTableCheckInfo, tsdbCheckInfoCompar);
+
+  size_t gsize = taosArrayGetSize(pTableCheckInfo);
+
+  for (int32_t i = 0; i < gsize; ++i) {
+    STableCheckInfo* pInfo = (STableCheckInfo*) taosArrayGet(pTableCheckInfo, i);
+    taosArrayPush(pTable, &pInfo->pTableObj);
+  }
+
+  *psTable = pTable;
+
   return pTableCheckInfo;
 }

-static SArray* createCheckInfoFromCheckInfo(SArray* pTableCheckInfo, TSKEY skey) {
+static SArray* createCheckInfoFromCheckInfo(SArray* pTableCheckInfo, TSKEY skey, SArray** psTable) {
   size_t si = taosArrayGetSize(pTableCheckInfo);
   SArray* pNew = taosArrayInit(si, sizeof(STableCheckInfo));
   if (pNew == NULL) {
     return NULL;
   }

+  SArray* pTable = taosArrayInit(si, sizeof(STable*));
+
   for (int32_t j = 0; j < si; ++j) {
     STableCheckInfo* pCheckInfo = (STableCheckInfo*) taosArrayGet(pTableCheckInfo, j);
     STableCheckInfo info = { .lastKey = skey, .pTableObj = pCheckInfo->pTableObj};

     info.tableId = pCheckInfo->tableId;
     taosArrayPush(pNew, &info);
+    taosArrayPush(pTable, &pCheckInfo->pTableObj);
   }

+  *psTable = pTable;
+
   // it is ordered already, no need to sort again.
   taosArraySort(pNew, tsdbCheckInfoCompar);
   return pNew;

@@ -332,7 +356,7 @@ static STsdbQueryHandle* tsdbQueryTablesImpl(STsdbRepo* tsdb, STsdbQueryCond* pCond, ...)
     goto out_of_memory;
   }

-  tsdbMayTakeMemSnapshot(pQueryHandle);
+  //tsdbMayTakeMemSnapshot(pQueryHandle);
   assert(pCond != NULL && pCond->numOfCols > 0 && pMemRef != NULL);

   if (ASCENDING_TRAVERSE(pCond->order)) {

@@ -393,14 +417,18 @@ TsdbQueryHandleT* tsdbQueryTables(STsdbRepo* tsdb, STsdbQueryCond* pCond, ...)
   STsdbMeta* pMeta = tsdbGetMeta(tsdb);
   assert(pMeta != NULL);

+  SArray* psTable = NULL;
+
   // todo apply the lastkey of table check to avoid to load header file
-  pQueryHandle->pTableCheckInfo = createCheckInfoFromTableGroup(pQueryHandle, groupList, pMeta);
+  pQueryHandle->pTableCheckInfo = createCheckInfoFromTableGroup(pQueryHandle, groupList, pMeta, &psTable);
   if (pQueryHandle->pTableCheckInfo == NULL) {
     tsdbCleanupQueryHandle(pQueryHandle);
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
     return NULL;
   }

+  tsdbMayTakeMemSnapshot(pQueryHandle, psTable);
+
   tsdbDebug("%p total numOfTable:%" PRIzu " in query, %p", pQueryHandle, taosArrayGetSize(pQueryHandle->pTableCheckInfo), pQueryHandle->qinfo);
   return (TsdbQueryHandleT) pQueryHandle;
 }

@@ -2337,12 +2365,18 @@ static int32_t doGetExternalRow(STsdbQueryHandle* pQueryHandle, int16_t type, ...)
   pSecQueryHandle = tsdbQueryTablesImpl(pQueryHandle->pTsdb, &cond, pQueryHandle->qinfo, pMemRef);

   tfree(cond.colList);
-  pSecQueryHandle->pTableCheckInfo = createCheckInfoFromCheckInfo(pQueryHandle->pTableCheckInfo, pSecQueryHandle->window.skey);
+
+  SArray* psTable = NULL;
+
+  pSecQueryHandle->pTableCheckInfo = createCheckInfoFromCheckInfo(pQueryHandle->pTableCheckInfo, pSecQueryHandle->window.skey, &psTable);
   if (pSecQueryHandle->pTableCheckInfo == NULL) {
     terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
     goto out_of_memory;
   }

+  tsdbMayTakeMemSnapshot(pSecQueryHandle, psTable);
+
   if (!tsdbNextDataBlock((void*)pSecQueryHandle)) {
     // no result in current query, free the corresponding result rows structure
     if (type == TSDB_PREV_ROW) {
@@ -44,6 +44,7 @@ enum {
   TAOS_CFG_VTYPE_INT8,
   TAOS_CFG_VTYPE_INT16,
   TAOS_CFG_VTYPE_INT32,
+  TAOS_CFG_VTYPE_UINT16,
   TAOS_CFG_VTYPE_FLOAT,
   TAOS_CFG_VTYPE_STRING,
   TAOS_CFG_VTYPE_IPSTR,
@@ -392,8 +392,8 @@ __compar_fn_t getKeyComparFunc(int32_t keyType) {
 int32_t doCompare(const char* f1, const char* f2, int32_t type, size_t size) {
   switch (type) {
     case TSDB_DATA_TYPE_INT:      DEFAULT_COMP(GET_INT32_VAL(f1), GET_INT32_VAL(f2));
-    case TSDB_DATA_TYPE_DOUBLE:   DEFAULT_COMP(GET_DOUBLE_VAL(f1), GET_DOUBLE_VAL(f2));
-    case TSDB_DATA_TYPE_FLOAT:    DEFAULT_COMP(GET_FLOAT_VAL(f1), GET_FLOAT_VAL(f2));
+    case TSDB_DATA_TYPE_DOUBLE:   DEFAULT_DOUBLE_COMP(GET_DOUBLE_VAL(f1), GET_DOUBLE_VAL(f2));
+    case TSDB_DATA_TYPE_FLOAT:    DEFAULT_FLOAT_COMP(GET_FLOAT_VAL(f1), GET_FLOAT_VAL(f2));
     case TSDB_DATA_TYPE_BIGINT:   DEFAULT_COMP(GET_INT64_VAL(f1), GET_INT64_VAL(f2));
     case TSDB_DATA_TYPE_SMALLINT: DEFAULT_COMP(GET_INT16_VAL(f1), GET_INT16_VAL(f2));
     case TSDB_DATA_TYPE_TINYINT:
@@ -95,6 +95,23 @@ static void taosReadInt16Config(SGlobalCfg *cfg, char *input_value) {
   }
 }

+static void taosReadUInt16Config(SGlobalCfg *cfg, char *input_value) {
+  int32_t   value = atoi(input_value);
+  uint16_t *option = (uint16_t *)cfg->ptr;
+  if (value < cfg->minValue || value > cfg->maxValue) {
+    uError("config option:%s, input value:%s, out of range[%f, %f], use default value:%d",
+           cfg->option, input_value, cfg->minValue, cfg->maxValue, *option);
+  } else {
+    if (cfg->cfgStatus <= TAOS_CFG_CSTATUS_FILE) {
+      *option = (uint16_t)value;
+      cfg->cfgStatus = TAOS_CFG_CSTATUS_FILE;
+    } else {
+      uWarn("config option:%s, input value:%s, is configured by %s, use %d", cfg->option, input_value,
+            tsCfgStatusStr[cfg->cfgStatus], *option);
+    }
+  }
+}
+
 static void taosReadInt8Config(SGlobalCfg *cfg, char *input_value) {
   int32_t value = atoi(input_value);
   int8_t *option = (int8_t *)cfg->ptr;

@@ -239,6 +256,9 @@ static void taosReadConfigOption(const char *option, char *value, char *value2, ...)
       case TAOS_CFG_VTYPE_INT32:
         taosReadInt32Config(cfg, value);
         break;
+      case TAOS_CFG_VTYPE_UINT16:
+        taosReadUInt16Config(cfg, value);
+        break;
       case TAOS_CFG_VTYPE_FLOAT:
         taosReadFloatConfig(cfg, value);
         break;

@@ -422,6 +442,9 @@ void taosPrintGlobalCfg() {
       case TAOS_CFG_VTYPE_INT32:
        uInfo(" %s:%s%d%s", cfg->option, blank, *((int32_t *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
        break;
+      case TAOS_CFG_VTYPE_UINT16:
+        uInfo(" %s:%s%d%s", cfg->option, blank, *((uint16_t *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
+        break;
      case TAOS_CFG_VTYPE_FLOAT:
        uInfo(" %s:%s%f%s", cfg->option, blank, *((float *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
        break;

@@ -459,6 +482,9 @@ static void taosDumpCfg(SGlobalCfg *cfg) {
     case TAOS_CFG_VTYPE_INT32:
       printf(" %s:%s%d%s\n", cfg->option, blank, *((int32_t *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
       break;
+    case TAOS_CFG_VTYPE_UINT16:
+      printf(" %s:%s%d%s\n", cfg->option, blank, *((uint16_t *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
+      break;
     case TAOS_CFG_VTYPE_FLOAT:
       printf(" %s:%s%f%s\n", cfg->option, blank, *((float *)cfg->ptr), tsGlobalUnit[cfg->unitType]);
       break;
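The new UINT16 config type above follows the usual parse-then-range-check pattern. A standalone C sketch of that pattern with illustrative bounds and default (this is not the tconfig API):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint16_t parse_u16_option(const char *input, long min, long max, uint16_t dflt) {
  char *end = NULL;
  long  v = strtol(input, &end, 10);
  if (end == input || *end != '\0' || v < min || v > max) {
    fprintf(stderr, "value '%s' out of range [%ld, %ld], using default %u\n",
            input, min, max, (unsigned)dflt);
    return dflt;
  }
  return (uint16_t)v;
}

int main(void) {
  printf("%u\n", parse_u16_option("6030", 1, 65535, 6030));   /* 6030 */
  printf("%u\n", parse_u16_option("70000", 1, 65535, 6030));  /* falls back to the default */
  return 0;
}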
@@ -55,9 +55,15 @@ pipeline {
         sh '''
         cd ${WKC}/tests
         ./test-all.sh b1
+        date'''
+        sh '''
         cd ${WKC}/tests
         ./test-all.sh full jdbc
         date'''
+        sh '''
+        cd ${WKC}/tests
+        ./test-all.sh full unit
+        date'''
       }
     }
@@ -63,7 +63,9 @@
         <dependency>
             <groupId>com.taosdata.jdbc</groupId>
             <artifactId>taos-jdbcdriver</artifactId>
-            <version>2.0.18</version>
+            <version>2.0.20</version>
+            <!-- <scope>system</scope>-->
+            <!-- <systemPath>${project.basedir}/src/main/resources/taos-jdbcdriver-2.0.20-dist.jar</systemPath>-->
        </dependency>

        <dependency>
@@ -10,4 +10,4 @@ public class SpringbootdemoApplication {
     public static void main(String[] args) {
         SpringApplication.run(SpringbootdemoApplication.class, args);
     }
 }
@@ -6,6 +6,7 @@ import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.web.bind.annotation.*;

 import java.util.List;
+import java.util.Map;

 @RequestMapping("/weather")
 @RestController

@@ -20,7 +21,7 @@ public class WeatherController {
      * @return
      */
     @GetMapping("/init")
-    public boolean init() {
+    public int init() {
         return weatherService.init();
     }

@@ -44,19 +45,23 @@ public class WeatherController {
      * @return
      */
     @PostMapping("/{temperature}/{humidity}")
-    public int saveWeather(@PathVariable int temperature, @PathVariable float humidity) {
+    public int saveWeather(@PathVariable float temperature, @PathVariable int humidity) {
         return weatherService.save(temperature, humidity);
     }

-    /**
-     * upload multi weather info
-     *
-     * @param weatherList
-     * @return
-     */
-    @PostMapping("/batch")
-    public int batchSaveWeather(@RequestBody List<Weather> weatherList) {
-        return weatherService.save(weatherList);
+    @GetMapping("/count")
+    public int count() {
+        return weatherService.count();
+    }
+
+    @GetMapping("/subTables")
+    public List<String> getSubTables() {
+        return weatherService.getSubTables();
+    }
+
+    @GetMapping("/avg")
+    public List<Weather> avg() {
+        return weatherService.avg();
     }

 }
@@ -4,16 +4,26 @@ import com.taosdata.example.springbootdemo.domain.Weather;
 import org.apache.ibatis.annotations.Param;

 import java.util.List;
+import java.util.Map;

 public interface WeatherMapper {

-    int insert(Weather weather);
-
-    int batchInsert(List<Weather> weatherList);
-
-    List<Weather> select(@Param("limit") Long limit, @Param("offset")Long offset);
+    void dropDB();

     void createDB();

-    void createTable();
+    void createSuperTable();
+
+    void createTable(Weather weather);
+
+    List<Weather> select(@Param("limit") Long limit, @Param("offset") Long offset);
+
+    int insert(Weather weather);
+
+    int count();
+
+    List<String> getSubTables();
+
+    List<Weather> avg();

 }
@@ -4,28 +4,29 @@
 <mapper namespace="com.taosdata.example.springbootdemo.dao.WeatherMapper">

     <resultMap id="BaseResultMap" type="com.taosdata.example.springbootdemo.domain.Weather">
-        <id column="ts" jdbcType="TIMESTAMP" property="ts" />
-        <result column="temperature" jdbcType="INTEGER" property="temperature" />
-        <result column="humidity" jdbcType="FLOAT" property="humidity" />
+        <id column="ts" jdbcType="TIMESTAMP" property="ts"/>
+        <result column="temperature" jdbcType="FLOAT" property="temperature"/>
+        <result column="humidity" jdbcType="FLOAT" property="humidity"/>
     </resultMap>

-    <update id="createDB" >
-        create database if not exists test;
+    <update id="dropDB">
+        drop database if exists test
     </update>

-    <update id="createTable" >
-        create table if not exists test.weather(ts timestamp, temperature int, humidity float);
+    <update id="createDB">
+        create database if not exists test
     </update>

-    <sql id="Base_Column_List">
-        ts, temperature, humidity
-    </sql>
+    <update id="createSuperTable">
+        create table if not exists test.weather(ts timestamp, temperature float, humidity float) tags(location nchar(64), groupId int)
+    </update>
+
+    <update id="createTable" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
+        create table if not exists test.t#{groupId} using test.weather tags(#{location}, #{groupId})
+    </update>

     <select id="select" resultMap="BaseResultMap">
-        select
-        <include refid="Base_Column_List" />
-        from test.weather
-        order by ts desc
+        select * from test.weather order by ts desc
         <if test="limit != null">
             limit #{limit,jdbcType=BIGINT}
         </if>

@@ -34,16 +35,26 @@
         </if>
     </select>

-    <insert id="insert" parameterType="com.taosdata.example.springbootdemo.domain.Weather" >
-        insert into test.weather (ts, temperature, humidity) values (now, #{temperature,jdbcType=INTEGER}, #{humidity,jdbcType=FLOAT})
+    <insert id="insert" parameterType="com.taosdata.example.springbootdemo.domain.Weather">
+        insert into test.t#{groupId} (ts, temperature, humidity) values (#{ts}, ${temperature}, ${humidity})
     </insert>

-    <insert id="batchInsert" parameterType="java.util.List" >
-        insert into test.weather (ts, temperature, humidity) values
-        <foreach separator=" " collection="list" item="weather" index="index" >
-            (now + #{index}a, #{weather.temperature}, #{weather.humidity})
-        </foreach>
-    </insert>
+    <select id="getSubTables" resultType="String">
+        select tbname from test.weather
+    </select>
+
+    <select id="count" resultType="int">
+        select count(*) from test.weather
+    </select>
+
+    <resultMap id="avgResultSet" type="com.taosdata.example.springbootdemo.domain.Weather">
+        <id column="ts" jdbcType="TIMESTAMP" property="ts" />
+        <result column="avg(temperature)" jdbcType="FLOAT" property="temperature" />
+        <result column="avg(humidity)" jdbcType="FLOAT" property="humidity" />
+    </resultMap>
+
+    <select id="avg" resultMap="avgResultSet">
+        select avg(temperature), avg(humidity)from test.weather interval(1m)
+    </select>
+
 </mapper>
@@ -6,12 +6,21 @@ import java.sql.Timestamp;

 public class Weather {

-    @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS",timezone = "GMT+8")
+    @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss.SSS", timezone = "GMT+8")
     private Timestamp ts;
-    private int temperature;
+    private float temperature;
     private float humidity;
+    private String location;
+    private int groupId;
+
+    public Weather() {
+    }
+
+    public Weather(Timestamp ts, float temperature, float humidity) {
+        this.ts = ts;
+        this.temperature = temperature;
+        this.humidity = humidity;
+    }

     public Timestamp getTs() {
         return ts;

@@ -21,11 +30,11 @@ public class Weather {
         this.ts = ts;
     }

-    public int getTemperature() {
+    public float getTemperature() {
         return temperature;
     }

-    public void setTemperature(int temperature) {
+    public void setTemperature(float temperature) {
         this.temperature = temperature;
     }

@@ -36,4 +45,20 @@ public class Weather {
     public void setHumidity(float humidity) {
         this.humidity = humidity;
     }
+
+    public String getLocation() {
+        return location;
+    }
+
+    public void setLocation(String location) {
+        this.location = location;
+    }
+
+    public int getGroupId() {
+        return groupId;
+    }
+
+    public void setGroupId(int groupId) {
+        this.groupId = groupId;
+    }
 }
@@ -5,25 +5,41 @@ import com.taosdata.example.springbootdemo.domain.Weather;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.stereotype.Service;

+import java.sql.Timestamp;
 import java.util.List;
+import java.util.Map;
+import java.util.Random;

 @Service
 public class WeatherService {

     @Autowired
     private WeatherMapper weatherMapper;
+    private Random random = new Random(System.currentTimeMillis());
+    private String[] locations = {"北京", "上海", "广州", "深圳", "天津"};

-    public boolean init() {
+    public int init() {
+        weatherMapper.dropDB();
         weatherMapper.createDB();
-        weatherMapper.createTable();
+        weatherMapper.createSuperTable();
-        return true;
+        long ts = System.currentTimeMillis();
+        long thirtySec = 1000 * 30;
+        int count = 0;
+        for (int i = 0; i < 20; i++) {
+            Weather weather = new Weather(new Timestamp(ts + (thirtySec * i)), 30 * random.nextFloat(), random.nextInt(100));
+            weather.setLocation(locations[random.nextInt(locations.length)]);
+            weather.setGroupId(i % locations.length);
+            weatherMapper.createTable(weather);
+            count += weatherMapper.insert(weather);
+        }
+        return count;
     }

     public List<Weather> query(Long limit, Long offset) {
         return weatherMapper.select(limit, offset);
     }

-    public int save(int temperature, float humidity) {
+    public int save(float temperature, int humidity) {
         Weather weather = new Weather();
         weather.setTemperature(temperature);
         weather.setHumidity(humidity);
@@ -31,8 +47,15 @@ public class WeatherService {
         return weatherMapper.insert(weather);
     }

-    public int save(List<Weather> weatherList) {
-        return weatherMapper.batchInsert(weatherList);
+    public int count() {
+        return weatherMapper.count();
     }

+    public List<String> getSubTables() {
+        return weatherMapper.getSubTables();
+    }

+    public List<Weather> avg() {
+        return weatherMapper.avg();
+    }
 }
@@ -1,12 +1,14 @@
 # datasource config - JDBC-JNI
-spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
-spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
-spring.datasource.username=root
-spring.datasource.password=taosdata
+#spring.datasource.driver-class-name=com.taosdata.jdbc.TSDBDriver
+#spring.datasource.url=jdbc:TAOS://127.0.0.1:6030/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
+#spring.datasource.username=root
+#spring.datasource.password=taosdata

 # datasource config - JDBC-RESTful
-#spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
-#spring.datasource.url=jdbc:TAOS-RS://master:6041/test?user=root&password=taosdata
+spring.datasource.driver-class-name=com.taosdata.jdbc.rs.RestfulDriver
+spring.datasource.url=jdbc:TAOS-RS://master:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8
+spring.datasource.username=root
+spring.datasource.password=taosdata

 spring.datasource.druid.initial-size=5
 spring.datasource.druid.min-idle=5
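The property switch above disables the JDBC-JNI data source and enables the JDBC-RESTful one. A minimal sketch of the same settings used directly through plain JDBC follows; the host name master, port 6041, the credentials and the sample query are taken from (or assumed consistent with) the properties and mapper above, and may need adjusting for another environment:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class RestfulConnectionDemo {
        public static void main(String[] args) throws Exception {
            // same driver, URL and credentials as the now-active JDBC-RESTful block
            Class.forName("com.taosdata.jdbc.rs.RestfulDriver");
            String url = "jdbc:TAOS-RS://master:6041/test?timezone=UTC-8&charset=UTF-8&locale=en_US.UTF-8";
            try (Connection conn = DriverManager.getConnection(url, "root", "taosdata");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("select count(*) from test.weather")) { // assumed sample table
                while (rs.next()) {
                    System.out.println("row count: " + rs.getLong(1));
                }
            }
        }
    }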
@@ -4,7 +4,7 @@ import com.taosdata.taosdemo.components.DataSourceFactory;
 import com.taosdata.taosdemo.components.JdbcTaosdemoConfig;
 import com.taosdata.taosdemo.domain.SuperTableMeta;
 import com.taosdata.taosdemo.service.DatabaseService;
-import com.taosdata.taosdemo.service.QueryService;
+import com.taosdata.taosdemo.service.SqlExecuteTask;
 import com.taosdata.taosdemo.service.SubTableService;
 import com.taosdata.taosdemo.service.SuperTableService;
 import com.taosdata.taosdemo.service.data.SuperTableMetaGenerator;
@@ -32,6 +32,17 @@ public class TaosDemoApplication {
     }
     // 初始化
     final DataSource dataSource = DataSourceFactory.getInstance(config.host, config.port, config.user, config.password);
+    if (config.executeSql != null && !config.executeSql.isEmpty() && !config.executeSql.replaceAll("\\s", "").isEmpty()) {
+        Thread task = new Thread(new SqlExecuteTask(dataSource, config.executeSql));
+        task.start();
+        try {
+            task.join();
+        } catch (InterruptedException e) {
+            e.printStackTrace();
+        }
+        return;
+    }

     final DatabaseService databaseService = new DatabaseService(dataSource);
     final SuperTableService superTableService = new SuperTableService(dataSource);
     final SubTableService subTableService = new SubTableService(dataSource);
@@ -96,7 +107,6 @@ public class TaosDemoApplication {
     // 查询

-
     /**********************************************************************************/
     // 删除表
     if (config.dropTable) {
@@ -42,7 +42,7 @@ public final class JdbcTaosdemoConfig {
     public int rate = 10;
     public long range = 1000l;
     // select task
+    public String executeSql;
     // drop task
     public boolean dropTable = false;

@@ -89,7 +89,7 @@ public final class JdbcTaosdemoConfig {
     System.out.println("-rate The proportion of data out of order. effective only if order is 1. min 0, max 100, default is 10");
     System.out.println("-range The range of data out of order. effective only if order is 1. default is 1000 ms");
     // query task
-    // System.out.println("-sqlFile The select sql file");
+    System.out.println("-executeSql execute a specific sql.");
     // drop task
     System.out.println("-dropTable Drop data before quit. Default is false");
     System.out.println("--help Give this help list");
@@ -207,6 +207,9 @@ public final class JdbcTaosdemoConfig {
         range = Integer.parseInt(args[++i]);
     }
     // select task
+    if ("-executeSql".equals(args[i]) && i < args.length - 1) {
+        executeSql = args[++i];
+    }

     // drop task
     if ("-dropTable".equals(args[i]) && i < args.length - 1) {
@@ -0,0 +1,36 @@
+package com.taosdata.taosdemo.service;
+
+import com.taosdata.taosdemo.utils.Printer;
+
+import javax.sql.DataSource;
+import java.sql.Connection;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+
+public class SqlExecuteTask implements Runnable {
+    private final DataSource dataSource;
+    private final String sql;
+
+    public SqlExecuteTask(DataSource dataSource, String sql) {
+        this.dataSource = dataSource;
+        this.sql = sql;
+    }
+
+    @Override
+    public void run() {
+        try (Connection conn = dataSource.getConnection(); Statement stmt = conn.createStatement()) {
+            long start = System.currentTimeMillis();
+            boolean execute = stmt.execute(sql);
+            long end = System.currentTimeMillis();
+            if (execute) {
+                ResultSet rs = stmt.getResultSet();
+                Printer.printResult(rs);
+            } else {
+                Printer.printSql(sql, true, (end - start));
+            }
+        } catch (SQLException e) {
+            e.printStackTrace();
+        }
+    }
+}
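SqlExecuteTask above is what the new -executeSql option runs. A minimal usage sketch, mirroring the branch added to TaosDemoApplication earlier in this commit, is shown below; the host, port and SQL text are placeholder values, and DataSourceFactory.getInstance is assumed to behave as it does where this commit already calls it:

    import com.taosdata.taosdemo.components.DataSourceFactory;
    import com.taosdata.taosdemo.service.SqlExecuteTask;

    import javax.sql.DataSource;

    public class ExecuteSqlDemo {
        public static void main(String[] args) throws Exception {
            // placeholder connection parameters; adjust to the target cluster
            DataSource dataSource = DataSourceFactory.getInstance("127.0.0.1", 6030, "root", "taosdata");
            Thread task = new Thread(new SqlExecuteTask(dataSource, "show databases"));
            task.start();
            task.join(); // SqlExecuteTask prints either the result set or the timing line via Printer
        }
    }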
@@ -0,0 +1,27 @@
+package com.taosdata.taosdemo.utils;
+
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.SQLException;
+
+public class Printer {
+
+    public static void printResult(ResultSet resultSet) throws SQLException {
+        ResultSetMetaData metaData = resultSet.getMetaData();
+        while (resultSet.next()) {
+            for (int i = 1; i <= metaData.getColumnCount(); i++) {
+                String columnLabel = metaData.getColumnLabel(i);
+                String value = resultSet.getString(i);
+                System.out.printf("%s: %s\t", columnLabel, value);
+            }
+            System.out.println();
+        }
+    }
+
+    public static void printSql(String sql, boolean succeed, long cost) {
+        System.out.println("[ " + (succeed ? "OK" : "ERROR!") + " ] time cost: " + cost + " ms, execute statement ====> " + sql);
+    }
+
+    private Printer() {
+    }
+}
@@ -92,15 +92,14 @@ void Test(TAOS *taos, char *qstr, int index) {
     //   printf("insert row: %i, reason:%s\n", i, taos_errstr(taos));
     // }
     TAOS_RES *result1 = taos_query(taos, qstr);
-    if (result1) {
+    if (result1 == NULL || taos_errno(result1) != 0) {
-      printf("insert row: %i\n", i);
-    } else {
-      printf("failed to insert row: %i, reason:%s\n", i, "null result"/*taos_errstr(result)*/);
+      printf("failed to insert row, reason:%s\n", taos_errstr(result1));
       taos_free_result(result1);
       exit(1);
+    } else {
+      printf("insert row: %i\n", i);
     }
     taos_free_result(result1);

   }
   printf("success to insert rows, total %d rows\n", i);
@@ -11,15 +11,9 @@

 # -*- coding: utf-8 -*-

-import os
-import sys
-sys.path.insert(0, os.getcwd())
 from fabric import Connection
-from util.sql import *
-from util.log import *
-import taos
 import random
-import threading
+import time
 import logging

 class Node:
@@ -76,6 +70,19 @@ class Node:
         print("remove taosd error for node %d " % self.index)
         logging.exception(e)

+    def forceStopOneTaosd(self):
+        try:
+            self.conn.run("kill -9 $(ps -ax|grep taosd|awk '{print $1}')")
+        except Exception as e:
+            print("kill taosd error on node%d " % self.index)
+
+    def startOneTaosd(self):
+        try:
+            self.conn.run("nohup taosd -c /etc/taos/ > /dev/null 2>&1 &")
+        except Exception as e:
+            print("start taosd error on node%d " % self.index)
+            logging.exception(e)
+
     def installTaosd(self, packagePath):
         self.conn.put(packagePath, self.homeDir)
         self.conn.cd(self.homeDir)
@@ -122,100 +129,51 @@ class Node:

 class Nodes:
     def __init__(self):
-        self.node1 = Node(1, 'root', '52.151.60.239', 'node1', 'r', '/root/')
-        self.node2 = Node(2, 'root', '52.183.32.246', 'node1', 'r', '/root/')
-        self.node3 = Node(3, 'root', '51.143.46.79', 'node1', 'r', '/root/')
-        self.node4 = Node(4, 'root', '52.183.2.76', 'node1', 'r', '/root/')
-        self.node5 = Node(5, 'root', '13.66.225.87', 'node1', 'r', '/root/')
+        self.tdnodes = []
+        self.tdnodes.append(Node(0, 'root', '52.143.103.7', 'node1', 'a', '/root/'))
+        self.tdnodes.append(Node(1, 'root', '52.250.48.222', 'node2', 'a', '/root/'))
+        self.tdnodes.append(Node(2, 'root', '51.141.167.23', 'node3', 'a', '/root/'))
+        self.tdnodes.append(Node(3, 'root', '52.247.207.173', 'node4', 'a', '/root/'))
+        self.tdnodes.append(Node(4, 'root', '51.141.166.100', 'node5', 'a', '/root/'))
+
+    def stopOneNode(self, index):
+        self.tdnodes[index].forceStopOneTaosd()
+
+    def startOneNode(self, index):
+        self.tdnodes[index].startOneTaosd()

     def stopAllTaosd(self):
-        self.node1.stopTaosd()
-        self.node2.stopTaosd()
-        self.node3.stopTaosd()
+        for i in range(len(self.tdnodes)):
+            self.tdnodes[i].stopTaosd()

     def startAllTaosd(self):
-        self.node1.startTaosd()
-        self.node2.startTaosd()
-        self.node3.startTaosd()
+        for i in range(len(self.tdnodes)):
+            self.tdnodes[i].startTaosd()

     def restartAllTaosd(self):
-        self.node1.restartTaosd()
-        self.node2.restartTaosd()
-        self.node3.restartTaosd()
+        for i in range(len(self.tdnodes)):
+            self.tdnodes[i].restartTaosd()

     def addConfigs(self, configKey, configValue):
-        self.node1.configTaosd(configKey, configValue)
-        self.node2.configTaosd(configKey, configValue)
-        self.node3.configTaosd(configKey, configValue)
+        for i in range(len(self.tdnodes)):
+            self.tdnodes[i].configTaosd(configKey, configValue)

     def removeConfigs(self, configKey, configValue):
-        self.node1.removeTaosConfig(configKey, configValue)
-        self.node2.removeTaosConfig(configKey, configValue)
-        self.node3.removeTaosConfig(configKey, configValue)
+        for i in range(len(self.tdnodes)):
+            self.tdnodes[i].removeTaosConfig(configKey, configValue)

     def removeAllDataFiles(self):
-        self.node1.removeData()
-        self.node2.removeData()
-        self.node3.removeData()
+        for i in range(len(self.tdnodes)):
+            self.tdnodes[i].removeData()

-class ClusterTest:
-    def __init__(self, hostName):
-        self.host = hostName
-        self.user = "root"
-        self.password = "taosdata"
-        self.config = "/etc/taos"
-        self.dbName = "mytest"
-        self.stbName = "meters"
-        self.numberOfThreads = 20
-        self.numberOfTables = 10000
-        self.numberOfRecords = 1000
-        self.tbPrefix = "t"
-        self.ts = 1538548685000
-        self.repeat = 1
-
-    def connectDB(self):
-        self.conn = taos.connect(
-            host=self.host,
-            user=self.user,
-            password=self.password,
-            config=self.config)
-
-    def createSTable(self, replica):
-        cursor = self.conn.cursor()
-        tdLog.info("drop database if exists %s" % self.dbName)
-        cursor.execute("drop database if exists %s" % self.dbName)
-        tdLog.info("create database %s replica %d" % (self.dbName, replica))
-        cursor.execute("create database %s replica %d" % (self.dbName, replica))
-        tdLog.info("use %s" % self.dbName)
-        cursor.execute("use %s" % self.dbName)
-        tdLog.info("drop table if exists %s" % self.stbName)
-        cursor.execute("drop table if exists %s" % self.stbName)
-        tdLog.info("create table %s(ts timestamp, current float, voltage int, phase int) tags(id int)" % self.stbName)
-        cursor.execute("create table %s(ts timestamp, current float, voltage int, phase int) tags(id int)" % self.stbName)
-        cursor.close()
-
-    def insertData(self, threadID):
-        print("Thread %d: starting" % threadID)
-        cursor = self.conn.cursor()
-        tablesPerThread = int(self.numberOfTables / self.numberOfThreads)
-        baseTableID = tablesPerThread * threadID
-        for i in range (tablesPerThread):
-            cursor.execute("create table %s%d using %s tags(%d)" % (self.tbPrefix, baseTableID + i, self.stbName, baseTableID + i))
-            query = "insert into %s%d values" % (self.tbPrefix, baseTableID + i)
-            base = self.numberOfRecords * i
-            for j in range(self.numberOfRecords):
-                query += "(%d, %f, %d, %d)" % (self.ts + base + j, random.random(), random.randint(210, 230), random.randint(0, 10))
-            cursor.execute(query)
-        cursor.close()
-        print("Thread %d: finishing" % threadID)
-
-    def run(self):
-        threads = []
-        tdLog.info("Inserting data")
-        for i in range(self.numberOfThreads):
-            thread = threading.Thread(target=self.insertData, args=(i,))
-            threads.append(thread)
-            thread.start()
-
-        for i in range(self.numberOfThreads):
-            threads[i].join()
+# kill taosd randomly every 10 mins
+nodes = Nodes()
+loop = 0
+while True:
+    loop = loop + 1
+    index = random.randint(0, 4)
+    print("loop: %d, kill taosd on node%d" %(loop, index))
+    nodes.stopOneNode(index)
+    time.sleep(60)
+    nodes.startOneNode(index)
+    time.sleep(600)
@@ -350,18 +350,27 @@ class ConcurrentInquiry:
         cl.execute("create database if not exists %s;" %self.dbname)
         cl.execute("use %s" % self.dbname)
         for k in range(stableNum):
-            sql="create table %s (ts timestamp, c1 int, c2 float, c3 bigint, c4 smallint, c5 tinyint, c6 double, c7 bool,c8 binary(20),c9 nchar(20)) \
-                tags(t1 int, t2 float, t3 bigint, t4 smallint, t5 tinyint, t6 double, t7 bool,t8 binary(20),t9 nchar(20))" % (self.stb_prefix+str(k))
+            sql="create table %s (ts timestamp, c1 int, c2 float, c3 bigint, c4 smallint, c5 tinyint, c6 double, c7 bool,c8 binary(20),c9 nchar(20),c11 int unsigned,c12 smallint unsigned,c13 tinyint unsigned,c14 bigint unsigned) \
+                tags(t1 int, t2 float, t3 bigint, t4 smallint, t5 tinyint, t6 double, t7 bool,t8 binary(20),t9 nchar(20), t11 int unsigned , t12 smallint unsigned , t13 tinyint unsigned , t14 bigint unsigned)" % (self.stb_prefix+str(k))
             cl.execute(sql)
             for j in range(subtableNum):
-                sql = "create table %s using %s tags(%d,%d,%d,%d,%d,%d,%d,'%s','%s')" % \
-                (self.subtb_prefix+str(k)+'_'+str(j),self.stb_prefix+str(k),j,j/2.0,j%41,j%51,j%53,j*1.0,j%2,'taos'+str(j),'涛思'+str(j))
+                if j % 100 == 0:
+                    sql = "create table %s using %s tags(NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)" % \
+                    (self.subtb_prefix+str(k)+'_'+str(j),self.stb_prefix+str(k))
+                else:
+                    sql = "create table %s using %s tags(%d,%d,%d,%d,%d,%d,%d,'%s','%s',%d,%d,%d,%d)" % \
+                    (self.subtb_prefix+str(k)+'_'+str(j),self.stb_prefix+str(k),j,j/2.0,j%41,j%51,j%53,j*1.0,j%2,'taos'+str(j),'涛思'+str(j), j%43, j%23 , j%17 , j%3167)
                 print(sql)
                 cl.execute(sql)
                 for i in range(insertRows):
-                    ret = cl.execute(
-                        "insert into %s values (%d , %d,%d,%d,%d,%d,%d,%d,'%s','%s')" %
-                        (self.subtb_prefix+str(k)+'_'+str(j),t0+i,i%100,i/2.0,i%41,i%51,i%53,i*1.0,i%2,'taos'+str(i),'涛思'+str(i)))
+                    if i % 100 == 0 :
+                        ret = cl.execute(
+                            "insert into %s values (%d , NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL)" %
+                            (self.subtb_prefix+str(k)+'_'+str(j), t0+i))
+                    else:
+                        ret = cl.execute(
+                            "insert into %s values (%d , %d,%d,%d,%d,%d,%d,%d,'%s','%s',%d,%d,%d,%d)" %
+                            (self.subtb_prefix+str(k)+'_'+str(j), t0+i, i%100, i/2.0, i%41, i%51, i%53, i*1.0, i%2,'taos'+str(i),'涛思'+str(i), i%43, i%23 , i%17 , i%3167))
         cl.close()
         conn.close()
@@ -52,8 +52,8 @@ class TDTestCase:
         p.start()
         p.join()
         p.terminate()

-        tdSql.execute("insert into tb values(%d, 1, 2)" % (self.ts + 1))
+        tdSql.execute("insert into tb(ts, col1, col2) values(%d, 1, 2)" % (self.ts + 2))

         print("==============step2")
         tdSql.query("select * from tb")
@@ -25,18 +25,23 @@ class TDTestCase:

         self.tables = 10
         self.rows = 20
+        self.columns = 50
         self.perfix = 't'
         self.ts = 1601481600000

     def insertData(self):
         print("==============step1")
-        tdSql.execute("create table st (ts timestamp, c1 int) tags(t1 int)")
+        sql = "create table st(ts timestamp, "
+        for i in range(self.columns - 1):
+            sql += "c%d int, " % (i + 1)
+        sql += "c50 int) tags(t1 int)"
+        tdSql.execute(sql)

         for i in range(self.tables):
             tdSql.execute("create table %s%d using st tags(%d)" % (self.perfix, i, i))
             for j in range(self.rows):
                 tc = self.ts + j * 60000
-                tdSql.execute("insert into %s%d values(%d, %d)" %(self.perfix, i, tc, j))
+                tdSql.execute("insert into %s%d(ts, c1) values(%d, %d)" %(self.perfix, i, tc, j))

     def executeQueries(self):
         print("==============step2")
@@ -66,29 +71,29 @@ class TDTestCase:
         tdSql.checkData(0, 0, 19)

         tc = self.ts + 1 * 3600000
-        tdSql.execute("insert into %s%d values(%d, %d)" %(self.perfix, 1, tc, 10))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, %d)" %(self.perfix, 1, tc, 10))

         tc = self.ts + 3 * 3600000
-        tdSql.execute("insert into %s%d values(%d, null)" %(self.perfix, 1, tc))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, null)" %(self.perfix, 1, tc))

         tc = self.ts + 5 * 3600000
-        tdSql.execute("insert into %s%d values(%d, %d)" %(self.perfix, 1, tc, -1))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, %d)" %(self.perfix, 1, tc, -1))

         tc = self.ts + 7 * 3600000
-        tdSql.execute("insert into %s%d values(%d, null)" %(self.perfix, 1, tc))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, null)" %(self.perfix, 1, tc))

     def insertData2(self):
         tc = self.ts + 1 * 3600000
-        tdSql.execute("insert into %s%d values(%d, %d)" %(self.perfix, 1, tc, 10))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, %d)" %(self.perfix, 1, tc, 10))

         tc = self.ts + 3 * 3600000
-        tdSql.execute("insert into %s%d values(%d, null)" %(self.perfix, 1, tc))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, null)" %(self.perfix, 1, tc))

         tc = self.ts + 5 * 3600000
-        tdSql.execute("insert into %s%d values(%d, %d)" %(self.perfix, 1, tc, -1))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, %d)" %(self.perfix, 1, tc, -1))

         tc = self.ts + 7 * 3600000
-        tdSql.execute("insert into %s%d values(%d, null)" %(self.perfix, 1, tc))
+        tdSql.execute("insert into %s%d(ts, c1) values(%d, null)" %(self.perfix, 1, tc))

     def executeQueries2(self):
         # For stable
@@ -164,6 +169,9 @@ class TDTestCase:
         self.executeQueries()
         self.insertData2()
         self.executeQueries2()
+        tdDnodes.stop(1)
+        tdDnodes.start(1)
+        self.executeQueries2()

         tdSql.execute("alter database test2 cachelast 0")
         self.executeQueries2()
@@ -24,7 +24,7 @@ class TDTestCase:
         tdLog.debug("start to execute %s" % __file__)
         tdSql.init(conn.cursor(), logSql)

-        self.numberOfTables = 10000
+        self.numberOfTables = 1000
         self.numberOfRecords = 100

     def getBuildPath(self):
@@ -209,7 +209,7 @@ sql alter database db wal 1
 sql alter database db wal 2
 sql alter database db wal 1
 sql alter database db wal 2
-sql_error alter database db wal 0
+sql alter database db wal 0
 sql_error alter database db wal 3
 sql_error alter database db wal 4
 sql_error alter database db wal -1
@@ -39,14 +39,14 @@ print =============== step3 - query data

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'select * from d1.table_rest' 127.0.0.1:7111/rest/sql
 print curl 127.0.0.1:7111/rest/sql -----> $system_content
-if $system_content != @{"status":"succ","head":["ts","i"],"data":[["2017-12-25 21:28:41.022",1],["2017-12-25 21:28:42.022",2],["2017-12-25 21:28:43.022",3],["2017-12-25 21:28:44.022",4],["2017-12-25 21:28:45.022",5],["2017-12-25 21:28:46.022",6],["2017-12-25 21:28:47.022",7],["2017-12-25 21:28:48.022",8],["2017-12-25 21:28:49.022",9],["2017-12-25 21:28:50.022",10]],"rows":10}@ then
+if $system_content != @{"status":"succ","head":["ts","i"],"column_meta":[["ts",9,8],["i",4,4]],"data":[["2017-12-25 21:28:41.022",1],["2017-12-25 21:28:42.022",2],["2017-12-25 21:28:43.022",3],["2017-12-25 21:28:44.022",4],["2017-12-25 21:28:45.022",5],["2017-12-25 21:28:46.022",6],["2017-12-25 21:28:47.022",7],["2017-12-25 21:28:48.022",8],["2017-12-25 21:28:49.022",9],["2017-12-25 21:28:50.022",10]],"rows":10}@ then
   return -1
 endi

 print =============== step4 - insert data
 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d "insert into d1.table_rest values('2017-12-25 21:28:51.022', 11)" 127.0.0.1:7111/rest/sql
 print curl 127.0.0.1:7111/rest/sql -----> $system_content
-if $system_content != @{"status":"succ","head":["affected_rows"],"data":[[1]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[1]],"rows":1}@ then
   return -1
 endi

@@ -54,7 +54,7 @@ print =============== step5 - query data

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'select * from d1.table_rest' 127.0.0.1:7111/rest/sql
 print curl 127.0.0.1:7111/rest/sql -----> $system_content
-if $system_content != @{"status":"succ","head":["ts","i"],"data":[["2017-12-25 21:28:41.022",1],["2017-12-25 21:28:42.022",2],["2017-12-25 21:28:43.022",3],["2017-12-25 21:28:44.022",4],["2017-12-25 21:28:45.022",5],["2017-12-25 21:28:46.022",6],["2017-12-25 21:28:47.022",7],["2017-12-25 21:28:48.022",8],["2017-12-25 21:28:49.022",9],["2017-12-25 21:28:50.022",10],["2017-12-25 21:28:51.022",11]],"rows":11}@ then
+if $system_content != @{"status":"succ","head":["ts","i"],"column_meta":[["ts",9,8],["i",4,4]],"data":[["2017-12-25 21:28:41.022",1],["2017-12-25 21:28:42.022",2],["2017-12-25 21:28:43.022",3],["2017-12-25 21:28:44.022",4],["2017-12-25 21:28:45.022",5],["2017-12-25 21:28:46.022",6],["2017-12-25 21:28:47.022",7],["2017-12-25 21:28:48.022",8],["2017-12-25 21:28:49.022",9],["2017-12-25 21:28:50.022",10],["2017-12-25 21:28:51.022",11]],"rows":11}@ then
   return -1
 endi

@@ -79,4 +79,4 @@ if $system_content != @{"status":"error","code":3,"desc":"Authentication failure
   return -1
 endi

 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -88,13 +88,13 @@ print =============== step2 - no db
 #11
 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'show databases' 127.0.0.1:7111/rest/sql
 print 11-> $system_content
-if $system_content != @{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"data":[],"rows":0}@ then
+if $system_content != @{"status":"succ","head":["name","created_time","ntables","vgroups","replica","quorum","days","keep0,keep1,keep(D)","cache(MB)","blocks","minrows","maxrows","wallevel","fsync","comp","cachelast","precision","update","status"],"column_meta":[["name",8,32],["created_time",9,8],["ntables",4,4],["vgroups",4,4],["replica",3,2],["quorum",3,2],["days",3,2],["keep0,keep1,keep(D)",8,24],["cache(MB)",4,4],["blocks",4,4],["minrows",4,4],["maxrows",4,4],["wallevel",2,1],["fsync",4,4],["comp",2,1],["cachelast",2,1],["precision",8,3],["update",2,1],["status",8,10]],"data":[],"rows":0}@ then
   return -1
 endi

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create database d1' 127.0.0.1:7111/rest/sql
 print 12-> $system_content
-if $system_content != @{"status":"succ","head":["affected_rows"],"data":[[0]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[0]],"rows":1}@ then
   return -1
 endi

@@ -160,26 +160,26 @@ endi

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d ' create table d1.t1 (ts timestamp, speed int)' 127.0.0.1:7111/rest/sql
 print 22-> $system_content
-if $system_content != @{"status":"succ","head":["affected_rows"],"data":[[0]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[0]],"rows":1}@ then
   return -1
 endi

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d ' select * from d1.t1 ' 127.0.0.1:7111/rest/sql
 print 23-> $system_content
-if $system_content != @{"status":"succ","head":["ts","speed"],"data":[],"rows":0}@ then
+if $system_content != @{"status":"succ","head":["ts","speed"],"column_meta":[["ts",9,8],["speed",4,4]],"data":[],"rows":0}@ then
   return -1
 endi

 #24
 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d "insert into d1.t1 values('2017-12-25 21:28:41.022', 1)" 127.0.0.1:7111/rest/sql
 print 24-> $system_content
-if $system_content != @{"status":"succ","head":["affected_rows"],"data":[[1]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[1]],"rows":1}@ then
   return -1
 endi

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d ' select * from d1.t1 ' 127.0.0.1:7111/rest/sql
 print 25-> $system_content
-if $system_content != @{"status":"succ","head":["ts","speed"],"data":[["2017-12-25 21:28:41.022",1]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["ts","speed"],"column_meta":[["ts",9,8],["speed",4,4]],"data":[["2017-12-25 21:28:41.022",1]],"rows":1}@ then
   return -1
 endi

@@ -208,32 +208,32 @@ system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl
 #27
 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d ' select * from d1.t1 ' 127.0.0.1:7111/rest/sql
 print 27-> $system_content
-if $system_content != @{"status":"succ","head":["ts","speed"],"data":[["2017-12-25 21:28:41.022",1],["2017-12-25 21:28:42.022",2],["2017-12-25 21:28:43.022",3],["2017-12-25 21:28:44.022",4],["2017-12-25 21:28:45.022",5],["2017-12-25 21:28:46.022",6],["2017-12-25 21:28:47.022",7],["2017-12-25 21:28:48.022",8],["2017-12-25 21:28:49.022",9],["2017-12-25 21:28:50.022",10],["2017-12-25 21:28:51.022",11]],"rows":11}@ then
+if $system_content != @{"status":"succ","head":["ts","speed"],"column_meta":[["ts",9,8],["speed",4,4]],"data":[["2017-12-25 21:28:41.022",1],["2017-12-25 21:28:42.022",2],["2017-12-25 21:28:43.022",3],["2017-12-25 21:28:44.022",4],["2017-12-25 21:28:45.022",5],["2017-12-25 21:28:46.022",6],["2017-12-25 21:28:47.022",7],["2017-12-25 21:28:48.022",8],["2017-12-25 21:28:49.022",9],["2017-12-25 21:28:50.022",10],["2017-12-25 21:28:51.022",11]],"rows":11}@ then
   return -1
 endi

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d 'create database d2' 127.0.0.1:7111/rest/sql
 print 28-> $system_content
-if $system_content != @{"status":"succ","head":["affected_rows"],"data":[[0]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[0]],"rows":1}@ then
   return -1
 endi

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d ' create table d2.t1 (ts timestamp, speed int)' 127.0.0.1:7111/rest/sql
 print 29-> $system_content
-if $system_content != @{"status":"succ","head":["affected_rows"],"data":[[0]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[0]],"rows":1}@ then
   return -1
 endi

 #30
 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d "insert into d2.t1 values('2017-12-25 21:28:41.022', 1)" 127.0.0.1:7111/rest/sql
 print 30-> $system_content
-if $system_content != @{"status":"succ","head":["affected_rows"],"data":[[1]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["affected_rows"],"column_meta":[["affected_rows",4,4]],"data":[[1]],"rows":1}@ then
   return -1
 endi

 system_content curl -H 'Authorization: Taosd /KfeAzX/f9na8qdtNZmtONryp201ma04bEl8LcvLUd7a8qdtNZmtONryp201ma04' -d ' select * from d2.t1 ' 127.0.0.1:7111/rest/sql
 print 31-> $system_content
-if $system_content != @{"status":"succ","head":["ts","speed"],"data":[["2017-12-25 21:28:41.022",1]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["ts","speed"],"column_meta":[["ts",9,8],["speed",4,4]],"data":[["2017-12-25 21:28:41.022",1]],"rows":1}@ then
   return -1
 endi

@@ -285,8 +285,8 @@ system_content curl -u root:taosdata -d 'select count(*) from db.win_cpu' 127.0

 print $system_content

-if $system_content != @{"status":"succ","head":["count(*)"],"data":[[3]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["count(*)"],"column_meta":[["count(*)",5,8]],"data":[[3]],"rows":1}@ then
   return -1
 endi

 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -169,8 +169,8 @@ sql_error create database $db cache 0
 sql_error create database $db ctime 29
 sql_error create database $db ctime 40961

-# wal {1, 2}
+# wal {0, 2}
-sql_error create database $db wal 0
+#sql_error create database $db wal 0
 sql_error create database $db wal -1
 sql_error create database $db wal 3
@@ -15,18 +15,18 @@ print ========== step2
 # return -1
 #step21:
 sql drop table log.dn -x step22
-return -1
+# return -1
 step22:
 sql drop user log -x step23
-return -1
+# return -1
 step23:

 print ========== step3

 sleep 2000
-sql select * from log.dn
-if $rows == 0 then
-  return -1
-endi
+#sql select * from log.dn
+#if $rows == 0 then
+# return -1
+#endi

 system sh/exec.sh -n dnode1 -s stop -x SIGINT
@@ -169,7 +169,7 @@ endi

 system_content curl -u root:taosdata -d 'select * from db.sys_cpu_d_bbb_lga_1_web01' 127.0.0.1:7111/rest/sql/
 print $system_content
-if $system_content != @{"status":"succ","head":["ts","value"],"data":[["2012-09-05 20:00:00.000",18.000000000]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["ts","value"],"column_meta":[["ts",9,8],["value",7,8]],"data":[["2012-09-05 20:00:00.000",18.000000000]],"rows":1}@ then
   return -1
 endi

@@ -186,7 +186,7 @@ system_content curl -u root:taosdata -d 'select * from db.sys_cpu_d_bbb_lga_1_w

 print $system_content

-if $system_content != @{"status":"succ","head":["ts","value"],"data":[["2012-09-05 20:00:00.000",18.000000000],["2012-09-05 20:00:05.000",18.000000000]],"rows":2}@ then
+if $system_content != @{"status":"succ","head":["ts","value"],"column_meta":[["ts",9,8],["value",7,8]],"data":[["2012-09-05 20:00:00.000",18.000000000],["2012-09-05 20:00:05.000",18.000000000]],"rows":2}@ then
   return -1
 endi

@@ -194,7 +194,7 @@ system_content curl -u root:taosdata -d 'select count(*) from db.sys_cpu_d_bbb'

 print $system_content

-if $system_content != @{"status":"succ","head":["count(*)"],"data":[[3]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["count(*)"],"column_meta":[["count(*)",5,8]],"data":[[3]],"rows":1}@ then
   return -1
 endi

@@ -211,7 +211,7 @@ system_content curl -u root:taosdata -d 'select * from db.sys_mem_d_bbb_lga_1_w

 print $system_content

-if $system_content != @{"status":"succ","head":["ts","value"],"data":[["2012-09-05 20:00:00.000",8.000000000],["2012-09-05 20:00:05.000",9.000000000]],"rows":2}@ then
+if $system_content != @{"status":"succ","head":["ts","value"],"column_meta":[["ts",9,8],["value",7,8]],"data":[["2012-09-05 20:00:00.000",8.000000000],["2012-09-05 20:00:05.000",9.000000000]],"rows":2}@ then
   return -1
 endi

@@ -219,7 +219,7 @@ system_content curl -u root:taosdata -d 'select count(*) from db.sys_mem_d_bbb'

 print $system_content

-if $system_content != @{"status":"succ","head":["count(*)"],"data":[[2]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["count(*)"],"column_meta":[["count(*)",5,8]],"data":[[2]],"rows":1}@ then
   return -1
 endi

@@ -233,7 +233,7 @@ system_content curl -u root:taosdata -d '[{"metric": "sys_cpu","timestamp": 134

 system_content curl -u root:taosdata -d 'select count(*) from db.sys_cpu_d_bbb' 127.0.0.1:7111/rest/sql/
 print $system_content
-if $system_content != @{"status":"succ","head":["count(*)"],"data":[[7]],"rows":1}@ then
+if $system_content != @{"status":"succ","head":["count(*)"],"column_meta":[["count(*)",5,8]],"data":[[7]],"rows":1}@ then
   return -1
 endi

@@ -244,4 +244,4 @@ system sh/exec.sh -n dnode4 -s stop -x SIGINT
 system sh/exec.sh -n dnode5 -s stop -x SIGINT
 system sh/exec.sh -n dnode6 -s stop -x SIGINT
 system sh/exec.sh -n dnode7 -s stop -x SIGINT
 system sh/exec.sh -n dnode8 -s stop -x SIGINT
@@ -160,9 +160,10 @@ function runPyCaseOneByOnefq {
 totalFailed=0
 totalPyFailed=0
 totalJDBCFailed=0
+totalUnitFailed=0

 corepath=`grep -oP '.*(?=core_)' /proc/sys/kernel/core_pattern||grep -oP '.*(?=core-)' /proc/sys/kernel/core_pattern`
-if [ "$2" != "jdbc" ] && [ "$2" != "python" ]; then
+if [ "$2" != "jdbc" ] && [ "$2" != "python" ] && [ "$2" != "unit" ]; then
   echo "### run TSIM test case ###"
   cd $tests_dir/script

@@ -231,7 +232,7 @@ if [ "$2" != "jdbc" ] && [ "$2" != "python" ]; then
   fi
 fi

-if [ "$2" != "sim" ] && [ "$2" != "jdbc" ] ; then
+if [ "$2" != "sim" ] && [ "$2" != "jdbc" ] && [ "$2" != "unit" ]; then
   echo "### run Python test case ###"

   cd $tests_dir
@@ -300,8 +301,8 @@ if [ "$2" != "sim" ] && [ "$2" != "jdbc" ] ; then
 fi


-if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$1" == "full" ]; then
+if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$2" != "unit" ] && [ "$1" == "full" ]; then
-  echo "### run JDBC test case ###"
+  echo "### run JDBC test cases ###"

   cd $tests_dir

@@ -318,7 +319,7 @@ if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$1" == "full" ]; then
   nohup build/bin/taosd -c /etc/taos/ > /dev/null 2>&1 &
   sleep 30

   cd $tests_dir/../src/connector/jdbc

   mvn test > jdbc-out.log 2>&1
   tail -n 20 jdbc-out.log
@@ -343,4 +344,40 @@ if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$1" == "full" ]; then
   dohavecore 1
 fi

-exit $(($totalFailed + $totalPyFailed + $totalJDBCFailed))
+if [ "$2" != "sim" ] && [ "$2" != "python" ] && [ "$2" != "jdbc" ] && [ "$1" == "full" ]; then
+  echo "### run Unit tests ###"
+
+  stopTaosd
+  cd $tests_dir
+
+  if [[ "$tests_dir" == *"$IN_TDINTERNAL"* ]]; then
+    cd ../../
+  else
+    cd ../
+  fi
+
+  pwd
+  cd debug/build/bin
+  nohup ./taosd -c /etc/taos/ > /dev/null 2>&1 &
+  sleep 30
+
+  pwd
+  ./queryTest > unittest-out.log 2>&1
+  tail -n 20 unittest-out.log
+
+  totalUnitTests=`grep "Running" unittest-out.log | awk '{print $3}'`
+  totalUnitSuccess=`grep 'PASSED' unittest-out.log | awk '{print $4}'`
+  totalUnitFailed=`expr $totalUnitTests - $totalUnitSuccess`
+
+  if [ "$totalUnitSuccess" -gt "0" ]; then
+    echo -e "\n${GREEN} ### Total $totalUnitSuccess Unit test succeed! ### ${NC}"
+  fi
+
+  if [ "$totalUnitFailed" -ne "0" ]; then
+    echo -e "\n${RED} ### Total $totalUnitFailed Unit test failed! ### ${NC}"
+  fi
+  dohavecore 1
+fi
+
+exit $(($totalFailed + $totalPyFailed + $totalJDBCFailed + $totalUnitFailed))
@@ -146,7 +146,7 @@ char *simGetVariable(SScript *script, char *varName, int32_t varLen) {
 int32_t simExecuteExpression(SScript *script, char *exp) {
   char *  op1, *op2, *var1, *var2, *var3, *rest;
   int32_t op1Len, op2Len, var1Len, var2Len, var3Len, val0, val1;
-  char    t0[512], t1[512], t2[512], t3[1024];
+  char    t0[1024], t1[1024], t2[1024], t3[2048];
   int32_t result;

   rest = paGetToken(exp, &var1, &var1Len);